Web Video Event Recognition by Semantic Analysis From Ubiquitous Documents.
Yu, Litao; Yang, Yang; Huang, Zi; Wang, Peng; Song, Jingkuan; Shen, Heng Tao
2016-12-01
In recent years, the task of event recognition from videos has attracted increasing interest in the multimedia area. While most existing research has focused on exploring visual cues to handle relatively small-granular events, it is difficult to directly analyze video content without any prior knowledge. Synthesizing visual and semantic analysis is therefore a natural way to approach video event understanding. In this paper, we study the problem of Web video event recognition, where Web videos often describe large-granular events and carry limited textual information. Key challenges include how to accurately represent event semantics from incomplete textual information and how to effectively explore the correlation between visual and textual cues for video event understanding. We propose a novel framework to perform complex event recognition from Web videos. To compensate for the insufficient expressive power of visual cues, we construct an event knowledge base by deeply mining semantic information from ubiquitous Web documents. This event knowledge base is capable of describing each event with comprehensive semantics. By utilizing this base, the textual cues for a video can be significantly enriched. Furthermore, we introduce a two-view adaptive regression model that explores the intrinsic correlation between the visual and textual cues of videos to learn reliable classifiers. Extensive experiments on two real-world video data sets show the effectiveness of the proposed framework and prove that the event knowledge base indeed helps improve the performance of Web video event recognition.
Scientific Visualization to Study Flux Transfer Events at the Community Coordinated Modeling Center
NASA Technical Reports Server (NTRS)
Rastatter, Lutz; Kuznetsova, Maria M.; Sibeck, David G.; Berrios, David H.
2011-01-01
In this paper we present results of modeling reconnection at the dayside magnetopause, with the subsequent development of flux transfer event signatures. The tools used include new methods that have been added to the suite of visualization methods used at the Community Coordinated Modeling Center (CCMC). Flux transfer events result from localized reconnection that connects magnetosheath magnetic field and plasma with magnetospheric fields and plasma, producing flux rope structures that span the dayside magnetopause. The onset of flux rope formation and the three-dimensional structure of flux ropes are studied as modeled by high-resolution magnetohydrodynamic simulations of the dayside magnetosphere of the Earth. We show that flux transfer events are complex three-dimensional structures that require modern visualization and analysis techniques. Two suites of visualization methods are presented, and we demonstrate their usefulness to the general science user through the CCMC web site.
Metusalem, Ross; Kutas, Marta; Urbach, Thomas P; Elman, Jeffrey L
2016-04-01
During incremental language comprehension, the brain activates knowledge of described events, including knowledge elements that constitute semantic anomalies in their linguistic context. The present study investigates hemispheric asymmetries in this process, with the aim of advancing our understanding of the neural basis and functional properties of event knowledge activation during incremental comprehension. In a visual half-field event-related brain potential (ERP) experiment, participants read brief discourses in which the third sentence contained a word that was either highly expected, semantically anomalous but related to the described event (Event-Related), or semantically anomalous but unrelated to the described event (Event-Unrelated). For both visual fields of target word presentation, semantically anomalous words elicited N400 ERP components of greater amplitude than did expected words. Crucially, Event-Related anomalous words elicited a reduced N400 relative to Event-Unrelated anomalous words only with left visual field/right hemisphere presentation. This result suggests that right hemisphere processes are critical to the activation of event knowledge elements that violate the linguistic context, and in doing so informs existing theories of hemispheric asymmetries in semantic processing during language comprehension. Additionally, this finding coincides with past research suggesting a crucial role for the right hemisphere in elaborative inference generation, raises interesting questions regarding hemispheric coordination in generating event-specific linguistic expectancies, and more generally highlights the possibility of functional dissociation of event knowledge activation for the generation of elaborative inferences and for linguistic expectancies. Copyright © 2016 Elsevier Ltd. All rights reserved.
Visual Indicators on Vaccine Boxes as Early Warning Tools to Identify Potential Freeze Damage.
Angoff, Ronald; Wood, Jillian; Chernock, Maria C; Tipping, Diane
2015-07-01
The aim of this study was to determine whether the use of visual freeze indicators on vaccines would assist health care providers in identifying vaccines that may have been exposed to potentially damaging temperatures. Twenty-seven sites in Connecticut involved in the Vaccine for Children Program participated. In addition to standard procedures, visual freeze indicators (FREEZEmarker® L; Temptime Corporation, Morris Plains, NJ) were affixed to each box of vaccine that required refrigeration but must not be frozen. Temperatures were monitored twice daily. During the 24 weeks, all 27 sites experienced triggered visual freeze indicator events in 40 of the 45 refrigerators. A total of 66 triggered freeze indicator events occurred in all 4 types of refrigerators used. Only 1 of the freeze events was identified by a temperature-monitoring device. Temperatures recorded on vaccine data logs before freeze indicator events were within the 35°F to 46°F (2°C to 8°C) range in all but 1 instance. A total of 46,954 doses of freeze-sensitive vaccine were stored at the time of a visual freeze indicator event. Triggered visual freeze indicators were found on boxes containing 6566 doses (14.0% of total doses). Of all doses stored, 14,323 doses (30.5%) were of highly freeze-sensitive vaccine; 1789 of these doses (12.5%) had triggered indicators on the boxes. Visual freeze indicators are useful in the early identification of freeze events involving vaccines. Consideration should be given to including these devices as a component of the temperature-monitoring system for vaccines.
Potheegadoo, Jevita; Berna, Fabrice; Cuervo-Lombard, Christine; Danion, Jean-Marie
2013-10-01
There is growing interest in clinical research regarding the visual perspective adopted during memory retrieval, because it reflects individuals' self-attitude towards their memories of past personal events. Several autobiographical memory deficits, including low specificity of personal memories, have been identified in schizophrenia, but visual perspective during autobiographical memory retrieval has not yet been investigated in patients. The aim of this study was therefore to investigate the visual perspective with which patients visualize themselves when recalling autobiographical memories, and to assess the specificity of their memories, which is a major determinant of visual perspective. Thirty patients with schizophrenia and 30 matched controls recalled personal events from 4 life periods. After each recall, they were asked to report the visual perspective (Field or Observer) associated with the event. The specificity of their memories was assessed by independent raters. Our results showed that patients reported significantly fewer Field perspectives than comparison participants. Patients' memories, whether recalled with Field or Observer perspectives, were less specific and less detailed. Our results indicate that patients with schizophrenia adopt Field perspectives less frequently than comparison participants, which may contribute to a weakened sense of being an actor in one's own past events, and hence to a reduced sense of self. They suggest that this may be related to the low specificity of memories and that all the important aspects involved in re-experiencing autobiographical events are impaired in patients with schizophrenia. © 2013 Elsevier B.V. All rights reserved.
Kim, Seokyeon; Jeong, Seongmin; Woo, Insoo; Jang, Yun; Maciejewski, Ross; Ebert, David S
2018-03-01
Geographic visualization research has focused on a variety of techniques to represent and explore spatiotemporal data. The goal of these techniques is to enable users to explore events and interactions over space and time in order to facilitate the discovery of patterns, anomalies, and relationships within the data. However, it is difficult to extract and visualize data flow patterns over time for non-directional statistical data without trajectory information. In this work, we develop a novel flow analysis technique to extract, represent, and analyze flow maps of non-directional spatiotemporal data unaccompanied by trajectory information. We estimate a continuous distribution of these events over space and time, and extract flow fields for spatial and temporal changes utilizing a gravity model. We then visualize the spatiotemporal patterns in the data by employing flow visualization techniques. The user is presented with temporal trends of geo-referenced discrete events on a map. As such, overall spatiotemporal data flow patterns help users analyze geo-referenced temporal events, such as disease outbreaks and crime patterns. To validate our model, we discard the trajectory information in an origin-destination dataset, apply our technique to the data, and compare the derived trajectories with the originals. Finally, we present spatiotemporal trend analysis for statistical datasets including Twitter data, maritime search and rescue events, and syndromic surveillance.
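The gravity-model step above can be illustrated with a small sketch (our own simplification, not the authors' code): each grid cell receives a flow vector obtained by summing inverse-square attractions toward every cell that contains event mass, so flow points toward concentrations of events.

```python
# Hypothetical sketch of the gravity-model flow idea: non-directional
# event counts on a grid are turned into a flow vector at each cell by
# summing inverse-square "attractions" toward every other cell.
import math

def gravity_flow(counts):
    """counts: 2D list of event counts; returns a same-shaped grid of
    (fx, fy) flow vectors pointing toward event mass."""
    rows, cols = len(counts), len(counts[0])
    flow = [[(0.0, 0.0)] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            fx = fy = 0.0
            for j in range(rows):
                for i in range(cols):
                    if (i, j) == (x, y) or counts[j][i] == 0:
                        continue
                    dx, dy = i - x, j - y
                    d2 = dx * dx + dy * dy
                    w = counts[j][i] / d2        # inverse-square weight
                    d = math.sqrt(d2)
                    fx += w * dx / d             # unit direction * weight
                    fy += w * dy / d
            flow[y][x] = (fx, fy)
    return flow

# A single event mass at the grid centre pulls every other cell toward it.
grid = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
f = gravity_flow(grid)
```

In the paper the density is continuous (estimated over space and time) rather than a coarse grid, but the directional structure of the resulting field is the same idea.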
Visualizing protein interactions and dynamics: evolving a visual language for molecular animation.
Jenkinson, Jodie; McGill, Gaël
2012-01-01
Undergraduate biology education provides students with a number of learning challenges. Subject areas that are particularly difficult to understand include protein conformational change and stability, diffusion and random molecular motion, and molecular crowding. In this study, we examined the relative effectiveness of three-dimensional visualization techniques for learning about protein conformation and molecular motion in association with a ligand-receptor binding event. Increasingly complex versions of the same binding event were depicted in each of four animated treatments. Students (n = 131) were recruited from the undergraduate biology program at University of Toronto, Mississauga. Visualization media were developed in the Center for Molecular and Cellular Dynamics at Harvard Medical School. Stem cell factor ligand and cKit receptor tyrosine kinase were used as a classical example of a ligand-induced receptor dimerization and activation event. Each group completed a pretest, viewed one of four variants of the animation, and completed a posttest and, at 2 wk following the assessment, a delayed posttest. Overall, the most complex animation was the most effective at fostering students' understanding of the events depicted. These results suggest that, in select learning contexts, increasingly complex representations may be more desirable for conveying the dynamic nature of cell binding events.
Overview of EVE - the event visualization environment of ROOT
NASA Astrophysics Data System (ADS)
Tadel, Matevž
2010-04-01
EVE is a high-level visualization library using ROOT's data-processing, GUI, and OpenGL interfaces. It is designed as a framework for object management, offering hierarchical data organization, object interaction, and visualization via GUI and OpenGL representations. Automatic creation of 2D projected views is also supported. On the other hand, it can serve as an event visualization toolkit satisfying most HEP requirements: visualization of geometry, and of simulated and reconstructed data such as hits, clusters, tracks, and calorimeter information. Special classes are available for visualization of raw data. The object-interaction layer allows for easy selection and highlighting of objects and their derived representations (projections) across several views (3D, Rho-Z, R-Phi). Object-specific tooltips are provided in both GUI and GL views. The visual-configuration layer of EVE is built around a database of template objects that can be applied to specific instances of visualization objects to ensure consistent object presentation. The database can be retrieved from a file, edited during framework operation, and stored back to file. The EVE prototype was developed within the ALICE collaboration and was included in ROOT in December 2007. Since then all EVE components have reached maturity. EVE is used as the base of the AliEve visualization framework in ALICE, of the Fireworks physics-oriented event display in CMS, and as the visualization engine of FairRoot in FAIR.
Eventogram: A Visual Representation of Main Events in Biomedical Signals.
Elgendi, Mohamed
2016-09-22
Biomedical signals carry valuable physiological information, yet many researchers have difficulty interpreting and analyzing long-term, one-dimensional, quasi-periodic biomedical signals. Traditionally, biomedical signals are analyzed and visualized using periodogram, spectrogram, and wavelet methods. However, these methods do not offer an informative visualization of the main events within the processed signal. This paper provides an event-related framework to overcome the drawbacks of the traditional visualization methods and to describe the main events within a biomedical signal in terms of duration and morphology. Electrocardiogram and photoplethysmogram signals are used in the analysis to demonstrate the differences between the traditional visualization methods, and their performance is compared against the proposed method, referred to here as the "eventogram". The proposed method is based on two event-related moving averages that visualize the main time-domain events in the processed biomedical signal. The traditional visualization methods were unable to find dominant events in the processed signals, while the eventogram was able to visualize dominant events in terms of duration and morphology. Moreover, eventogram-based detection algorithms succeeded in detecting the main events in different biomedical signals with a sensitivity and positive predictivity >95%. The output of the eventogram captured unique patterns and signatures of physiological events, which could be used to visualize and identify abnormal waveforms in any quasi-periodic signal.
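The two-moving-average principle behind the eventogram can be sketched as follows; the window lengths here are arbitrary assumptions rather than the paper's tuned values, and the full eventogram sweeps many window pairs to build its duration/morphology map. Samples where a short-window average exceeds a long-window average mark candidate events:

```python
# Minimal sketch of event detection with two event-related moving
# averages (window sizes w_event < w_cycle are illustrative assumptions).
def moving_average(x, w):
    half = w // 2
    return [sum(x[max(0, i - half):i + half + 1]) /
            len(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

def detect_events(signal, w_event=3, w_cycle=7):
    """Mark samples where the short-window average exceeds the
    long-window average -- candidate event regions."""
    ma_e = moving_average(signal, w_event)
    ma_c = moving_average(signal, w_cycle)
    return [e > c for e, c in zip(ma_e, ma_c)]

# A flat signal with one bump: only the bump region is flagged.
sig = [0, 0, 0, 0, 5, 9, 5, 0, 0, 0, 0]
mask = detect_events(sig)
```

On a quasi-periodic signal such as an ECG, choosing the short window near the QRS duration and the long window near the beat duration makes the flagged regions line up with individual beats.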
Noninvasive studies of human visual cortex using neuromagnetic techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aine, C.J.; George, J.S.; Supek, S.
1990-01-01
The major goals of noninvasive studies of the human visual cortex are: to increase knowledge of the functional organization of cortical visual pathways; and to develop noninvasive clinical tests for the assessment of cortical function. Noninvasive techniques suitable for studies of the structure and function of human visual cortex include magnetic resonance imaging (MRI), positron emission tomography (PET), single photon emission tomography (SPECT), scalp-recorded event-related potentials (ERPs), and event-related magnetic fields (ERFs). The primary challenge faced by noninvasive functional measures is to optimize the spatial and temporal resolution of the measurement and analytic techniques in order to effectively characterize the spatial and temporal variations in patterns of neuronal activity. In this paper we review the use of neuromagnetic techniques for this purpose. 8 refs., 3 figs.
Toward a hybrid brain-computer interface based on repetitive visual stimuli with missing events.
Wu, Yingying; Li, Man; Wang, Jing
2016-07-26
Steady-state visually evoked potentials (SSVEPs) can be elicited by repetitive stimuli and extracted in the frequency domain with satisfactory performance. However, the temporal information of such stimuli is often ignored. In this study, we utilized repetitive visual stimuli with missing events to present a novel hybrid BCI paradigm based on the SSVEP and the omitted stimulus potential (OSP). Four discs flickering from black to white, with occasional flickers omitted, served as visual stimulators to simultaneously elicit subjects' SSVEPs and OSPs. Key parameters in the new paradigm, including flicker frequency, optimal electrodes, missing-flicker duration, and intervals between missing events, were qualitatively discussed with offline data. Two omitted-flicker patterns, a missing black disc and a missing white disc, were proposed and compared. Averaging times were optimized for Information Transfer Rate (ITR) in online experiments, where SSVEPs and OSPs were identified using Canonical Correlation Analysis in the frequency domain and Support Vector Machine (SVM)-Bayes fusion in the time domain, respectively. The online accuracy and ITR (mean ± standard deviation) over nine healthy subjects were 79.29 ± 18.14% and 19.45 ± 11.99 bits/min with the missing-black-disc pattern, and 86.82 ± 12.91% and 24.06 ± 10.95 bits/min with the missing-white-disc pattern. The proposed BCI paradigm demonstrated, for the first time, that SSVEPs and OSPs can be simultaneously elicited by a single visual stimulus pattern and recognized in real time with satisfactory performance. Besides frequency features such as the SSVEP elicited by repetitive stimuli, we found a new time-domain feature (the OSP) with which to design a novel hybrid BCI paradigm by adding missing events to repetitive stimuli.
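The frequency-domain identification step can be sketched as follows. Full Canonical Correlation Analysis requires a generalized eigensolver; as a simplified single-channel stand-in (an assumption, not the paper's implementation), this toy scores each candidate flicker frequency by its correlation with sine/cosine references and picks the best match:

```python
# Toy SSVEP frequency classification: correlate the recorded signal
# against sinusoidal references at each candidate flicker frequency.
import math

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def classify_ssvep(signal, freqs, fs):
    """Return the candidate frequency whose sine/cosine references best
    match the signal (max absolute correlation over both phases)."""
    n = len(signal)
    best_f, best_r = None, -1.0
    for f in freqs:
        ref_sin = [math.sin(2 * math.pi * f * i / fs) for i in range(n)]
        ref_cos = [math.cos(2 * math.pi * f * i / fs) for i in range(n)]
        r = max(abs(corr(signal, ref_sin)), abs(corr(signal, ref_cos)))
        if r > best_r:
            best_f, best_r = f, r
    return best_f

fs = 250  # assumed sampling rate in Hz
sig = [math.sin(2 * math.pi * 10 * i / fs) for i in range(fs)]  # clean 10 Hz SSVEP
```

CCA proper extends this by finding the linear combination of multiple EEG channels and multiple reference harmonics that maximizes correlation, which is why it tolerates noise far better than this single-channel version.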
Nardo, Davide; Console, Paola; Reverberi, Carlo; Macaluso, Emiliano
2016-01-01
In daily life the brain is exposed to a large amount of external signals that compete for processing resources. The attentional system can select relevant information based on many possible combinations of goal-directed and stimulus-driven control signals. Here, we investigate the behavioral and physiological effects of competition between distinctive visual events during free-viewing of naturalistic videos. Nineteen healthy subjects underwent functional magnetic resonance imaging (fMRI) while viewing short video-clips of everyday life situations, without any explicit goal-directed task. Each video contained either a single semantically-relevant event on the left or right side (Lat-trials), or multiple distinctive events in both hemifields (Multi-trials). For each video, we computed a salience index to quantify the lateralization bias due to stimulus-driven signals, and a gaze index (based on eye-tracking data) to quantify the efficacy of the stimuli in capturing attention to either side. Behaviorally, our results showed that stimulus-driven salience influenced spatial orienting only in presence of multiple competing events (Multi-trials). fMRI results showed that the processing of competing events engaged the ventral attention network, including the right temporoparietal junction (R TPJ) and the right inferior frontal cortex. Salience was found to modulate activity in the visual cortex, but only in the presence of competing events; while the orienting efficacy of Multi-trials affected activity in both the visual cortex and posterior parietal cortex (PPC). We conclude that in presence of multiple competing events, the ventral attention system detects semantically-relevant events, while regions of the dorsal system make use of saliency signals to select relevant locations and guide spatial orienting. PMID:27445760
Lebib, Riadh; Papo, David; de Bode, Stella; Baudonnière, Pierre Marie
2003-05-08
We investigated the existence of cross-modal sensory gating reflected by the modulation of an early electrophysiological index, the P50 component. We analyzed event-related brain potentials elicited by audiovisual speech stimuli manipulated along two dimensions: congruency and discriminability. The results showed that the P50 was attenuated when visual and auditory speech information were redundant (i.e., congruent), in comparison with the same event-related potential component elicited by discrepant audiovisual dubbing. When hard to discriminate, however, bimodal incongruent speech stimuli elicited a similar pattern of P50 attenuation. We concluded that a visual-to-auditory cross-modal sensory gating phenomenon exists. These results corroborate previous findings of very early audiovisual interaction during speech perception. Finally, we postulated that the sensory gating system includes a cross-modal dimension.
Visualization of Traffic Accidents
NASA Technical Reports Server (NTRS)
Wang, Jie; Shen, Yuzhong; Khattak, Asad
2010-01-01
Traffic accidents have a tremendous impact on society. Annually, approximately 6.4 million vehicle accidents are reported by police in the US, and nearly half of them result in catastrophic injuries. Visualization of traffic accidents using geographic information systems (GIS) greatly facilitates their handling and analysis in many respects. Environmental Systems Research Institute (ESRI), Inc. is the world leader in GIS research and development. ArcGIS, a software package developed by ESRI, can display events associated with a road network, such as accident locations and pavement quality. But when event locations related to a road network are processed, the existing algorithm used by ArcGIS does not utilize all the information related to the routes of the road network and produces erroneous visualization results for event locations. This software bug causes serious problems for applications in which accurate location information is critical for emergency response, such as traffic accidents. This paper addresses this problem and proposes an improved method that utilizes all relevant information about traffic accidents, namely route number, direction, and milepost, and extracts correct event locations for accurate traffic accident visualization and analysis. The proposed method generates a new shapefile for traffic accidents and displays them on top of the existing road network in ArcGIS. Visualization of traffic accidents along the Hampton Roads Bridge-Tunnel is included to demonstrate the effectiveness of the proposed method.
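Placing an event from route number, direction, and milepost is an instance of linear referencing. A minimal sketch of the interpolation involved (the route geometry and milepost values here are invented for illustration, not taken from the paper's data):

```python
# Hypothetical linear-referencing sketch: place an accident on a route
# polyline from its milepost, interpolating between surveyed vertices.
def locate_event(route, milepost):
    """route: list of (milepost, x, y) vertices sorted by milepost.
    Returns the interpolated (x, y) for the event's milepost."""
    for (mp0, x0, y0), (mp1, x1, y1) in zip(route, route[1:]):
        if mp0 <= milepost <= mp1:
            t = (milepost - mp0) / (mp1 - mp0)   # fraction along segment
            return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
    raise ValueError("milepost outside route extent")

# A made-up route with three surveyed vertices at mileposts 0, 2, and 5.
route_64 = [(0.0, 100.0, 200.0), (2.0, 120.0, 200.0), (5.0, 120.0, 230.0)]
pt = locate_event(route_64, 3.5)   # halfway along the second segment
```

The paper's point is that all three attributes matter: keying only on milepost without route number and direction can snap an accident onto the wrong carriageway or the wrong route entirely.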
Payal, Abhishek R; Gonzalez-Gonzalez, Luis A; Chen, Xi; Cakiner-Egilmez, Tulay; Chomsky, Amy; Baze, Elizabeth; Vollman, David; Lawrence, Mary G; Daly, Mary K
2016-03-01
To explore visual outcomes, functional visual improvement, and events in resident-operated cataract surgery cases. Veterans Affairs Ophthalmic Surgery Outcomes Database Project across 5 Veterans Affairs Medical Centers. Retrospective data analysis of deidentified data. Cataract surgery cases with residents as primary surgeons were analyzed for logMAR corrected distance visual acuity (CDVA) and vision-related quality of life (VRQL) measured by the modified National Eye Institute Vision Function Questionnaire and 30 intraoperative and postoperative events. In some analyses, cases without events (Group A) were compared with cases with events (Group B). The study included 4221 cataract surgery cases. Preoperative to postoperative CDVA improved significantly in both groups (P < .0001), although the level of improvement was less in Group B (P = .03). A CDVA of 20/40 or better was achieved in 96.64% in Group A and 88.25% in Group B (P < .0001); however, Group B had a higher prevalence of preoperative ocular comorbidities (P < .0001). Cases with 1 or more events were associated with a higher likelihood of a postoperative CDVA worse than 20/40 (odds ratio, 3.82; 95% confidence interval, 2.92-5.05; P < .0001) than those who did not experience an event. Both groups had a significant increase in VRQL from preoperative levels (both P < .0001); however, the level of preoperative to postoperative VRQL improvement was significantly less in Group B (P < .0001). Resident-operated cases with and without events had an overall significant improvement in visual acuity and visual function compared with preoperatively, although this improvement was less marked in those that had an event. None of the authors has a financial or proprietary interest in any material or method mentioned. Copyright © 2016 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Learning rational temporal eye movement strategies.
Hoppe, David; Rothkopf, Constantin A
2016-07-19
During active behavior humans redirect their gaze several times every second within the visual environment. Where we look within static images is highly efficient, as quantified by computational models of human gaze shifts in visual search and face recognition tasks. However, when we shift gaze is mostly unknown despite its fundamental importance for survival in a dynamic world. It has been suggested that during naturalistic visuomotor behavior gaze deployment is coordinated with task-relevant events, often predictive of future events, and studies in sportsmen suggest that timing of eye movements is learned. Here we establish that humans efficiently learn to adjust the timing of eye movements in response to environmental regularities when monitoring locations in the visual scene to detect probabilistically occurring events. To detect the events humans adopt strategies that can be understood through a computational model that includes perceptual and acting uncertainties, a minimal processing time, and, crucially, the intrinsic costs of gaze behavior. Thus, subjects traded off event detection rate with behavioral costs of carrying out eye movements. Remarkably, based on this rational bounded actor model the time course of learning the gaze strategies is fully explained by an optimal Bayesian learner with humans' characteristic uncertainty in time estimation, the well-known scalar law of biological timing. Taken together, these findings establish that the human visual system is highly efficient in learning temporal regularities in the environment and that it can use these regularities to control the timing of eye movements to detect behaviorally relevant events.
77 FR 16688 - Review of the Emergency Alert System
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-22
... the approximately three and one half-year window it is providing for intermediary device users is..., including the originator, event, location and the valid time period of the EAS message, from the CAP text... event, which it believes would provide more visual information to alert message viewers. The Commission...
Bailey, C; Chakravarthy, U; Lotery, A; Menon, G; Talks, J; Bailey, Clare; Kamal, Aintree; Ghanchi, Faruque; Khan, Calderdale; Johnston, Robert; McKibbin, Martin; Varma, Atul; Mustaq, Bushra; Brand, Christopher; Talks, James; Glover, Nick
2017-01-01
Aims: To compare safety outcomes and visual function data acquired in the real-world setting with FAME study results in eyes treated with 0.2 μg/day fluocinolone acetonide (FAc). Methods: Fourteen UK clinical sites contributed pseudoanonymised data collected using the same electronic medical record system. Data pertaining to eyes treated with the FAc implant for diabetic macular oedema (DMO) were extracted. Intraocular pressure (IOP)-related adverse events were defined as use of IOP-lowering medication, any rise in IOP >30 mm Hg, or glaucoma surgery. Other measured outcomes included visual acuity, central subfield thickness (CSFT) changes, and use of concomitant medications. Results: In total, 345 eyes had a mean follow-up of 428 days. Overall, 13.9% of patients required IOP-lowering drops (including initiation, addition, and switching of current drops), 7.2% had IOP elevation >30 mm Hg, and 0.3% required glaucoma surgery. In patients with prior steroid exposure and no prior IOP-related event, there were no new IOP-related events. In patients without prior steroid use and without prior IOP-related events, 10.3% of eyes required IOP-lowering medication and 4.3% exhibited IOP >30 mm Hg at some point during follow-up. At 24 months, mean best-recorded visual acuity increased from 51.9 to 57.2 letters and 20.8% achieved a ≥15-letter improvement. Mean CSFT reduced from 451.2 to 355.5 μm. Conclusions: While overall IOP-related emergent events were observed at a similar frequency to FAME, no adverse events were seen in the subgroup with prior steroid exposure and no prior IOP events. The efficacy findings confirm that the FAc implant is a useful treatment option for chronic DMO. PMID:28737758
Cohn, Neil; Paczynski, Martin
2013-01-01
Agents consistently appear prior to Patients in sentences, manual signs, and drawings, and Agents are responded to faster when presented in visual depictions of events. We hypothesized that this “Agent advantage” reflects Agents’ role in event structure. We investigated this question by manipulating the depictions of Agents and Patients in preparatory actions in a wordless visual narrative. We found that Agents elicited a greater degree of predictions regarding upcoming events than Patients, that Agents are viewed longer than Patients, independent of serial order, and that visual depictions of actions are processed more quickly following the presentation of an Agent versus a Patient. Taken together these findings support the notion that Agents initiate the building of event representation. We suggest that Agent First orders facilitate the interpretation of events as they unfold and that the saliency of Agents within visual representations of events is driven by anticipation of upcoming events. PMID:23959023
Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.
Stone, Scott A; Tata, Matthew S
2017-01-01
Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting and augmenting visual salient events into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as determine direction of motion for a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well accurately encoding direction of visual motion. Future successes are probable as neuromorphic devices are likely to become faster and smaller in the future, making this system much more feasible.
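The core operation of DAVIS-style neuromorphic sensors, emitting an event wherever log-brightness changes exceed a threshold, can be sketched in a few lines. This is a hedged illustration only, not the authors' pipeline: `detect_events`, the threshold value, and the frame-differencing formulation are our assumptions (real sensors emit asynchronous per-pixel events rather than comparing whole frames).

```python
import numpy as np

def detect_events(prev_frame, frame, threshold=0.2, eps=1e-6):
    """Emit DVS-style events where the log-brightness change exceeds a
    threshold. Returns rows of (row, col, polarity): polarity +1 for
    brightening pixels, -1 for dimming pixels."""
    dlog = np.log(frame + eps) - np.log(prev_frame + eps)
    rows, cols = np.nonzero(np.abs(dlog) > threshold)
    polarity = np.sign(dlog[rows, cols]).astype(int)
    return np.column_stack([rows, cols, polarity])

# A new bright object appearing in an otherwise static scene:
prev = np.full((4, 4), 10.0)
curr = prev.copy()
curr[1, 2] = 30.0   # salient brightening at one pixel
events = detect_events(prev, curr)
```

Static pixels produce no events at all, which is why such sensors naturally isolate salient changes for sonification.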
Antón, Alfonso; Pazos, Marta; Martín, Belén; Navero, José Manuel; Ayala, Miriam Eleonora; Castany, Marta; Martínez, Patricia; Bardavío, Javier
2013-01-01
To assess sensitivity, specificity, and agreement among automated event analysis, automated trend analysis, and expert evaluation to detect glaucoma progression. This was a prospective study that included 37 eyes with a follow-up of 36 months. All had glaucomatous disks and fields and performed reliable visual fields every 6 months. Each series of fields was assessed with 3 different methods: subjective assessment by 2 independent teams of glaucoma experts, Guided Progression Analysis (GPA) event analysis, and GPA (visual field index-based) trend analysis. Kappa agreement coefficient between methods and sensitivity and specificity for each method using expert opinion as gold standard were calculated. The incidence of glaucoma progression was 16% to 18% in 3 years but only 3 cases showed progression with all 3 methods. Kappa agreement coefficient was high (k=0.82) between subjective expert assessment and GPA event analysis, and only moderate between these two and GPA trend analysis (k=0.57). Sensitivity and specificity for GPA event and GPA trend analysis were 71% and 96%, and 57% and 93%, respectively. The 3 methods detected similar numbers of progressing cases. The GPA event analysis and expert subjective assessment showed high agreement between them and moderate agreement with GPA trend analysis. In a period of 3 years, both methods of GPA analysis offered high specificity, event analysis showed 83% sensitivity, and trend analysis had a 66% sensitivity.
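The agreement and accuracy statistics used in studies like this are straightforward to compute. Below is a minimal sketch for two binary raters; the function names and the toy ratings are illustrative, not the study's data.

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters (1 = progression, 0 = stable)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    pe = ((sum(a) / n) * (sum(b) / n)
          + (1 - sum(a) / n) * (1 - sum(b) / n))        # chance agreement
    return (po - pe) / (1 - pe)

def sensitivity_specificity(pred, gold):
    """Sensitivity and specificity of `pred` against a gold standard."""
    tp = sum(1 for p, g in zip(pred, gold) if p and g)
    tn = sum(1 for p, g in zip(pred, gold) if not p and not g)
    return tp / sum(gold), tn / (len(gold) - sum(gold))

# Toy series of 7 eyes; gold standard = expert opinion
gold = [1, 1, 1, 0, 0, 0, 0]
gpa_event = [1, 1, 0, 0, 0, 0, 0]
kappa = cohens_kappa(gold, gpa_event)
sens, spec = sensitivity_specificity(gpa_event, gold)
```

Kappa corrects raw agreement for agreement expected by chance, which is why two methods with high raw agreement can still show only a moderate kappa when progression is rare.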
Filling in the Gaps: Memory Implications for Inferring Missing Content in Graphic Narratives
ERIC Educational Resources Information Center
Magliano, Joseph P.; Kopp, Kristopher; Higgs, Karyn; Rapp, David N.
2017-01-01
Visual narratives, including graphic novels, illustrated instructions, and picture books, convey event sequences constituting a plot but cannot depict all events that make up the plot. Viewers must generate inferences that fill the gaps between explicitly shown images. This study explored the inferential products and memory implications of…
Research on Visual Analysis Methods of Terrorism Events
NASA Astrophysics Data System (ADS)
Guo, Wenyue; Liu, Haiyan; Yu, Anzhu; Li, Jing
2016-06-01
Given that terrorism events occur more and more frequently throughout the world, improving the response capability to social security incidents has become an important test of a government's ability to govern. Visual analysis has become an important method of event analysis because of its intuitiveness and effectiveness. To analyse events' spatio-temporal distribution characteristics, the correlations among event items, and development trends, terrorism events' spatio-temporal characteristics are discussed. A suitable event data table structure based on the "5W" theory is designed. Then, six types of visual analysis are proposed, and the use of thematic maps and statistical charts to realize visual analysis of terrorism events is studied. Finally, experiments were carried out using data provided by the Global Terrorism Database, and the results prove the availability of the methods.
VizieR Online Data Catalog: OGLE-II DIA microlensing events (Wozniak+, 2001)
NASA Astrophysics Data System (ADS)
Wozniak, P. R.; Udalski, A.; Szymanski, M.; Kubiak, M.; Pietrzynski, G.; Soszynski, I.; Zebrun, K.
2002-11-01
We present a sample of microlensing events discovered in the Difference Image Analysis (DIA) of the OGLE-II images collected during three observing seasons, 1997-1999. 4424 light curves pass our criteria on the presence of a brightening episode on top of a constant baseline. Among those, 512 candidate microlensing events were selected visually. We designed an automated procedure, which unambiguously selects up to 237 best events. Including eight candidate events recovered by other means, a total of 520 light curves are presented in this work. (4 data files).
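A toy version of the first selection cut, flagging a brightening episode on top of a constant baseline, might look like the following. The robust-scatter estimate and run-length criterion here are our illustrative choices, not OGLE's actual cuts.

```python
import numpy as np

def has_brightening_episode(flux, n_consecutive=3, k_sigma=3.0):
    """Flag a light curve showing a brightening episode on top of a
    constant baseline: at least `n_consecutive` successive points more
    than `k_sigma` robust standard deviations above the median."""
    baseline = np.median(flux)
    sigma = 1.4826 * np.median(np.abs(flux - baseline))  # MAD-based scatter
    above = flux > baseline + k_sigma * sigma
    run = 0
    for flag in above:
        run = run + 1 if flag else 0
        if run >= n_consecutive:
            return True
    return False

# A flat noisy curve vs. one with a 4-point microlensing-like brightening
flat = 1.0 + 0.01 * np.array([(-1.0) ** i for i in range(50)])
bumped = flat.copy()
bumped[20:24] += 1.0
```

Using the median and MAD rather than the mean and standard deviation keeps the baseline estimate from being pulled upward by the brightening episode itself.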
3D Simulation of External Flooding Events for the RISMC Pathway
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prescott, Steven; Mandelli, Diego; Sampath, Ramprasad
2015-09-01
Incorporating 3D simulations as part of the Risk-Informed Safety Margins Characterization (RISMC) Toolkit allows analysts to obtain a more complete picture of complex system behavior for events including external plant hazards. External events such as flooding have become more important recently; however, these can be analyzed with existing and validated simulated physics toolkits. In this report, we describe these approaches specific to flooding-based analysis using an approach called Smoothed Particle Hydrodynamics. The theory, validation, and example applications of the 3D flooding simulation are described. Integrating these 3D simulation methods into computational risk analysis provides a spatial/visual aspect to the design, improves the realism of results, and can provide visual understanding to validate the analysis of flooding.
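Smoothed Particle Hydrodynamics approximates field quantities as kernel-weighted sums over neighboring particles. The density-summation sketch below uses the standard 2-D cubic spline kernel; it is a generic textbook illustration, not code from the RISMC toolkit.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 2-D cubic spline SPH smoothing kernel W(r, h)."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h ** 2)   # 2-D normalization constant
    w = np.where(q < 1.0, 1 - 1.5 * q ** 2 + 0.75 * q ** 3,
                 np.where(q < 2.0, 0.25 * (2 - q) ** 3, 0.0))
    return sigma * w

def sph_density(positions, masses, h):
    """Density at each particle via the SPH summation
    rho_i = sum_j m_j * W(|r_i - r_j|, h)."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)
```

Because the kernel has compact support (it vanishes beyond 2h), production codes restrict the sum to near neighbors; the all-pairs form above is only for clarity.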
Visual adaptation and novelty responses in the superior colliculus
Boehnke, Susan E.; Berg, David J.; Marino, Robert M.; Baldi, Pierre F.; Itti, Laurent; Munoz, Douglas P.
2011-01-01
The brain's ability to ignore repeating, often redundant, information while enhancing novel information processing is paramount to survival. When stimuli are repeatedly presented, the response of visually-sensitive neurons decreases in magnitude, i.e. neurons adapt or habituate, although the mechanism is not yet known. We monitored activity of visual neurons in the superior colliculus (SC) of rhesus monkeys who actively fixated while repeated visual events were presented. We dissociated adaptation from habituation as mechanisms of the response decrement by using a Bayesian model of adaptation, and by employing a paradigm including rare trials that included an oddball stimulus that was either brighter or dimmer. If the mechanism is adaptation, response recovery should be seen only for the brighter stimulus; if habituation, response recovery (‘dishabituation’) should be seen for both the brighter and dimmer stimulus. We observed a reduction in the magnitude of the initial transient response and an increase in response onset latency with stimulus repetition for all visually responsive neurons in the SC. Response decrement was successfully captured by the adaptation model which also predicted the effects of presentation rate and rare luminance changes. However, in a subset of neurons with sustained activity to visual stimuli, a novelty signal akin to dishabituation was observed late in the visual response profile to both brighter and dimmer stimuli and was not captured by the model. This suggests that SC neurons integrate both rapidly discounted information about repeating stimuli and novelty information about oddball events, to support efficient selection in a cluttered dynamic world. PMID:21864319
Audio in Courseware: Design Knowledge Issues.
ERIC Educational Resources Information Center
Aarntzen, Diana
1993-01-01
Considers issues that need to be addressed when incorporating audio in courseware design. Topics discussed include functions of audio in courseware; the relationship between auditive and visual information; learner characteristics in relation to audio; events of instruction; and audio characteristics, including interactivity and speech technology.…
The Pivotal Role of the Right Parietal Lobe in Temporal Attention.
Agosta, Sara; Magnago, Denise; Tyler, Sarah; Grossman, Emily; Galante, Emanuela; Ferraro, Francesco; Mazzini, Nunzia; Miceli, Gabriele; Battelli, Lorella
2017-05-01
The visual system is extremely efficient at detecting events across time even at very fast presentation rates; however, discriminating the identity of those events is much slower and requires attention over time, a mechanism with a much coarser resolution [Cavanagh, P., Battelli, L., & Holcombe, A. O. Dynamic attention. In A. C. Nobre & S. Kastner (Eds.), The Oxford handbook of attention (pp. 652-675). Oxford: Oxford University Press, 2013]. Patients affected by right parietal lesion, including the TPJ, are severely impaired in discriminating events across time in both visual fields [Battelli, L., Cavanagh, P., & Thornton, I. M. Perception of biological motion in parietal patients. Neuropsychologia, 41, 1808-1816, 2003]. One way to test this ability is to use a simultaneity judgment task, whereby participants are asked to indicate whether two events occurred simultaneously or not. We psychophysically varied the frequency rate of four flickering disks, and on most of the trials, one disk (either in the left or right visual field) was flickering out-of-phase relative to the others. We asked participants to report whether two left-or-right-presented disks were simultaneous or not. We tested a total of 23 right and left parietal lesion patients in Experiment 1, and only right parietal patients showed impairment in both visual fields while their low-level visual functions were normal. Importantly, to causally link the right TPJ to the relative timing processing, we ran a TMS experiment on healthy participants. Participants underwent three stimulation sessions and performed the same simultaneity judgment task before and after 20 min of low-frequency inhibitory TMS over right TPJ, left TPJ, or early visual area as a control. rTMS over the right TPJ caused a bilateral impairment in the simultaneity judgment task, whereas rTMS over left TPJ or over early visual area did not affect performance. 
Altogether, our results directly link the right TPJ to the processing of relative time.
Remembering from any angle: The flexibility of visual perspective during retrieval
Rice, Heather J.; Rubin, David C.
2010-01-01
When recalling autobiographical memories, individuals often experience visual images associated with the event. These images can be constructed from two different perspectives: first person, in which the event is visualized from the viewpoint experienced at encoding, or third person, in which the event is visualized from an external vantage point. Using a novel technique to measure visual perspective, we examined where the external vantage point is situated in third-person images. Individuals in two studies were asked to recall either 10 or 15 events from their lives and describe the perspectives they experienced. Wide variation in spatial locations was observed within third-person perspectives, with the location of these perspectives depending on the event being recalled. Results suggest remembering from an external viewpoint may be more common than previous studies have demonstrated. PMID:21109466
EventThread: Visual Summarization and Stage Analysis of Event Sequence Data.
Guo, Shunan; Xu, Ke; Zhao, Rongwen; Gotz, David; Zha, Hongyuan; Cao, Nan
2018-01-01
Event sequence data such as electronic health records, a person's academic records, or car service records, are ordered series of events which have occurred over a period of time. Analyzing collections of event sequences can reveal common or semantically important sequential patterns. For example, event sequence analysis might reveal frequently used care plans for treating a disease, typical publishing patterns of professors, and the patterns of service that result in a well-maintained car. It is challenging, however, to visually explore large numbers of event sequences, or sequences with large numbers of event types. Existing methods focus on extracting explicitly matching patterns of events using statistical analysis to create stages of event progression over time. However, these methods fail to capture latent clusters of similar but not identical evolutions of event sequences. In this paper, we introduce a novel visualization system named EventThread which clusters event sequences into threads based on tensor analysis and visualizes the latent stage categories and evolution patterns by interactively grouping the threads by similarity into time-specific clusters. We demonstrate the effectiveness of EventThread through usage scenarios in three different application domains and via interviews with an expert user.
Hierarchical event selection for video storyboards with a case study on snooker video visualization.
Parry, Matthew L; Legg, Philip A; Chung, David H S; Griffiths, Iwan W; Chen, Min
2011-12-01
Video storyboard, which is a form of video visualization, summarizes the major events in a video using illustrative visualization. There are three main technical challenges in creating a video storyboard, (a) event classification, (b) event selection and (c) event illustration. Among these challenges, (a) is highly application-dependent and requires a significant amount of application specific semantics to be encoded in a system or manually specified by users. This paper focuses on challenges (b) and (c). In particular, we present a framework for hierarchical event representation, and an importance-based selection algorithm for supporting the creation of a video storyboard from a video. We consider the storyboard to be an event summarization for the whole video, whilst each individual illustration on the board is also an event summarization but for a smaller time window. We utilized a 3D visualization template for depicting and annotating events in illustrations. To demonstrate the concepts and algorithms developed, we use Snooker video visualization as a case study, because it has a concrete and agreeable set of semantic definitions for events and can make use of existing techniques of event detection and 3D reconstruction in a reliable manner. Nevertheless, most of our concepts and algorithms developed for challenges (b) and (c) can be applied to other application areas. © 2010 IEEE
Event processing in the visual world: Projected motion paths during spoken sentence comprehension.
Kamide, Yuki; Lindsay, Shane; Scheepers, Christoph; Kukona, Anuenue
2016-05-01
Motion events in language describe the movement of an entity to another location along a path. In 2 eye-tracking experiments, we found that comprehension of motion events involves the online construction of a spatial mental model that integrates language with the visual world. In Experiment 1, participants listened to sentences describing the movement of an agent to a goal while viewing visual scenes depicting the agent, goal, and empty space in between. Crucially, verbs suggested either upward (e.g., jump) or downward (e.g., crawl) paths. We found that in the rare event of fixating the empty space between the agent and goal, visual attention was biased upward or downward in line with the verb. In Experiment 2, visual scenes depicted a central obstruction, which imposed further constraints on the paths and increased the likelihood of fixating the empty space between the agent and goal. The results from this experiment corroborated and refined the previous findings. Specifically, eye-movement effects started immediately after hearing the verb and were in line with data from an additional mouse-tracking task that encouraged a more explicit spatial reenactment of the motion event. In revealing how event comprehension operates in the visual world, these findings suggest a mental simulation process whereby spatial details of motion events are mapped onto the world through visual attention. The strength and detectability of such effects in overt eye-movements is constrained by the visual world and the fact that perceivers rarely fixate regions of empty space. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Lyon, A. L.; Kowalkowski, J. B.; Jones, C. D.
2017-10-01
ParaView is a high performance visualization application not widely used in High Energy Physics (HEP). It is a long standing open source project led by Kitware and involves several Department of Energy (DOE) and Department of Defense (DOD) laboratories. Furthermore, it has been adopted by many DOE supercomputing centers and other sites. ParaView is unique in speed and efficiency by using state-of-the-art techniques developed by the academic visualization community that are often not found in applications written by the HEP community. In-situ visualization of events, where event details are visualized during processing/analysis, is a common task for experiment software frameworks. Kitware supplies Catalyst, a library that enables scientific software to serve visualization objects to client ParaView viewers yielding a real-time event display. Connecting ParaView to the Fermilab art framework will be described and the capabilities it brings discussed.
Older drivers and rapid deceleration events: Salisbury Eye Evaluation Driving Study.
Keay, Lisa; Munoz, Beatriz; Duncan, Donald D; Hahn, Daniel; Baldwin, Kevin; Turano, Kathleen A; Munro, Cynthia A; Bandeen-Roche, Karen; West, Sheila K
2013-09-01
Drivers who rapidly change speed while driving may be more at risk for a crash. We sought to determine the relationship of demographic, vision, and cognitive variables with episodes of rapid decelerations during five days of normal driving in a cohort of older drivers. In the Salisbury Eye Evaluation Driving Study, 1425 older drivers aged 67-87 were recruited from the Maryland Motor Vehicle Administration's rolls for licensees in Salisbury, Maryland. Participants had several measures of vision tested: visual acuity, contrast sensitivity, visual fields, and the attentional visual field. Participants were also tested for various domains of cognitive function including executive function, attention, psychomotor speed, and visual search. A custom created driving monitoring system (DMS) was used to capture rapid deceleration events (RDEs), defined as at least 350 milli-g deceleration, during a five day period of monitoring. The rate of RDE per mile driven was modeled using a negative binomial regression model with an offset of the logarithm of the number of miles driven. We found that 30% of older drivers had one or more RDE during a five day period, and of those, about 1/3 had four or more. The rate of RDE per mile driven was highest for those drivers driving<59 miles during the 5-day period of monitoring. However, older drivers with RDE's were more likely to have better scores in cognitive tests of psychomotor speed and visual search, and have faster brake reaction time. Further, greater average speed and maximum speed per driving segment was protective against RDE events. In conclusion, contrary to our hypothesis, older drivers who perform rapid decelerations tend to be more "fit", with better measures of vision and cognition compared to those who do not have events of rapid deceleration. Copyright © 2012 Elsevier Ltd. All rights reserved.
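The offset formulation models the event rate per mile: log E[RDE] = log(miles) + Xβ. As a hedged sketch of the mechanics (the study fit a negative binomial; the plain Poisson Newton-Raphson fit below shows only how the log-exposure offset works, on simulated rather than study data):

```python
import numpy as np

def poisson_offset_fit(X, y, offset, n_iter=25):
    """Poisson regression with a log-exposure offset via Newton-Raphson:
    log E[y_i] = offset_i + X_i @ beta, i.e. a rate per unit exposure.
    A negative binomial model adds an overdispersion parameter on top."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.sum() / np.exp(offset).sum())   # sensible start
    for _ in range(n_iter):
        mu = np.exp(offset + X @ beta)
        score = X.T @ (y - mu)                 # gradient of log-likelihood
        fisher = X.T @ (X * mu[:, None])       # expected information
        beta = beta + np.linalg.solve(fisher, score)
    return beta

rng = np.random.default_rng(0)
n = 500
miles = rng.uniform(10.0, 300.0, n)            # exposure per driver
speed = rng.uniform(20.0, 60.0, n)             # average-speed covariate
X = np.column_stack([np.ones(n), speed])
true_beta = np.array([-3.0, -0.02])            # higher speed protective
y = rng.poisson(np.exp(np.log(miles) + X @ true_beta))
beta_hat = poisson_offset_fit(X, y, np.log(miles))
```

The offset term carries a fixed coefficient of 1, so the fitted β describes the per-mile event rate rather than raw counts, mirroring the paper's "rate of RDE per mile driven" model.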
Cohn, Neil; Kutas, Marta
2015-01-01
Inference has long been emphasized in the comprehension of verbal and visual narratives. Here, we measured event-related brain potentials to visual sequences designed to elicit inferential processing. In Impoverished sequences, an expressionless “onlooker” watches an undepicted event (e.g., person throws a ball for a dog, then watches the dog chase it) just prior to a surprising finale (e.g., someone else returns the ball), which should lead to an inference (i.e., the different person retrieved the ball). Implied sequences alter this narrative structure by adding visual cues to the critical panel such as a surprised facial expression to the onlooker implying they saw an unexpected, albeit undepicted, event. In contrast, Expected sequences show a predictable, but then confounded, event (i.e., dog retrieves ball, then different person returns it), and Explicit sequences depict the unexpected event (i.e., different person retrieves then returns ball). At the critical penultimate panel, sequences representing depicted events (Explicit, Expected) elicited a larger posterior positivity (P600) than the relatively passive events of an onlooker (Impoverished, Implied), though Implied sequences were slightly more positive than Impoverished sequences. At the subsequent and final panel, a posterior positivity (P600) was greater to images in Impoverished sequences than those in Explicit and Implied sequences, which did not differ. In addition, both sequence types requiring inference (Implied, Impoverished) elicited a larger frontal negativity than those explicitly depicting events (Expected, Explicit). These results show that neural processing differs for visual narratives omitting events versus those depicting events, and that the presence of subtle visual cues can modulate such effects presumably by altering narrative structure. PMID:26320706
Knoeferle, Pia; Crocker, Matthew W; Scheepers, Christoph; Pickering, Martin J
2005-02-01
Studies monitoring eye-movements in scenes containing entities have provided robust evidence for incremental reference resolution processes. This paper addresses the less studied question of whether depicted event scenes can affect processes of incremental thematic role-assignment. In Experiments 1 and 2, participants inspected agent-action-patient events while listening to German verb-second sentences with initial structural and role ambiguity. The experiments investigated the time course with which listeners could resolve this ambiguity by relating the verb to the depicted events. Such verb-mediated visual event information allowed early disambiguation on-line, as evidenced by anticipatory eye-movements to the appropriate agent/patient role filler. We replicated this finding while investigating the effects of intonation. Experiment 3 demonstrated that when the verb was sentence-final and thus did not establish early reference to the depicted events, linguistic cues alone enabled disambiguation before people encountered the verb. Our results reveal the on-line influence of depicted events on incremental thematic role-assignment and disambiguation of local structural and role ambiguity. In consequence, our findings require a notion of reference that includes actions and events in addition to entities (e.g. Semantics and Cognition, 1983), and argue for a theory of on-line sentence comprehension that exploits a rich inventory of semantic categories.
Helping Educators Find Visualizations and Teaching Materials Just-in-Time
NASA Astrophysics Data System (ADS)
McDaris, J.; Manduca, C. A.; MacDonald, R. H.
2005-12-01
Major events and natural disasters like hurricanes and tsunamis provide geoscience educators with powerful teachable moments to engage their students with class content. In order to take advantage of these opportunities, educators need quality topical resources related to current earth science events. The web has become an excellent vehicle for disseminating this type of resource. In response to the 2004 Indian Ocean Earthquake and to Hurricane Katrina's devastating impact on the US Gulf Coast, the On the Cutting Edge professional development program developed collections of visualizations for use in teaching (serc.carleton.edu/NAGTWorkshops/visualization/collections/tsunami.html, serc.carleton.edu/NAGTWorkshops/visualization/collections/hurricanes.html). These sites are collections of links to visualizations and other materials that can support the efforts of faculty, teachers, and those engaged in public outreach. They bring together resources created by researchers, government agencies and respected media sources and organize them for easy use by educators. Links are selected to provide a variety of different types of visualizations (e.g. photographic images, animations, satellite imagery) and to assist educators in teaching about the geologic event reported in the news, associated Earth science concepts, and related topics of high interest. The cited links are selected from quality sources and are reviewed by SERC staff before being included on the page. Geoscience educators are encouraged to recommend links and supporting materials and to comment on the available resources. In this way the collection becomes more complete and its quality is enhanced. These sites have received substantial use (Tsunami - 77,000 visitors in the first 3 months; Hurricanes - 2500 visitors in the first week), indicating that in addition to use by educators, they are being used by the general public seeking information about the events.
Thus they provide an effective mechanism for guiding the public to quality resources created by geoscience researchers and facilities, in addition to supporting incorporation of geoscience research in education.
Management Modalities for Traumatic Macular Hole: A Systematic Review and Single-Arm Meta-Analysis.
Gao, Min; Liu, Kun; Lin, Qiurong; Liu, Haiyun
2017-02-01
The purposes of this study were to (i) determine macular hole (MH) closure rates and visual outcomes by comparing two methods of managing traumatic MH (TMH)-an event resulting in severe loss of visual acuity (VA); (ii) characterize patients who undergo spontaneous TMH closure; (iii) determine which TMH patients should be observed before resorting to surgical repair; and (iv) elucidate factors that influence postoperative visual outcomes. Studies (n=10) of patients who were managed by surgery or observation for TMH were meta-analyzed retrospectively. Management modalities included surgical repair (surgery group) and observation for spontaneous hole closure (observation group). In addition, a 12-case series of articles (1990-2014) on spontaneous hole closure was statistically summarized. SAS and Comprehensive Meta-Analysis (CMA) (version 3.0) were used for analysis. For surgery group patients, the fixed-model pooled event rate for hole closure was 0.919 (range, 0.861-0.954) and for observation group patients, 0.368 (range, 0.236-0.448). The random-model pooled event rate for improvement of visual acuity (VA) for surgery group patients was 0.748 (range, 0.610-0.849) and for observation group patients, 0.505 (range, 0.397-0.613). For patients in both groups, the mean age of spontaneous closure was 18.71±10.64 years; mean size of TMHs, 0.18±0.06 disc diameters (DD); and mean time for hole closure, 3.38±3.08 months. The pooled event rate for visual improvement was 0.748 (0.610-0.849). Hole closure and VA improvement rates of surgery group patients were significantly higher than those for observation group patients. Patients of ≤ 24 years of age with MH sizes of ≤ 0.2DD were more likely to achieve spontaneous hole closure. The interval of time from injury to surgery was statistically significantly associated with the level of visual improvement.
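Fixed-effect pooling of proportions is typically done with inverse-variance weights on the logit scale. The sketch below illustrates the idea only; the 0.5 continuity correction and the back-transform are our choices, and CMA's exact defaults may differ.

```python
import math

def pooled_event_rate(event_counts, totals):
    """Fixed-effect (inverse-variance) pooled proportion on the logit
    scale, back-transformed to a rate, as meta-analysis packages
    commonly compute it."""
    num = den = 0.0
    for e, n in zip(event_counts, totals):
        e, n = e + 0.5, n + 1.0            # continuity correction
        logit = math.log(e / (n - e))
        var = 1.0 / e + 1.0 / (n - e)      # variance of the logit
        weight = 1.0 / var
        num += weight * logit
        den += weight
    pooled_logit = num / den
    return 1.0 / (1.0 + math.exp(-pooled_logit))
```

For a single study the pooled value reduces to the study's own (corrected) proportion, and identical studies pool to that same value, which is a quick sanity check on the weighting.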
It's all connected: Pathways in visual object recognition and early noun learning.
Smith, Linda B
2013-11-01
A developmental pathway may be defined as the route, or chain of events, through which a new structure or function forms. For many human behaviors, including object name learning and visual object recognition, these pathways are often complex and multicausal and include unexpected dependencies. This article presents three principles of development that suggest the value of a developmental psychology that explicitly seeks to trace these pathways and uses empirical evidence on developmental dependencies among motor development, action on objects, visual object recognition, and object name learning in 12- to 24-month-old infants to make the case. The article concludes with a consideration of the theoretical implications of this approach. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
High performance visual display for HENP detectors
NASA Astrophysics Data System (ADS)
McGuigan, Michael; Smith, Gordon; Spiletic, John; Fine, Valeri; Nevski, Pavel
2001-08-01
A high-end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detectors. For BNL this display is of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long, even on a powerful workstation. To visualize HENP detectors with maximal performance, we have developed software with the following characteristics. We develop a visual display of HENP detectors on a BNL multiprocessor visualization server at multiple levels of detail. We work with a general and generic detector framework consistent with ROOT, GAUDI, etc., to avoid conflicting with the many graphics development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of detectors and events by directly running the analysis in the BNL stereoscopic theatre. We construct enhanced interactive controls, including the ability to slice, search, and mark areas of the detector. We incorporate the ability to make a high-quality still image of a view of the detector, to generate animations and fly-throughs of the detector, and to output these to MPEG or VRML models. We develop data compression hardware and software so that remote interactive visualization will be possible among dispersed collaborators. We obtain a real-time visual display for events accumulated during simulations.
Ellenbogen, Ravid; Meiran, Nachshon
2011-02-01
The backward-compatibility effect (BCE) is a major index of parallel processing in dual tasks and is related to the dependency of Task 1 performance on Task 2 response codes (Hommel, 1998). The results of four dual-task experiments showed that a BCE occurs when the stimuli of both tasks are included in the same visual object (Experiments 1 and 2) or belong to the same perceptual event (Experiments 3 and 4). Thus, the BCE may be modulated by factors that influence whether both task stimuli are included in the same perceptual event (objects, as studied in cognitive experiments, being special cases of events). As with objects, drawing attention to a (selected) event results in the processing of its irrelevant features and may interfere with task execution. (c) 2010 APA, all rights reserved.
Ohyama, Junji; Watanabe, Katsumi
2016-01-01
We examined how the temporal and spatial predictability of a task-irrelevant visual event affects the detection and memory of a visual item embedded in a continuously changing sequence. Participants observed 11 sequentially presented letters, during which a task-irrelevant visual event was either present or absent. Predictabilities of spatial location and temporal position of the event were controlled in 2 × 2 conditions. In the spatially predictable conditions, the event occurred at the same location within the stimulus sequence or at another location, while, in the spatially unpredictable conditions, it occurred at random locations. In the temporally predictable conditions, the event timing was fixed relative to the order of the letters, while in the temporally unpredictable conditions, it could not be predicted from the letter order. Participants performed a working memory task and a target detection reaction time (RT) task. Memory accuracy was higher for a letter simultaneously presented at the same location as the event in the temporally unpredictable conditions, irrespective of the spatial predictability of the event. On the other hand, the detection RTs were only faster for a letter simultaneously presented at the same location as the event when the event was both temporally and spatially predictable. Thus, to facilitate ongoing detection processes, an event must be predictable both in space and time, while memory processes are enhanced by temporally unpredictable (i.e., surprising) events. Evidently, temporal predictability has differential effects on detection and memory of a visual item embedded in a sequence of images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, Ian B.; Arendt, Dustin L.; Bell, Eric B.
Language in social media is extremely dynamic: new words emerge, trend and disappear, while the meaning of existing words can fluctuate over time. This work addresses several important tasks of visualizing and predicting short-term text representation shift, i.e. the change in a word's contextual semantics. We study the relationship between short-term concept drift and representation shift on a large social media corpus, VKontakte, collected during the Russia-Ukraine crisis in 2014-2015. We visualize short-term representation shift for example keywords and build predictive models to forecast short-term shifts in meaning from previous meaning as well as from concept drift. We show that short-term representation shift can be accurately predicted up to several weeks in advance and that visualization provides insight into meaning change. Our approach can be used to explore and characterize specific aspects of the streaming corpus during crisis events and potentially improve other downstream classification tasks including real-time event forecasting in social media.
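As a minimal sketch of the idea (not the authors' pipeline), representation shift for a word can be quantified as the cosine distance between its embedding vectors in consecutive time windows; this assumes per-window embedding spaces that have already been aligned to a common basis:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def representation_shift(embeddings_by_window, word):
    """Shift in a word's contextual semantics between consecutive time
    windows, measured as cosine distance (1 - similarity) between its
    embedding vectors. `embeddings_by_window` maps a sortable window
    key to a {word: vector} dict from that window's corpus slice."""
    windows = sorted(embeddings_by_window)
    vecs = [embeddings_by_window[w][word] for w in windows]
    return [1.0 - cosine(vecs[i], vecs[i + 1]) for i in range(len(vecs) - 1)]
```

A word whose usage is stable yields shifts near 0; a word caught up in a crisis event shows a spike in the window where its contexts change.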
Margolin, Edward; Gujar, Sachin K; Trobe, Jonathan D
2007-12-01
A 16-year-old boy who was briefly asystolic and hypotensive after a motor vehicle accident complained of abnormal vision after recovering consciousness. Visual acuity was normal, but visual fields were severely constricted without clear hemianopic features. The ophthalmic examination was otherwise normal. Brain MRI performed 11 days after the accident showed no pertinent abnormalities. At 6 months after the event, brain MRI demonstrated brain volume loss in the primary visual cortex and no other abnormalities. One year later, visual fields remained severely constricted; neurologic examination, including formal neuropsychometric testing, was normal. This case emphasizes the fact that hypoxic-ischemic encephalopathy (HIE) may cause enduring damage limited to primary visual cortex and that the MRI abnormalities may be subtle. These phenomena should be recognized in the management of patients with HIE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyon, A. L.; Kowalkowski, J. B.; Jones, C. D.
ParaView is a high-performance visualization application not widely used in High Energy Physics (HEP). It is a long-standing open-source project led by Kitware that involves several Department of Energy (DOE) and Department of Defense (DOD) laboratories. Furthermore, it has been adopted by many DOE supercomputing centers and other sites. ParaView is unique in speed and efficiency, using state-of-the-art techniques developed by the academic visualization community that are often not found in applications written by the HEP community. In-situ visualization of events, where event details are visualized during processing/analysis, is a common task for experiment software frameworks. Kitware supplies Catalyst, a library that enables scientific software to serve visualization objects to client ParaView viewers, yielding a real-time event display. Connecting ParaView to the Fermilab art framework will be described and the capabilities it brings discussed.
Flom, Ross; Bahrick, Lorraine E
2010-03-01
This research examined the effects of bimodal audiovisual and unimodal visual stimulation on infants' memory for the visual orientation of a moving toy hammer following a 5-min, 2-week, or 1-month retention interval. According to the intersensory redundancy hypothesis (L. E. Bahrick & R. Lickliter, 2000; L. E. Bahrick, R. Lickliter, & R. Flom, 2004) detection of and memory for nonredundantly specified properties, including the visual orientation of an event, are facilitated in unimodal stimulation and attenuated in bimodal stimulation in early development. Later in development, however, nonredundantly specified properties can be perceived and remembered in both multimodal and unimodal stimulation. The current study extended tests of these predictions to the domain of memory in infants of 3, 5, and 9 months of age. Consistent with predictions of the intersensory redundancy hypothesis, in unimodal stimulation, memory for visual orientation emerged by 5 months and remained stable across age, whereas in bimodal stimulation, memory did not emerge until 9 months of age. Memory for orientation was evident even after a 1-month delay and was expressed as a shifting preference, from novelty to null to familiarity, across increasing retention time, consistent with Bahrick and colleagues' four-phase model of attention. Together, these findings indicate that infant memory for nonredundantly specified properties of events is a consequence of selective attention to those event properties and is facilitated in unimodal stimulation. Memory for nonredundantly specified properties thus emerges in unimodal stimulation, is later extended to bimodal stimulation, and lasts across a period of at least 1 month.
Enhancing online timeline visualizations with events and images
NASA Astrophysics Data System (ADS)
Pandya, Abhishek; Mulye, Aniket; Teoh, Soon Tee
2011-01-01
The use of timelines to visualize time-series data is one of the most intuitive and common methods, applied in widely used applications such as stock market data visualization and the tracking of election candidates' poll data over time. While useful, these timeline visualizations lack contextual information about the events that are related to, or cause, changes in the data. We have developed a system that enhances timeline visualization with the display of relevant news events and their corresponding images, so that users can not only see the changes in the data but also understand the reasons behind them. We have also conducted a user study to test the effectiveness of our ideas.
Shifting visual perspective during memory retrieval reduces the accuracy of subsequent memories.
Marcotti, Petra; St Jacques, Peggy L
2018-03-01
Memories for events can be retrieved from visual perspectives that were never experienced, reflecting the dynamic and reconstructive nature of memories. Characteristics of memories can be altered when shifting from an own eyes perspective, the way most events are initially experienced, to an observer perspective, in which one sees oneself in the memory. Moreover, recent evidence has linked these retrieval-related effects of visual perspective to subsequent changes in memories. Here we examine how shifting visual perspective influences the accuracy of subsequent memories for complex events encoded in the lab. Participants performed a series of mini-events that were experienced from their own eyes, and were later asked to retrieve memories for these events while maintaining the own eyes perspective or shifting to an alternative observer perspective. We then examined how shifting perspective during retrieval modified memories by influencing the accuracy of recall on a final memory test. Across two experiments, we found that shifting visual perspective reduced the accuracy of subsequent memories and that reductions in vividness when shifting visual perspective during retrieval predicted these changes in the accuracy of memories. Our findings suggest that shifting from an own eyes to an observer perspective influences the accuracy of long-term memories.
Alcohol marketing in televised international football: frequency analysis.
Adams, Jean; Coleman, James; White, Martin
2014-05-20
Alcohol marketing includes sponsorship of individuals, organisations and sporting events. Football (soccer) is one of the most popular spectator sports worldwide. No previous studies have quantified the frequency of alcohol marketing in a high profile international football tournament. The aims were to determine: the frequency and nature of visual references to alcohol in a representative sample of EURO2012 matches broadcast in the UK; and if frequency or nature varied between matches broadcast on public service and commercial channels, or between matches that did and did not feature England. Eight matches selected by stratified random sampling were recorded. All visual references to alcohol were identified using a tool with high inter-rater reliability. 1846 visual references to alcohol were identified over 1487 minutes of broadcast--an average of 1.24 references per minute. The mean number of references per minute was higher in matches that did vs did not feature England (p = 0.004), but did not differ between matches broadcast on public service vs commercial channels (p = 0.92). The frequency of visual references to alcohol was universally high and higher in matches featuring the only UK home team--England--suggesting that there may be targeting of particularly highly viewed matches. References were embedded in broadcasts, and not particular to commercial channels including paid-for advertising. New UK codes-of-conduct on alcohol marketing at sporting events will not reduce the level of marketing reported here.
2016-09-01
The control is a Windows Presentation Foundation (WPF) control developed using the .NET framework in Microsoft Visual Studio. As a WPF control, it can be used in any WPF application as a graphical visual element. Its purpose is to visually display time-related events as vertical lines on a timeline.
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Huber, David J.; Martin, Kevin
2017-05-01
This paper describes a technique in which we improve upon the prior performance of the Rapid Serial Visual Presentation (RSVP) EEG paradigm for image classification through the insertion of visual attention distracters and overall sequence reordering, based upon the expected ratio of rare to common "events" in the environment and operational context. Inserting distracter images maintains the ratio of common events to rare events at an ideal level, maximizing rare event detection via the P300 EEG response to the RSVP stimuli. The method has two steps: first, we compute the optimal number of distracters needed for an RSVP stimulus sequence based on the desired sequence length and expected number of targets, and insert the distracters into the RSVP sequence; then we reorder the RSVP sequence to maximize P300 detection. We show that by reducing the ratio of target events to nontarget events using this method, we can allow RSVP sequences with more targets without sacrificing area under the ROC curve (Az).
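The distracter-padding step can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' code; in particular, the even-spacing reorder is only a simple stand-in for their P300-maximizing reordering:

```python
import random

def build_rsvp_sequence(targets, distracter_pool, rare_ratio=0.1, seed=0):
    """Pad a non-empty list of target stimuli with distracters drawn
    from `distracter_pool` so that targets form at most `rare_ratio`
    of the sequence, then spread the targets at evenly spaced
    positions to avoid back-to-back rare events."""
    rng = random.Random(seed)
    # sequence length needed so that targets/total <= rare_ratio
    n_total = max(len(targets), int(round(len(targets) / rare_ratio)))
    n_distracters = n_total - len(targets)
    seq = [rng.choice(distracter_pool) for _ in range(n_distracters)]
    # insert each target near the middle of its own even-sized slot
    step = n_total // len(targets)
    for i, t in enumerate(targets):
        seq.insert(i * step + step // 2, t)
    return seq
```

With two targets and `rare_ratio=0.1`, this yields a 20-item sequence in which the targets sit well apart, keeping the rare-to-common ratio near the desired level.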
Semantic-based crossmodal processing during visual suppression.
Cox, Dustin; Hong, Sang Wook
2015-01-01
To reveal the mechanisms underpinning the influence of auditory input on visual awareness, we examined (1) whether purely semantic-based multisensory integration facilitates access to visual awareness for familiar visual events, and (2) whether crossmodal semantic priming is the mechanism responsible for the semantic auditory influence on visual awareness. Using continuous flash suppression, we rendered dynamic and familiar visual events (e.g., a video clip of an approaching train) inaccessible to visual awareness. We manipulated the semantic auditory context of the videos by concurrently pairing them with a semantically matching soundtrack (congruent audiovisual condition), a semantically non-matching soundtrack (incongruent audiovisual condition), or no soundtrack (neutral video-only condition). We found that participants identified the suppressed visual events significantly faster (an earlier breakup of suppression) in the congruent audiovisual condition than in the incongruent audiovisual and video-only conditions. However, this facilitatory influence of semantic auditory input was only observed when audiovisual stimulation co-occurred. Our results suggest that the enhanced visual processing with a semantically congruent auditory input occurs due to audiovisual crossmodal processing rather than semantic priming, which may occur even when visual information is not available to visual awareness.
Virtual Worlds, Virtual Literacy: An Educational Exploration
ERIC Educational Resources Information Center
Stoerger, Sharon
2008-01-01
Virtual worlds enable students to learn through seeing, knowing, and doing within visually rich and mentally engaging spaces. Rather than reading about events, students become part of the events through the adoption of a pre-set persona. Along with visual feedback that guides the players' activities and the development of visual skills, visual…
Discrete Events as Units of Perceived Time
ERIC Educational Resources Information Center
Liverence, Brandon M.; Scholl, Brian J.
2012-01-01
In visual images, we perceive both space (as a continuous visual medium) and objects (that inhabit space). Similarly, in dynamic visual experience, we perceive both continuous time and discrete events. What is the relationship between these units of experience? The most intuitive answer may be similar to the spatial case: time is perceived as an…
Glaucoma Severity and Participation in Diverse Social Roles: Does Visual Field Loss Matter?
Yang, Yelin; Trope, Graham E; Buys, Yvonne M; Badley, Elizabeth M; Gignac, Monique A M; Shen, Carl; Jin, Ya-Ping
2016-07-01
To assess the association between glaucoma severity and participation in diverse social roles. Cross-sectional survey. Individuals with glaucoma, 50+, with visual acuity in the better eye >20/50 were enrolled. They were classified into 3 groups based on visual field loss in the better eye: mild [mean deviation (MD)>-6 dB], moderate (MD, -6 to -12 dB), and severe (MD<-12 dB). The validated Social Role Participation Questionnaire assessed respondents' perceptions of the importance, difficulty, and satisfaction with participation in 11 social role domains (eg, community events, travel). Differences between groups were examined using multivariate linear regression analyses. A total of 118 participants (52% female) were included: 60 mild, 29 moderate, and 29 severe. All social role domains were rated as important by all participants except for education and employment. Women (P<0.01), those with a partner (P<0.01), and those who were less depressed (P=0.03) reported higher scores of perceived importance of participating in social activities. Compared with those with mild glaucoma, individuals with severe glaucoma reported significantly more difficulty participating in community/religious/cultural events (P<0.01), travelling (P<0.01), and relationships with family members (P=0.01). They also reported less satisfaction with travelling (P=0.01) and social events (P=0.04). Participation in diverse social roles is valued by individuals with glaucoma. Severe visual field loss impedes involvement in and satisfaction with activities in community/religious/cultural events, travelling, and relationships with family members. Appropriate community and targeted interventions are needed to allow people with severe glaucoma to maintain active social participation-a key component to successful aging.
Preliminary study of visual perspective in mental time travel in schizophrenia.
Wang, Ya; Wang, Yi; Zhao, Qing; Cui, Ji-Fang; Hong, Xiao-Hong; Chan, Raymond Ck
2017-10-01
This study explored the specificity and visual perspective of mental time travel in schizophrenia. Fifteen patients with schizophrenia and 18 controls were recruited. Participants were asked to recall or imagine specific events according to cue words. Results showed that patients with schizophrenia generated fewer specific events than controls, and that recalled events were more specific than imagined events. Patients with schizophrenia adopted the field (first-person) perspective less and the observer perspective more than controls. These results suggest that patients with schizophrenia are impaired in mental time travel in both specificity and visual perspective. Further studies are needed to identify the underlying mechanisms. Copyright © 2017 Elsevier B.V. All rights reserved.
Timing the impact of literacy on visual processing
Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W.; Cohen, Laurent; Dehaene, Stanislas
2014-01-01
Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼100–150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing.
Chronodes: Interactive Multifocus Exploration of Event Sequences
POLACK, PETER J.; CHEN, SHANG-TSE; KAHNG, MINSUK; DE BARBARO, KAYA; BASOLE, RAHUL; SHARMIN, MOUSHUMI; CHAU, DUEN HORNG
2018-01-01
The advent of mobile health (mHealth) technologies challenges the capabilities of current visualizations, interactive tools, and algorithms. We present Chronodes, an interactive system that unifies data mining and human-centric visualization techniques to support explorative analysis of longitudinal mHealth data. Chronodes extracts and visualizes frequent event sequences that reveal chronological patterns across multiple participant timelines of mHealth data. It then combines novel interaction and visualization techniques to enable multifocus event sequence analysis, which allows health researchers to interactively define, explore, and compare groups of participant behaviors using event sequence combinations. Through summarizing insights gained from a pilot study with 20 behavioral and biomedical health experts, we discuss Chronodes’s efficacy and potential impact in the mHealth domain. Ultimately, we outline important open challenges in mHealth, and offer recommendations and design guidelines for future research.
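As a toy illustration of the frequent-event-sequence extraction described above (the Chronodes system itself uses more sophisticated mining), contiguous subsequences can be counted across participant timelines and filtered by a support threshold:

```python
from collections import Counter

def frequent_subsequences(timelines, length=2, min_support=2):
    """Find contiguous event subsequences of a given length that occur
    in at least `min_support` participant timelines. Each timeline is
    a list of event labels; a pattern counts once per timeline."""
    counts = Counter()
    for events in timelines:
        seen = set()  # dedupe within a timeline so support = #timelines
        for i in range(len(events) - length + 1):
            seen.add(tuple(events[i:i + length]))
        counts.update(seen)
    return {pat: c for pat, c in counts.items() if c >= min_support}
```

For example, if two of three participants share the pattern ("wake", "smoke"), it survives the support filter while one-off patterns are dropped.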
A probabilistic model of overt visual attention for cognitive robots.
Begum, Momotaz; Karray, Fakhri; Mann, George K I; Gosine, Raymond G
2010-10-01
Visual attention is one of the major requirements for a robot to serve as a cognitive companion for humans. Robotic visual attention is mostly concerned with overt attention, which is accompanied by head and eye movements of the robot. In this case, each movement of the camera head triggers a number of events, namely transformation of the camera and image coordinate systems, change of the content of the visual field, and partial appearance of stimuli. All of these events reduce the probability of meaningful identification of the next focus of attention. These events are specific to overt attention with head movement, and therefore their effects are not addressed in classical models of covert visual attention. This paper proposes a Bayesian model as a robot-centric solution to the overt visual attention problem. The proposed model, while taking inspiration from the primate visual attention mechanism, guides a robot to direct its camera toward behaviorally relevant and/or visually demanding stimuli. A particle filter implementation of this model addresses the challenges involved in overt attention with head movement. Experimental results demonstrate the performance of the proposed model.
Visual search of cyclic spatio-temporal events
NASA Astrophysics Data System (ADS)
Gautier, Jacques; Davoine, Paule-Annick; Cunty, Claire
2018-05-01
The analysis of spatio-temporal events, and especially of the relationships between their different dimensions (space, time, thematic attributes), can be carried out with geovisualization interfaces. But few geovisualization tools integrate the cyclic dimension of spatio-temporal event series (natural or social events). Time Coil and Time Wave diagrams represent both linear time and cyclic time. By introducing a cyclic temporal scale, these diagrams can highlight the cyclic characteristics of spatio-temporal events. However, the settable cyclic temporal scales are limited to usual durations such as days or months. Because of that, these diagrams cannot be used to visualize cyclic events that reappear with an unusual period, and they do not allow a visual search for cyclic events. Nor do they make it possible to identify relationships between the cyclic behavior of events and their spatial features, and more specifically to identify localized cyclic events. The lack of means to represent cyclic time outside the temporal diagram of multi-view geovisualization interfaces limits the analysis of relationships between the cyclic reappearance of events and their other dimensions. In this paper, we propose a method and a geovisualization tool, based on an extension of Time Coil and Time Wave, to support the visual search for cyclic events by allowing any duration to be set as the diagram's cyclic temporal scale. We also propose a symbology approach that pushes the representation of cyclic time into the map itself, in order to improve the analysis of relationships between space and the cyclic behavior of events.
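The core of an arbitrary-period cyclic temporal scale can be sketched in a few lines. This is an illustrative reconstruction, not the authors' tool: each event timestamp is mapped to a phase within a user-chosen cycle duration, so events that reappear with that period cluster at the same phase:

```python
from datetime import datetime, timedelta

def cyclic_position(event_time, origin, period):
    """Map an event timestamp onto a cyclic temporal scale of arbitrary
    period: return the event's phase in [0, 1) within its cycle,
    measured from `origin`. `period` is any positive timedelta."""
    elapsed = (event_time - origin).total_seconds()
    period_s = period.total_seconds()
    return (elapsed % period_s) / period_s
```

For an 11-day cycle starting 2018-01-01, an event on 2018-01-12 maps to phase 0.0 (start of the next cycle) and one at noon on 2018-01-06 to phase 0.5, making an unusual 11-day recurrence visible as a tight cluster on the cyclic scale.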
Event Display for the Visualization of CMS Events
NASA Astrophysics Data System (ADS)
Bauerdick, L. A. T.; Eulisse, G.; Jones, C. D.; Kovalskyi, D.; McCauley, T.; Mrak Tadel, A.; Muelmenstaedt, J.; Osborne, I.; Tadel, M.; Tu, Y.; Yagil, A.
2011-12-01
During the last year the CMS experiment engaged in the consolidation of its existing event display programs. The core of the new system is based on the Fireworks event display program, which was designed from the outset to integrate directly with the CMS Event Data Model (EDM) and the light version of the software framework (FWLite). The Event Visualization Environment (EVE) of the ROOT framework is used to manage a consistent set of 3D and 2D views, selection, user feedback, and user interaction with the graphics windows; several EVE components were developed by CMS in collaboration with the ROOT project. In operation, simple plugins are registered into the system to convert EDM collections into visual representations, which are then managed by the application. Full event navigation and filtering, as well as collection-level filtering, are supported. The same data-extraction principle can also be applied when Fireworks eventually operates as a service within the full software framework.
Predictive and postdictive mechanisms jointly contribute to visual awareness.
Soga, Ryosuke; Akaishi, Rei; Sakai, Katsuyuki
2009-09-01
One of the fundamental issues in visual awareness is how we are able to perceive the scene in front of our eyes on time despite the delay in processing visual information. The prediction theory postulates that our visual system predicts the future to compensate for such delays. On the other hand, the postdiction theory postulates that our visual awareness is inevitably a delayed product. In the present study we used flash-lag paradigms in motion and color domains and examined how the perception of visual information at the time of flash is influenced by prior and subsequent visual events. We found that both types of event additively influence the perception of the present visual image, suggesting that our visual awareness results from joint contribution of predictive and postdictive mechanisms.
Alcohol marketing in televised international football: frequency analysis
2014-01-01
Background Alcohol marketing includes sponsorship of individuals, organisations and sporting events. Football (soccer) is one of the most popular spectator sports worldwide. No previous studies have quantified the frequency of alcohol marketing in a high-profile international football tournament. The aims were to determine: the frequency and nature of visual references to alcohol in a representative sample of EURO2012 matches broadcast in the UK; and whether frequency or nature varied between matches broadcast on public service and commercial channels, or between matches that did and did not feature England. Methods Eight matches selected by stratified random sampling were recorded. All visual references to alcohol were identified using a tool with high inter-rater reliability. Results 1846 visual references to alcohol were identified over 1487 minutes of broadcast - an average of 1.24 references per minute. The mean number of references per minute was higher in matches that did vs did not feature England (p = 0.004), but did not differ between matches broadcast on public service vs commercial channels (p = 0.92). Conclusions The frequency of visual references to alcohol was universally high, and higher in matches featuring the only UK home team - England - suggesting that particularly highly viewed matches may be targeted. References were embedded in the broadcasts themselves, rather than confined to commercial channels carrying paid-for advertising. New UK codes of conduct on alcohol marketing at sporting events will not reduce the level of marketing reported here. PMID:24885718
Multilevel analysis of sports video sequences
NASA Astrophysics Data System (ADS)
Han, Jungong; Farin, Dirk; de With, Peter H. N.
2006-01-01
We propose a fully automatic and flexible framework for the analysis and summarization of tennis broadcast video sequences, using visual features and specific game-context knowledge. Our framework can analyze a tennis video sequence at three levels, providing a broad range of analysis results. The proposed framework includes novel pixel-level and object-level tennis video processing algorithms, such as a moving-player detector that takes both color and court (playing-field) information into account, and a player-position tracking algorithm based on a 3-D camera model. Additionally, we employ scene-level models for detecting events such as service, baseline rally, and net approach, based on a number of real-world visual features. The system can summarize three forms of information: (1) all court-view playing frames in a game; (2) the trajectory and real-world speed of each player, as well as the player's position relative to the court; (3) the semantic event segments in a game. The proposed framework is flexible in choosing the level of analysis that is desired. It is effective because it makes use of several visual cues obtained from the real-world domain to model important events like the service, thereby increasing the accuracy of the scene-level analysis. The paper presents attractive experimental results highlighting the system's efficiency and analysis capabilities.
Science Squared: Teaching Science Visually.
ERIC Educational Resources Information Center
Paradis, Olga; Savage, Karen; Judice, Michelle
This paper describes a collection of novel ideas for bulletin board displays that would be useful in supplementing science classroom instruction. Information on women and minorities in science; science concepts in everyday activities such as nutrition, baseball, and ice cream-making; and various holidays and celebratory events is included. Each…
Visual form predictions facilitate auditory processing at the N1.
Paris, Tim; Kim, Jeesun; Davis, Chris
2017-02-20
Auditory-visual (AV) events often involve a leading visual cue (e.g. auditory-visual speech) that allows the perceiver to generate predictions about the upcoming auditory event. Electrophysiological evidence suggests that when an auditory event is predicted, processing is sped up, i.e., the N1 component of the ERP occurs earlier (N1 facilitation). However, it is not clear (1) whether N1 facilitation is based specifically on prediction rather than on multisensory integration and (2) which particular properties of the visual cue it is based on. The current experiment used artificial AV stimuli in which visual cues predicted, but did not co-occur with, auditory cues. Visual form cues (high and low salience) and the auditory-visual pairing were manipulated so that auditory predictions could be based on form and timing, or on timing only. The results showed that N1 facilitation occurred only for combined form and temporal predictions. These results suggest that faster auditory processing (as indicated by N1 facilitation) is based on predictive processing generated by a visual cue that clearly predicts both what the auditory stimulus will be and when it will occur. Copyright © 2016. Published by Elsevier Ltd.
Characteristics of bursts observed by the SMM Gamma-Ray Spectrometer
NASA Technical Reports Server (NTRS)
Share, G. H.; Messina, D. C.; Iadicicco, A.; Matz, S. M.; Rieger, E.; Forrest, D. J.
1992-01-01
The Gamma Ray Spectrometer (GRS) on the SMM completed close to 10 years of highly successful operation when the spacecraft reentered the atmosphere on December 2, 1989. During this period the GRS detected 177 events above 300 keV which have been classified as cosmic gamma-ray bursts. A catalog of these events is in preparation which will include time profiles and spectra for all events. Visual inspection of the spectra indicates that emission typically extends into the MeV range, without any evidence for a high-energy cutoff; 17 of these events are also observed above 10 MeV. We find no convincing evidence for line-like emission features in any of the time-integrated spectra.
Event Processing in the Visual World: Projected Motion Paths during Spoken Sentence Comprehension
ERIC Educational Resources Information Center
Kamide, Yuki; Lindsay, Shane; Scheepers, Christoph; Kukona, Anuenue
2016-01-01
Motion events in language describe the movement of an entity to another location along a path. In 2 eye-tracking experiments, we found that comprehension of motion events involves the online construction of a spatial mental model that integrates language with the visual world. In Experiment 1, participants listened to sentences describing the…
Agyei, Seth B.; van der Weel, F. R. (Ruud); van der Meer, Audrey L. H.
2016-01-01
During infancy, smart perceptual mechanisms develop that allow infants to judge time-space motion dynamics more efficiently with age and locomotor experience. This emerging capacity may be vital for preparedness for upcoming events and for navigating a changing environment. Little is known about the brain changes that support the development of prospective control, or about processes, such as preterm birth, that may compromise it. Focusing on the perception of visual motion, this paper describes behavioral and brain studies with young infants investigating the development of visual perception for prospective control. By means of the three visual motion paradigms of occlusion, looming, and optic flow, our research shows the importance of including behavioral data when studying the neural correlates of prospective control. PMID:26903908
Neuromorphic audio-visual sensor fusion on a sound-localizing robot.
Chan, Vincent Yue-Sek; Jin, Craig T; van Schaik, André
2012-01-01
This paper presents the first robotic system featuring audio-visual (AV) sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localization through self-motion and visual feedback, using an adaptive ITD-based sound localization algorithm. After training, the robot can localize sound sources (white or pink noise) in a reverberant environment with an RMS error of 4-5° in azimuth. We also investigate the AV source binding problem, and an experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset times. Despite the simplicity of this method and a large number of false visual events in the background, a correct match could be made 75% of the time during the experiment.
The differential contributions of visual imagery constructs on autobiographical thinking.
Aydin, Cagla
2018-02-01
There is a growing theoretical and empirical consensus on the central role of visual imagery in autobiographical memory. However, findings from studies that explore how individual differences in visual imagery are reflected in autobiographical thinking do not present a coherent story. One suggested reason for the mixed findings is the treatment of visual imagery as an undifferentiated construct, while evidence shows that there is more than one type of visual imagery. The present study investigates the relative contributions of different imagery constructs, namely object and spatial imagery, to autobiographical memory processes. Additionally, it explores whether a similar relation extends to imagining the future. The results indicate that while object imagery was significantly correlated with several phenomenological characteristics, such as the level of sensory and perceptual detail, for past events but not for future events, spatial imagery predicted the level of episodic specificity for both past and future events. We interpret these findings as showing that object imagery is recruited in autobiographical memory tasks that employ reflective processes, while spatial imagery is engaged during direct retrieval of event details. Implications for the role of visual imagery in autobiographical thinking processes are discussed.
Multiple Transient Signals in Human Visual Cortex Associated with an Elementary Decision
Nolte, Guido
2017-01-01
The cerebral cortex continuously undergoes changes in its state, which are manifested in transient modulations of the cortical power spectrum. Cortical state changes also occur at full wakefulness and during rapid cognitive acts, such as perceptual decisions. Previous studies found a global modulation of beta-band (12–30 Hz) activity in human and monkey visual cortex during an elementary visual decision: reporting the appearance or disappearance of salient visual targets surrounded by a distractor. The previous studies disentangled neither the motor action associated with behavioral report nor other secondary processes, such as arousal, from perceptual decision processing per se. Here, we used magnetoencephalography in humans to pinpoint the factors underlying the beta-band modulation. We found that disappearances of a salient target were associated with beta-band suppression, and target reappearances with beta-band enhancement. This was true for both overt behavioral reports (immediate button presses) and silent counting of the perceptual events. This finding indicates that the beta-band modulation was unrelated to the execution of the motor act associated with a behavioral report of the perceptual decision. Further, changes in pupil-linked arousal, fixational eye movements, or gamma-band responses were not necessary for the beta-band modulation. Together, our results suggest that the beta-band modulation was a top-down signal associated with the process of converting graded perceptual signals into a categorical format underlying flexible behavior. This signal may have been fed back from brain regions involved in decision processing to visual cortex, thus enforcing a “decision-consistent” cortical state. SIGNIFICANCE STATEMENT Elementary visual decisions are associated with a rapid state change in visual cortex, indexed by a modulation of neural activity in the beta-frequency range. 
Such decisions are also followed by other events that might affect the state of visual cortex, including the motor command associated with the report of the decision, an increase in pupil-linked arousal, fixational eye movements, and fluctuations in bottom-up sensory processing. Here, we ruled out the necessity of these events for the beta-band modulation of visual cortex. We propose that the modulation reflects a decision-related state change, which is induced by the conversion of graded perceptual signals into a categorical format underlying behavior. The resulting decision signal may be fed back to visual cortex. PMID:28495972
A Catalog of Coronal "EIT Wave" Transients
NASA Technical Reports Server (NTRS)
Thompson, B. J.; Myers, D. C.
2005-01-01
SOHO Extreme Ultraviolet Imaging Telescope (EIT) data have been visually searched for coronal "EIT wave" transients over the period beginning 24 March 1997 and extending through 24 June 1998. The dates covered start at the beginning of regular high-cadence (more than 1 image every 20 minutes) observations and end at the 4-month interruption of SOHO observations in mid-1998. 176 events are included in this catalog. The observations range from "candidate" events, which were either weak or had insufficient data coverage, to events which were well-defined and clearly distinguishable in the data. Included in the catalog are the times of the EIT images in which the events are observed, diagrams indicating the observed locations of the wavefronts and associated active regions, and the speeds of the wavefronts. The measured speeds of the wavefronts varied from less than 50 to over 700 km/sec, with "typical" speeds of 200-400 km/sec.
NASA Astrophysics Data System (ADS)
Tokuoka, Nobuyuki; Miyoshi, Hitoshi; Kusano, Hideaki; Hata, Hidehiro; Hiroe, Tetsuyuki; Fujiwara, Kazuhito; Yasushi, Kondo
2008-11-01
Visualization of explosion phenomena is very important and essential to evaluating the performance of explosive effects. The phenomena, however, generate blast waves and fragments from the cases, so the visualizing equipment must be protected from any form of impact. In the tests described here, the front lens was separated from the camera head by a fiber-optic cable so that the camera, a Shimadzu Hypervision HPV-1, could be used in severe blast environments, including the filming of explosions. It was possible to obtain clear images of the explosion that were not inferior to images taken with the lens directly coupled to the camera head. This confirmed that the system is very useful for the visualization of dangerous events, e.g., at an explosion site, and for visualization from angles that would be unachievable under normal circumstances.
NASA Technical Reports Server (NTRS)
Pomerantz, M. I.; Lim, C.; Myint, S.; Woodward, G.; Balaram, J.; Kuo, C.
2012-01-01
The Jet Propulsion Laboratory's Entry, Descent and Landing (EDL) Reconstruction Task has developed a software system that provides mission operations personnel and analysts with a real-time, telemetry-based live display, playback, and post-EDL reconstruction capability that leverages the existing high-fidelity, physics-based simulation framework and modern game-engine-derived 3D visualization system developed in the JPL Dynamics and Real Time Simulation (DARTS) Lab. Developed as a multi-mission solution, the EDL Telemetry Visualization (ETV) system has been used for a variety of projects, including NASA's Mars Science Laboratory (MSL), NASA's Low Density Supersonic Decelerator (LDSD), and JPL's MoonRise lunar sample return proposal.
Manananggal - a novel viewer for alternative splicing events.
Barann, Matthias; Zimmer, Ralf; Birzele, Fabian
2017-02-21
Alternative splicing is an important cellular mechanism that can be analyzed by RNA sequencing. However, identification of splicing events in an automated fashion is error-prone. Thus, further validation is required to select reliable instances of alternative splicing events (ASEs). There are only a few tools specifically designed for interactive inspection of ASEs, and the available visualization approaches can be significantly improved. Here, we present Manananggal, an application specifically designed for the identification of splicing events in next-generation sequencing data. Manananggal includes a web application for visual inspection and a command line tool that allows for ASE detection. We compare the sashimi plots available in the IGV Viewer, the DEXSeq splicing plots, and SpliceSeq to the Manananggal interface and discuss the advantages and drawbacks of these tools. We show that sashimi plots (such as those used by the IGV Viewer and SpliceSeq) offer a practical solution for simple ASEs, but also exhibit shortcomings for highly complex genes. Manananggal is an interactive web application that offers functions specifically tailored to the identification of alternative splicing events that other tools lack. The ability to select a subset of isoforms allows an easier interpretation of complex alternative splicing events. In contrast to SpliceSeq and the DEXSeq splicing plot, Manananggal does not obscure the gene structure by showing full transcript models, which makes it easier to determine which isoforms are expressed and which are not.
Teramoto, Wataru; Watanabe, Hiroshi; Umemura, Hiroyuki
2008-01-01
The perceived temporal order of external successive events does not always follow their physical temporal order. We examined the contribution of self-motion mechanisms in the perception of temporal order in the auditory modality. We measured perceptual biases in the judgment of the temporal order of two short sounds presented successively, while participants experienced visually induced self-motion (yaw-axis circular vection) elicited by viewing long-lasting large-field visual motion. In experiment 1, a pair of white-noise patterns was presented to participants at various stimulus-onset asynchronies through headphones, while they experienced visually induced self-motion. Perceived temporal order of auditory events was modulated by the direction of the visual motion (or self-motion). Specifically, the sound presented to the ear in the direction opposite to the visual motion (ie heading direction) was perceived prior to the sound presented to the ear in the same direction. Experiments 2A and 2B were designed to reduce the contributions of decisional and/or response processes. In experiment 2A, the directional cueing of the background (left or right) and the response dimension (high pitch or low pitch) were not spatially associated. In experiment 2B, participants were additionally asked to report which of the two sounds was perceived 'second'. Almost the same results as in experiment 1 were observed, suggesting that the change in temporal order of auditory events during large-field visual motion reflects a change in perceptual processing. Experiment 3 showed that the biases in the temporal-order judgments of auditory events were caused by concurrent actual self-motion with a rotatory chair. In experiment 4, using a small display, we showed that 'pure' long exposure to visual motion without the sensation of self-motion was not responsible for this phenomenon. 
These results are consistent with previous studies reporting a change in the perceived temporal order of visual or tactile events depending on the direction of self-motion. Hence, large-field induced (ie optic flow) self-motion can affect the temporal order of successive external events across various modalities.
Progressive Visual Analytics: User-Driven Visual Exploration of In-Progress Analytics.
Stolper, Charles D; Perer, Adam; Gotz, David
2014-12-01
As datasets grow and analytic algorithms become more complex, the typical workflow of analysts launching an analytic, waiting for it to complete, inspecting the results, and then re-launching the computation with adjusted parameters is not realistic for many real-world tasks. This paper presents an alternative workflow, progressive visual analytics, which enables an analyst to inspect partial results of an algorithm as they become available and interact with the algorithm to prioritize subspaces of interest. Progressive visual analytics depends on adapting analytical algorithms to produce meaningful partial results and enable analyst intervention without sacrificing computational speed. The paradigm also depends on adapting information visualization techniques to incorporate the constantly refining results without overwhelming analysts, and to provide interactions that support an analyst directing the analytic. The contributions of this paper include: a description of the progressive visual analytics paradigm; design goals for both the algorithms and visualizations in progressive visual analytics systems; an example progressive visual analytics system (Progressive Insights) for analyzing common patterns in a collection of event sequences; and an evaluation of Progressive Insights and the progressive visual analytics paradigm by clinical researchers analyzing electronic medical records.
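The progressive pattern described here, yielding a meaningful partial result after each chunk instead of blocking until the computation completes, can be sketched with a generator. This is a minimal, hypothetical illustration; Progressive Insights itself analyzes event sequences rather than running means:

```python
def progressive_mean(stream, chunk=1000):
    """Emit a partial result (sample count, running mean) after each
    chunk, so an analyst can inspect the estimate, or abandon the run,
    long before the full stream has been processed."""
    total, n, buf = 0.0, 0, []
    for x in stream:
        buf.append(x)
        if len(buf) == chunk:
            total += sum(buf)
            n += len(buf)
            buf.clear()
            yield n, total / n   # partial result the UI can render now
    if buf:  # flush the final, incomplete chunk
        total += sum(buf)
        n += len(buf)
        yield n, total / n

# Each yield refines the previous estimate.
print(list(progressive_mean(range(10), chunk=4)))
# → [(4, 1.5), (8, 3.5), (10, 4.5)]
```

Analyst steering fits naturally into this shape: the consumer of the generator can adjust parameters or reprioritize between yields, which is the intervention point the paradigm calls for.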
NASA Astrophysics Data System (ADS)
Ryazanova, A. A.; Okladnikov, I. G.; Gordov, E. P.
2017-11-01
The frequency of occurrence and magnitude of extreme precipitation and temperature events show positive trends in several geographical regions. These events must be analyzed and studied in order to better understand their impact on the environment, predict their occurrence, and mitigate their effects. For this purpose, we augmented the web-GIS “CLIMATE” with a dedicated statistical package developed in the R language. The web-GIS “CLIMATE” is a software platform for cloud storage, processing, and visualization of distributed archives of spatial datasets. It is based on a combined use of web and GIS technologies with reliable procedures for searching, extracting, processing, and visualizing spatial data archives. The system provides a set of thematic online tools for the complex analysis of current and future climate changes and their effects on the environment. The package includes powerful new methods for time-dependent statistics of extremes, quantile regression, and the copula approach for the detailed analysis of various climate extreme events. In particular, the very promising copula approach makes it possible to obtain the structural connections between the extremes and various environmental characteristics. The new statistical methods integrated into the web-GIS “CLIMATE” can significantly facilitate and accelerate the complex analysis of climate extremes using only a desktop PC connected to the Internet.
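One of the simplest building blocks of extreme-event statistics, the empirical return period, fits in a few lines of pure Python. This is a generic textbook construction (the Weibull plotting position), not part of the R package described above, and the sample data are invented:

```python
def return_periods(annual_maxima):
    """Empirical return periods via the Weibull plotting position:
    the m-th largest of n annual maxima has exceedance probability
    m / (n + 1), hence a return period of (n + 1) / m years."""
    n = len(annual_maxima)
    ranked = sorted(annual_maxima, reverse=True)
    return [(x, (n + 1) / (m + 1)) for m, x in enumerate(ranked)]

# Five years of annual maximum daily precipitation (mm), invented:
print(return_periods([30, 45, 28, 50, 33]))
# → [(50, 6.0), (45, 3.0), (33, 2.0), (30, 1.5), (28, 1.2)]
```

Time-dependent methods like those in the package go further by letting these exceedance probabilities vary with covariates such as the year, but the ranking idea above is the common starting point.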
Escape from harm: linking affective vision and motor responses during active avoidance
Keil, Andreas
2014-01-01
When organisms confront unpleasant objects in their natural environments, they engage in behaviors that allow them to avoid aversive outcomes. Here, we linked visual processing of threat to its behavioral consequences by including a motor response that terminated exposure to an aversive event. Dense-array steady-state visual evoked potentials were recorded in response to conditioned threat and safety signals viewed in active or passive behavioral contexts. The amplitude of neuronal responses in visual cortex increased additively, as a function of emotional value and action relevance. The gain in local cortical population activity for threat relative to safety cues persisted when aversive reinforcement was behaviorally terminated, suggesting a lingering emotionally based response amplification within the visual system. Distinct patterns of long-range neural synchrony emerged between the visual cortex and extravisual regions. Increased coupling between visual and higher-order structures was observed specifically during active perception of threat, consistent with a reorganization of neuronal populations involved in linking sensory processing to action preparation. PMID:24493849
Liu, Baolin; Wu, Guangning; Wang, Zhongning; Ji, Xiang
2011-07-01
In the real world, some of the auditory and visual information received by the human brain is temporally asynchronous. How is such information integrated in cognitive processing in the brain? In this paper, we aimed to study the semantic integration of asynchronous audio-visual information in cognitive processing using the event-related potential (ERP) method. Subjects were presented with videos of real-world events in which the auditory and visual information were temporally asynchronous. When the critical action preceded the sound, sounds incongruous with the preceding critical actions elicited an N400 effect compared to the congruous condition. This result demonstrates that the semantic contextual integration indexed by the N400 also applies to the cognitive processing of multisensory information. In addition, the N400 effect was earlier in latency than in other visually induced N400 studies, showing that cross-modal information is facilitated in time compared with visual information in isolation. When the sound preceded the critical action, a larger late positive wave was observed in the incongruous condition than in the congruous condition. This P600 might represent a reanalysis process in which the mismatch between the critical action and the preceding sound was evaluated, showing that environmental sound may affect the cognitive processing of a visual event. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Visual tracking using neuromorphic asynchronous event-based cameras.
Ni, Zhenjiang; Ieng, Sio-Hoi; Posch, Christoph; Régnier, Stéphane; Benosman, Ryad
2015-04-01
This letter presents a novel, computationally efficient and robust pattern tracking method based on time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous, temporal events. These events encode temporal contrast and intensity locally in space and time. We show that this sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometries, similarities, and affine distortions and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time that is currently underexploited by most artificial vision systems, the method we present is able to solve ambiguous cases of object occlusion that classical frame-based techniques handle poorly.
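The per-event update principle can be caricatured in a few lines: each incoming event nudges the current transformation estimate, so the estimate is refreshed at event rate rather than frame rate. A deliberately simplified, hypothetical sketch restricted to pure translation (the paper's estimator also handles isometries, similarities, and affine distortions):

```python
def track_events(model, events, gain=0.1):
    """Event-driven tracking sketch: refresh a 2-D translation estimate
    once per incoming event instead of once per frame.

    `model` is a list of shape points; each event is an (x, y, t) tuple.
    Every event is matched to the nearest transformed model point, and
    the translation is nudged toward that event's residual. Invented
    for illustration; not the paper's full estimator.
    """
    tx, ty = 0.0, 0.0
    for ex, ey, _t in events:
        # nearest model point under the current translation estimate
        mx, my = min(model,
                     key=lambda p: (p[0] + tx - ex) ** 2
                                 + (p[1] + ty - ey) ** 2)
        # nudge the estimate toward this single event's residual
        tx += gain * (ex - (mx + tx))
        ty += gain * (ey - (my + ty))
    return tx, ty

# A triangular model observed translated by (5, 5); 150 synthetic events.
model = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
events = [(x + 5, y + 5, i) for i, (x, y) in enumerate(model * 50)]
tx, ty = track_events(model, events)  # converges toward (5, 5)
```

Because every event triggers a tiny update, the estimate is always current; there is no notion of a stale frame, which is the property that enables kilohertz-equivalent tracking rates.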
SPA Meteor Section Results: 2006
NASA Astrophysics Data System (ADS)
McBeath, Alastair
2010-12-01
A summary of the main analyzed results and other information provided to the SPA Meteor Section from 2006 is presented and discussed. Events covered include: the radio Quadrantid maximum on January 3/4; an impressive fireball seen from parts of England, Belgium and the Netherlands at 22h53m51s UT on July 18, which was imaged from three EFN stations as well; the Southern delta-Aquarid and alpha-Capricornid activity from late July and early August; the radio Perseid maxima on August 12/13; confirmation that the October 5/6 video-meteor outburst was not observed by radio; visual and radio findings from the strong, bright-meteor, Orionid return in October; another impressive UK-observed fireball on November 1/2, with an oil painting of the event as seen from London; the Leonids, which produced a strong visual maximum around 04h-05h UT on November 18/19 that was recorded much less clearly by radio; radio and visual reports from the Geminids, with a note regarding NASA-observed Geminid lunar impact flashes; and the Ursid outburst recorded by various techniques on December 22.
Inage, Kazuhide; Orita, Sumihisa; Yamauchi, Kazuyo; Suzuki, Takane; Suzuki, Miyako; Sakuma, Yoshihiro; Kubota, Go; Oikawa, Yasuhiro; Sainoh, Takeshi; Sato, Jun; Fujimoto, Kazuki; Shiga, Yasuhiro; Abe, Koki; Kanamoto, Hirohito; Inoue, Masahiro; Kinoshita, Hideyuki; Takahashi, Kazuhisa; Ohtori, Seiji
2016-08-01
Retrospective study. To determine whether low-dose tramadol plus non-steroidal anti-inflammatory drug combination therapy could prevent the transition of acute low back pain to chronic low back pain. Inadequately treated early low back pain transitions to chronic low back pain in approximately 30% of affected individuals. The administration of non-steroidal anti-inflammatory drugs is effective for the treatment of low back pain in the early stages. However, the treatment of low back pain that is resistant to non-steroidal anti-inflammatory drugs is challenging. Patients who presented with acute low back pain at our hospital were considered for inclusion in this study. After the diagnosis of acute low back pain, non-steroidal anti-inflammatory drug administration was started. Forty patients with a visual analog scale score of >5 for low back pain 1 month after treatment were finally enrolled. The first 20 patients were included in a non-steroidal anti-inflammatory drug group, and they continued non-steroidal anti-inflammatory drug therapy for 1 month. The next 20 patients were included in a combination group, and they received low-dose tramadol plus non-steroidal anti-inflammatory drug combination therapy for 1 month. The incidence of adverse events and the improvement in the visual analog scale score at 2 months after the start of treatment were analyzed. No adverse events were observed in the non-steroidal anti-inflammatory drug group. In the combination group, administration was discontinued in 2 patients (10%) due to adverse events immediately following the start of tramadol administration. At 2 months, the improvement in the visual analog scale score was greater in the combination group than in the non-steroidal anti-inflammatory drug group (p<0.001). Low-dose tramadol plus non-steroidal anti-inflammatory drug combination therapy might decrease the incidence of adverse events and prevent the transition of acute low back pain to chronic low back pain.
High-speed multi-frame laser Schlieren for visualization of explosive events
NASA Astrophysics Data System (ADS)
Clarke, S. A.; Murphy, M. J.; Landon, C. D.; Mason, T. A.; Adrian, R. J.; Akinci, A. A.; Martinez, M. E.; Thomas, K. A.
2007-09-01
High-Speed Multi-Frame Laser Schlieren is used for visualization of a range of explosive and non-explosive events. Schlieren is a well-known technique for visualizing shock phenomena in transparent media. Laser backlighting and a framing camera allow for Schlieren images with very short (down to 5 ns) exposure times, band-pass filtering to block out explosive self-light, and 14 frames of a single explosive event. This diagnostic has been applied to several explosive initiation events, such as exploding bridgewires (EBW), exploding foil initiators (EFI, or slappers), direct optical initiation (DOI), and electrostatic discharge (ESD). Additionally, a series of tests has been performed on "cut-back" detonators with varying initial pressing (IP) heights. We have also used this diagnostic to visualize a range of EBW, EFI, and DOI full-up detonators. The setup has also been used to visualize a range of other explosive events, such as explosively driven metal shock experiments and explosively driven microjets. Future applications to other explosive events, such as boosters and IHE booster evaluation, will be discussed. Finite element codes (EPIC, CTH) have been used to analyze the schlieren images to determine likely boundary or initial conditions and the temporal-spatial pressure profile across the output face of the detonator. These experiments are part of a phased plan to understand the evolution of detonation in a detonator, from the initiation shock through run-to-detonation and full detonation to the transition to the booster and booster detonation.
Biasing spatial attention with semantic information: an event coding approach.
Amer, Tarek; Gozli, Davood G; Pratt, Jay
2017-04-21
We investigated the influence of conceptual processing on visual attention from the standpoint of the Theory of Event Coding (TEC). The theory makes two predictions: first, an important factor in determining the influence of event 1 on the processing of event 2 is whether the features of event 1 are bound into a unified representation (i.e., selection or retrieval of event 1). Second, whether processing the two events facilitates or interferes with each other should depend on the extent to which their constituent features overlap. In two experiments, participants performed a visual-attention cueing task in which the visual target (event 2) was preceded by a relevant or irrelevant explicit (e.g., "UP") or implicit (e.g., "HAPPY") spatial-conceptual cue (event 1). Consistent with TEC, we found that relevant explicit cues (which featurally overlap with the target to a greater extent) and implicit cues (which featurally overlap to a lesser extent) respectively facilitated and interfered with target processing at compatible locations. Irrelevant explicit and implicit cues, on the other hand, both facilitated target processing, presumably because they were less likely to be selected or retrieved as an integrated and unified event file. We argue that such effects, often described as "attentional cueing", are better accounted for within the event coding framework.
NASA Astrophysics Data System (ADS)
Bianchi, R. M.; Boudreau, J.; Konstantinidis, N.; Martyniuk, A. C.; Moyse, E.; Thomas, J.; Waugh, B. M.; Yallup, D. P.; ATLAS Collaboration
2017-10-01
In their early days, HEP experiments used photographic images both to record and store experimental data and to illustrate their findings. As the experiments evolved, they needed new ways to visualize their data. With the availability of computer graphics, software packages to display event data and the detector geometry started to be developed. Here, an overview of the usage of event display tools in HEP is presented. The case of the ATLAS experiment is then considered in more detail, and two widely used event display packages are presented, Atlantis and VP1, focusing on the software technologies they employ, as well as their strengths, differences and their usage in the experiment: from physics analysis to detector development, and from online monitoring to outreach and communication. The other ATLAS visualization tools are also briefly presented, and future development plans and improvements in the ATLAS event display packages are discussed.
Conveying Global Circulation Patterns in HDTV
NASA Astrophysics Data System (ADS)
Gardiner, N.; Janowiak, J.; Kinzler, R.; Trakinski, V.
2006-12-01
The American Museum of Natural History has partnered with the National Centers for Environmental Prediction (NCEP) to educate general audiences about weather and climate using high definition video broadcasts built from half-hourly global mosaics of infrared (IR) data from five geostationary satellites. The dataset being featured was developed by NCEP to improve precipitation estimates from microwave data that have finer spatial resolution but poorer temporal coverage. The IR data span +/-60 degrees latitude and show circulation patterns at sufficient resolution to teach informal science center visitors about both weather and climate events and concepts. Design and editorial principles for this media program have been guided by lessons learned from production and annual updates of visualizations that cover eight themes in both biological and Earth system sciences. Two formative evaluations on two dates, including interviews and written surveys of 480 museum visitors ranging in age from 13 to over 60, helped refine the design and implementation of the weather and climate program and demonstrated that viewers understood the program's initial literacy objectives, including: (1) conveying the passage of time and currency of visualized data; (2) geographic relationships inherent to atmospheric circulation patterns; and (3) the authenticity of visualized data, i.e., their origin from earth-orbiting satellites. Surveys also indicated an interest and willingness to learn more about weather and climate principles and events. Expanded literacy goals guide ongoing, biweekly production and distribution of global cloud visualization pieces that reach combined audiences of approximately 10 million. Two more rounds of evaluation are planned over the next two years to assess the effectiveness of the media program in addressing these expanded literacy goals.
The Allocation of Visual Attention in Multimedia Search Interfaces
ERIC Educational Resources Information Center
Hughes, Edith Allen
2017-01-01
Multimedia analysts are challenged by the massive numbers of unconstrained video clips generated daily. Such clips can include any possible scene and events, and generally have limited quality control. Analysts who must work with such data are overwhelmed by its volume and lack of computational tools to probe it effectively. Even with advances…
Kennedy Space Center ITC-1 Internship Overview
NASA Technical Reports Server (NTRS)
Ni, Marcus
2011-01-01
As an intern for Priscilla Elfrey in the ITC-1 department, I was involved in many activities that have helped me to develop many new skills. I supported four different projects during my internship, which included the Center for Life Cycle Design (CfLCD), SISO Space Interoperability Smackdown, RTI Teacher Mentor Program, and the Discrete Event Simulation Integrated Visualization Environment Team (DIVE). I provided the CfLCD with web based research on cyber security initiatives involving simulation, education for young children, cloud computing, Otronicon, and Science, Technology, Engineering, and Mathematics (STEM) education initiatives. I also attended STEM meetings regarding simulation courses, and educational course enhancements. To further improve the SISO Simulation event, I provided observation feedback to the technical advisory board. I also helped to set up a chat federation for HLA. The third project involved the RTI Teacher Mentor program, which I helped to organize. Last, but not least, I worked with the DIVE team to develop new software to help visualize discrete event simulations. All of these projects have provided experience on an interdisciplinary level ranging from speech and communication to solving complex problems using math and science.
Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong
2013-01-01
This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirmed that audiovisual integration was also elicited even though auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than to information from either side. PMID:23799097
Homman-Ludiye, Jihane; Bourne, James A.
2014-01-01
The integration of the visual stimulus takes place at the level of the neocortex, organized in anatomically distinct and functionally unique areas. Primates, including humans, are heavily dependent on vision, with approximately 50% of their neocortical surface dedicated to visual processing, and possess many more visual areas than any other mammal, making them the model of choice to study visual cortical arealisation. However, in order to identify the mechanisms responsible for patterning the developing neocortex and specifying area identity, as well as to elucidate the events that have enabled the evolution of the complex primate visual cortex, it is essential to gain access to the cortical maps of alternative species. To this end, work in species including the mouse has driven the identification of cellular markers with area-specific expression profiles, the development of new tools to label connections, and technological advances in imaging techniques that enable the monitoring of cortical activity in behaving animals. In this review we present non-primate species that have contributed to elucidating the evolution and development of the visual cortex. We describe the current understanding of the mechanisms supporting the establishment of areal borders during development, gained mainly in the mouse thanks to the availability of genetically modified lines, as well as the limitations of the mouse model and the need for alternative species. PMID:25071460
Experiences with hypercube operating system instrumentation
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Rudolph, David C.
1989-01-01
The difficulties in conceptualizing the interactions among a large number of processors make it difficult both to identify the sources of inefficiencies and to determine how a parallel program could be made more efficient. This paper describes an instrumentation system that can trace the execution of distributed memory parallel programs by recording the occurrence of parallel program events. The resulting event traces can be used to compile summary statistics that provide a global view of program performance. In addition, visualization tools permit the graphic display of event traces. Visual presentation of performance data is particularly useful, indeed, necessary for large-scale parallel computers; the enormous volume of performance data mandates visual display.
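The trace-then-summarize workflow this abstract describes can be sketched in a few lines of Python; the class, method, and event names below are illustrative stand-ins, not the instrumentation system's actual API:

```python
import time
from collections import defaultdict

class EventTracer:
    """Minimal sketch of event tracing for a parallel program: record
    (timestamp, processor, event) tuples, then compile summary statistics
    that give a global view of program behavior."""

    def __init__(self):
        self.trace = []  # list of (timestamp, proc_id, event_name)

    def record(self, proc_id, event_name, timestamp=None):
        # A real tracer would use a low-overhead clock local to each node.
        if timestamp is None:
            timestamp = time.monotonic()
        self.trace.append((timestamp, proc_id, event_name))

    def summary(self):
        """Event counts per event name and per processor."""
        by_event = defaultdict(int)
        by_proc = defaultdict(int)
        for _, proc, name in self.trace:
            by_event[name] += 1
            by_proc[proc] += 1
        return dict(by_event), dict(by_proc)

tracer = EventTracer()
tracer.record(0, "send", 0.001)
tracer.record(1, "recv", 0.003)
tracer.record(0, "send", 0.005)
by_event, by_proc = tracer.summary()
```

The raw trace list would feed a visualization tool, while the summaries provide the aggregate statistics the abstract mentions.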
Weber-Fox, Christine; Hart, Laura J; Spruill, John E
2006-07-01
This study examined how school-aged children process different grammatical categories. Event-related brain potentials elicited by words in visually presented sentences were analyzed according to seven grammatical categories with naturally varying characteristics of linguistic functions, semantic features, and quantitative attributes of length and frequency. The categories included nouns, adjectives, verbs, pronouns, conjunctions, prepositions, and articles. The findings indicate that by the age of 9-10 years, children exhibit robust neural indicators differentiating grammatical categories; however, it is also evident that development of language processing is not yet adult-like at this age. The current findings are consistent with the hypothesis that for beginning readers a variety of cues and characteristics interact to affect processing of different grammatical categories and indicate the need to take into account linguistic functions, prosodic salience, and grammatical complexity as they relate to the development of language abilities.
Web-Based Interface for Command and Control of Network Sensors
NASA Technical Reports Server (NTRS)
Wallick, Michael N.; Doubleday, Joshua R.; Shams, Khawaja S.
2010-01-01
This software allows for the visualization and control of a network of sensors through a Web browser interface. It is currently being deployed for a network of sensors monitoring Mount St. Helens volcano; however, this innovation is generic enough that it can be deployed for any type of sensor Web. From this interface, the user is able to fully control and monitor the sensor Web. This includes, but is not limited to, sending "test" commands to individual sensors in the network, monitoring for real-world events, and reacting to those events.
Bioinformatics Analysis of Protein Phosphorylation in Plant Systems Biology Using P3DB.
Yao, Qiuming; Xu, Dong
2017-01-01
Protein phosphorylation is one of the most pervasive protein post-translational modification events in plant cells. It is involved in many plant biological processes, such as plant growth, organ development, and plant immunology, by regulating or switching signaling and metabolic pathways. High-throughput experimental methods like mass spectrometry can easily characterize hundreds to thousands of phosphorylation events in a single experiment. With the increasing volume of the data sets, the Plant Protein Phosphorylation DataBase (P3DB, http://p3db.org) provides a comprehensive, systematic, and interactive online platform to deposit, query, analyze, and visualize these phosphorylation events in many plant species. It stores protein phosphorylation sites in the context of identified mass spectra, phosphopeptides, and phosphoproteins contributed from various plant proteome studies. In addition, P3DB associates these plant phosphorylation sites with protein physicochemical information in the protein charts and tertiary structures, while various protein annotations from hierarchical kinase phosphatase families, protein domains, and gene ontology are also added into the database. P3DB not only provides rich information, but also interconnects and visualizes the data in networks, in a systems biology context. Currently, P3DB includes the KiC (Kinase Client) assay network, the protein-protein interaction network, the kinase-substrate network, the phosphatase-substrate network, and the protein domain co-occurrence network. All of these are available for querying and visualizing existing phosphorylation events. Although P3DB only hosts experimentally identified phosphorylation data, it provides a plant phosphorylation prediction model for any unknown queries on the fly. P3DB is an entry point for the plant phosphorylation community to deposit and visualize any customized data sets within this systems biology framework. Today, P3DB is one of the major bioinformatics platforms for protein phosphorylation in plant biology.
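A kinase-substrate network of the kind the abstract describes reduces naturally to an adjacency map annotated with phosphosites. The sketch below uses hypothetical kinase, substrate, and site names for illustration; these are not records drawn from P3DB:

```python
from collections import defaultdict

# Hypothetical (kinase, substrate, phosphosite) records; P3DB's real
# schema also links spectra, peptides, and protein annotations.
records = [
    ("KinaseA", "TargetX", "S86"),
    ("KinaseA", "TargetX", "S94"),
    ("KinaseB", "TargetX", "S86"),
    ("KinaseA", "TargetY", "S120"),
]

def build_network(records):
    """Kinase -> substrate adjacency, with the set of sites per edge."""
    net = defaultdict(lambda: defaultdict(set))
    for kinase, substrate, site in records:
        net[kinase][substrate].add(site)
    return net

net = build_network(records)
# Query: all substrates of KinaseA, and the sites it targets on TargetX.
substrates = sorted(net["KinaseA"])
sites = sorted(net["KinaseA"]["TargetX"])
```

The same structure, rendered as nodes and edges, is what a network-visualization front end would draw.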
Jalali, Subhadra; Balakrishnan, Divya; Zeynalova, Zarifa; Padhi, Tapas Ranjan; Rani, Padmaja Kumari
2013-07-01
To report serious adverse events and long-term outcomes of our initial experience with intraocular bevacizumab in retinopathy of prematurity (ROP). Consecutive vascularly active ROP cases treated with bevacizumab, in addition to laser and surgery, were analysed retrospectively from a prospective computerised ROP database. The primary efficacy outcome was regression of new vessels. Secondary outcomes included the anatomic and visual status. Serious systemic and ocular adverse events were documented. 24 ROP eyes in 13 babies received a single intraocular bevacizumab injection for severe stage 3 plus after failed laser (seven eyes), stage 4A plus (eight eyes), and stage 4B/5 plus (nine eyes). The drug was injected intravitreally in 23 eyes and intracamerally in one eye. New vessels regressed in all eyes. Vision salvage in 14 of 24 eyes and no serious neurodevelopmental abnormalities were noted over up to 60 months (mean 30.7 months) of follow-up. Complications included macular hole and retinal breaks causing rhegmatogenous retinal detachment (one eye); bilateral, progressive vascular attenuation, perivascular exudation and optic atrophy in one baby; and progression of detachment bilaterally to stage 5 in one baby with missed follow-up. One baby who received an intracameral injection developed hepatic dysfunction. One eye of this baby also showed a large choroidal rupture. Though intraocular bevacizumab, along with laser and surgery, salvaged vision in many otherwise progressive cases of ROP, vigilance and reporting of serious adverse events are essential for the future rationalised use of the drug. We report one systemic and four ocular adverse events that require consideration in future use of the drug.
Visualization of Spatio-Temporal Relations in Movement Event Using Multi-View
NASA Astrophysics Data System (ADS)
Zheng, K.; Gu, D.; Fang, F.; Wang, Y.; Liu, H.; Zhao, W.; Zhang, M.; Li, Q.
2017-09-01
Spatio-temporal relations among movement events extracted from temporally varying trajectory data can provide useful information about the evolution of individual or collective movers, as well as their interactions with their spatial and temporal contexts. However, the pure statistical tools commonly used by analysts pose many difficulties, due to the large number of attributes embedded in multi-scale and multi-semantic trajectory data. The need for models that operate at multiple scales to search for relations at different locations within time and space, as well as intuitively interpret what these relations mean, also presents challenges. Since analysts do not know where or when these relevant spatio-temporal relations might emerge, these models must compute statistical summaries of multiple attributes at different granularities. In this paper, we propose a multi-view approach to visualize the spatio-temporal relations among movement events. We describe a method for visualizing movement events and spatio-temporal relations that uses multiple displays. A visual interface is presented, and the user can interactively select or filter spatial and temporal extents to guide the knowledge discovery process. We also demonstrate how this approach can help analysts to derive and explain the spatio-temporal relations of movement events from taxi trajectory data.
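The interactive select-or-filter step described above amounts to brushing events by spatial and temporal extent. A minimal sketch, with made-up taxi-like events and coordinate ranges standing in for a user's selection in the visual interface:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MovementEvent:
    x: float      # e.g. longitude
    y: float      # e.g. latitude
    t: float      # timestamp in seconds
    label: str    # event type, e.g. "pick-up" or "drop-off"

Range = Tuple[float, float]

def filter_events(events: List[MovementEvent],
                  x_range: Range, y_range: Range, t_range: Range):
    """Keep only events inside the given spatial and temporal extents,
    mimicking interactive brushing across linked views."""
    (x0, x1), (y0, y1), (t0, t1) = x_range, y_range, t_range
    return [e for e in events
            if x0 <= e.x <= x1 and y0 <= e.y <= y1 and t0 <= e.t <= t1]

# Illustrative trajectory-derived events (coordinates are arbitrary):
events = [
    MovementEvent(116.40, 39.90, 100.0, "pick-up"),
    MovementEvent(116.45, 39.95, 220.0, "drop-off"),
    MovementEvent(116.70, 40.10, 150.0, "pick-up"),
]
selected = filter_events(events, (116.3, 116.5), (39.8, 40.0), (0.0, 200.0))
```

Each linked view (map, timeline, statistics panel) would redraw from `selected` after every brushing interaction.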
Efficient Prediction of Low-Visibility Events at Airports Using Machine-Learning Regression
NASA Astrophysics Data System (ADS)
Cornejo-Bueno, L.; Casanova-Mateo, C.; Sanz-Justo, J.; Cerro-Prada, E.; Salcedo-Sanz, S.
2017-11-01
We address the prediction of low-visibility events at airports using machine-learning regression. The proposed model successfully forecasts low-visibility events in terms of the runway visual range at the airport, using support-vector regression, neural networks (multi-layer perceptrons and extreme-learning machines) and Gaussian-process algorithms. We assess the performance of these algorithms on real data collected at the Valladolid airport, Spain. We also present a study of the atmospheric variables measured at a nearby tower related to low-visibility atmospheric conditions, since they are considered as the inputs of the different regressors. A pre-processing procedure of these input variables with wavelet transforms is also described. The results show that the proposed machine-learning algorithms are able to predict low-visibility events well. The Gaussian process is the best algorithm among those analyzed, achieving a correct classification rate of over 98% for low-visibility events when the runway visual range is >1000 m, and about 80% below this threshold. The performance of all the machine-learning algorithms tested is clearly degraded in extreme low-visibility conditions (<500 m). However, we show improved results for all the methods when data from a neighbouring meteorological tower are included, and also with a pre-processing scheme using a wavelet transform. Also presented are results on algorithm performance in daytime and nighttime conditions, and for different prediction time horizons.
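As an illustration of the wavelet pre-processing step mentioned above, here is a single-level Haar discrete wavelet transform in plain Python. The abstract does not specify the wavelet family used, so the Haar choice and the toy visibility series below are assumptions made for the sketch:

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform. Returns
    (approximation, detail) coefficient lists; the signal length must
    be even. Detail coefficients localize abrupt changes, which is why
    wavelets are a natural pre-processing step for visibility series."""
    if len(signal) % 2 != 0:
        raise ValueError("signal length must be even")
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

# A slowly varying toy "runway visual range" series with one abrupt drop:
series = [1200.0, 1180.0, 1150.0, 300.0, 280.0, 290.0, 310.0, 320.0]
approx, detail = haar_dwt(series)
# The large detail coefficient marks the sharp visibility drop; the
# approximation coefficients are the smoothed inputs a regressor would use.
```

Being orthonormal, the Haar transform preserves the signal's energy, so no information is lost by feeding a regressor the coefficients instead of the raw series.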
Determinants of structural choice in visually situated sentence production.
Myachykov, Andriy; Garrod, Simon; Scheepers, Christoph
2012-11-01
Three experiments investigated how perceptual, structural, and lexical cues affect structural choices during English transitive sentence production. Participants described transitive events under combinations of visual cueing of attention (toward either agent or patient) and structural priming with and without semantic match between the notional verb in the prime and the target event. Speakers had a stronger preference for passive-voice sentences (1) when their attention was directed to the patient, (2) upon reading a passive-voice prime, and (3) when the verb in the prime matched the target event. The verb-match effect was the by-product of an interaction between visual cueing and verb match: the increase in the proportion of passive-voice responses with matching verbs was limited to the agent-cued condition. Persistence of visual cueing effects in the presence of both structural and lexical cues suggests a strong coupling between referent-directed visual attention and Subject assignment in a spoken sentence. Copyright © 2012 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Weber-Fox, Christine; Hart, Laura J.; Spruill, John E., III
2006-01-01
This study examined how school-aged children process different grammatical categories. Event-related brain potentials elicited by words in visually presented sentences were analyzed according to seven grammatical categories with naturally varying characteristics of linguistic functions, semantic features, and quantitative attributes of length and…
IRIS DMC products help explore the Tohoku earthquake
NASA Astrophysics Data System (ADS)
Trabant, C.; Hutko, A. R.; Bahavar, M.; Ahern, T. K.; Benson, R. B.; Casey, R.
2011-12-01
Within two hours after the great March 11, 2011 Tohoku earthquake, the IRIS DMC started publishing automated data products through its Searchable Product Depository (SPUD), which provides quick viewing of many aspects of the data and preliminary analysis of this great earthquake. These products are part of the DMC's data product development effort, intended to serve many purposes: stepping-stones for future research projects, data visualizations, data characterization, research result comparisons, as well as outreach material. Our current and soon-to-be-released products that allow users to explore this and other global M>6.0 events include: (1) Event Plots, a suite of maps, record sections, regional vespagrams and P-coda stacks; (2) USArray Ground Motion Visualizations, which show the vertical and horizontal global seismic wavefield sweeping across USArray, including minor- and major-arc surface waves and their polarizations; (3) back-projection movies that show the time history of short-period energy from the rupture; (4) R1 source-time functions that show approximate duration and source directivity; and (5) aftershock sequence maps and statistics movies based on NEIC alerts that self-update every hour in the first few days following the mainshock. Higher-order information about the Tohoku event that can be inferred from these products includes a rupture duration of order 150 s (P-coda stacks, back-projections, R1 STFs), a rupture extending approximately 400 km along strike, primarily towards the south (back-projections, R1 STFs, aftershock animation), and a very low rupture velocity (back-projections, R1 STFs). All of our event-based products are automated and consistently produced shortly after the event so that they may serve as familiar baselines for the seismology research community. More details on these and other existing products are available at: http://www.iris.edu/dms/products/
Towards a Systematic Search for Triggered Seismic Events in the USA
NASA Astrophysics Data System (ADS)
Tang, V.; Chao, K.; Van der Lee, S.
2017-12-01
Dynamic triggering of small earthquakes and tectonic tremor by small stress variations associated with passing surface waves from large-magnitude teleseismic earthquakes has been observed in seismically active regions in the western US. Local stress variations as small as 5-10 kPa can suffice to advance slip on local faults. Observations of such triggered events share certain distinct characteristics. With an eye towards an eventual application of machine learning, we began a systematic search for dynamically triggered seismic events in the USA that have these characteristics. Such a systematic survey has the potential to help us better understand the fundamental process of dynamic triggering and the hazards implied by it. Using visual inspection on top of timing- and frequency-based selection criteria for these seismic phenomena, our search yielded numerous false positives, indicating the challenge posed by moving from ad-hoc observations of dynamic triggering to a systematic search that also includes a catalog of non-triggering cases, even when sufficient stress variations are supplied. Our search includes a dozen large earthquakes that occurred during the tenure of USArray. One of these earthquakes (the 11 April 2012 Mw 8.6 Sumatra earthquake), for example, was observed by USArray-TA stations in the Midwest and other station networks (such as PB and UW), and yielded candidate triggered events at 413 stations. We kept 79 of these observations after closer visual inspection suggested distinct P and S arrivals from a local earthquake, or a tremor modulation with the same period as the surface wave, among other criteria. We confirmed triggered seismic events at 63 stations along the western plate boundary, where triggered events have previously been observed. We also newly found triggered tremor sources in eastern Oregon and Yellowstone, and candidate triggered earthquake sources in New Mexico and Minnesota. Learning whether the 14 remaining candidates are confirmed as triggered events will provide constraints on the state of intraplate stress in the USA. Learning what it takes to discriminate between triggered events and false positives will be important for future monitoring practices.
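A common first-pass amplitude screen for candidate local events of the kind searched for here is the short-term/long-term average (STA/LTA) ratio. The sketch below uses made-up numbers and covers only the amplitude criterion; it does not implement the surface-wave timing, frequency, or visual-inspection steps the abstract describes:

```python
def sta_lta(samples, sta_len, lta_len):
    """Short-term over long-term average of absolute amplitude, computed
    at each position once a full long-term window is available. A sharp
    rise in the ratio flags a candidate local event in continuous data."""
    ratios = []
    for i in range(lta_len, len(samples) + 1):
        sta = sum(abs(s) for s in samples[i - sta_len:i]) / sta_len
        lta = sum(abs(s) for s in samples[i - lta_len:i]) / lta_len
        ratios.append(sta / lta if lta > 0 else 0.0)
    return ratios

# Quiet background followed by a burst (a would-be triggered event):
trace = [1.0] * 20 + [10.0] * 5
ratios = sta_lta(trace, sta_len=2, lta_len=10)
# The ratio stays near 1 in the quiet section and spikes at the burst onset.
```

In practice a detection is declared when the ratio crosses a tuned threshold, and each detection would then be checked against the teleseismic surface-wave arrival window.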
2014-01-01
Background: Neurofibromatosis type 1 (NF1) affects several areas of cognitive function, including visual processing and attention. We investigated the neural mechanisms underlying the visual deficits of children and adolescents with NF1 by studying visual evoked potentials (VEPs) and brain oscillations during visual stimulation and rest periods. Methods: Electroencephalogram/event-related potential (EEG/ERP) responses were measured during visual processing (NF1 n = 17; controls n = 19) and idle periods with eyes closed and eyes open (NF1 n = 12; controls n = 14). Visual stimulation was chosen to bias activation of the three detection mechanisms: achromatic, red-green and blue-yellow. Results: We found significant differences between the groups for late chromatic VEPs and a specific enhancement of parieto-occipital alpha amplitude both during visual stimulation and idle periods. Alpha modulation and the negative influence of alpha oscillations on visual performance were found in both groups. Conclusions: Our findings suggest abnormal later stages of visual processing and enhanced amplitude of alpha oscillations, supporting the existence of deficits in basic sensory processing in NF1. Given the link between alpha oscillations, visual perception and attention, these results indicate a neural mechanism that might underlie the visual sensitivity deficits and increased lapses of attention observed in individuals with NF1. PMID:24559228
2013-06-01
Visualizing Patterns of Drug Prescriptions with EventFlow: A Pilot Study of Asthma Medications in the ... asthmatics within the Military Health System (MHS). Visualizing the patterns of asthma medication use surrounding a LABA prescription is a quick way to ... random sample of 100 asthma patients under age 65 with a new LABA prescription from January 1, 2006-March 1, 2010 in MHS healthcare claims. Analysis was
NASA Astrophysics Data System (ADS)
Inoue, Y.; Tsuruoka, K.; Arikawa, M.
2014-04-01
In this paper, we propose a user interface that displays visual animations on geographic maps and timelines to depict historical stories by representing causal relationships among events in time series. We have been developing an experimental software system for the spatial-temporal visualization of historical stories on tablet computers. The proposed system helps people learn historical stories effectively through visual animations based on hierarchical structures of timelines and maps at different scales.
Multi-chamber nucleic acid amplification and detection device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dugan, Lawrence
A nucleic acid amplification and detection device includes an amplification cartridge with a plurality of reaction chambers for containing an amplification reagent and a visual detection reagent, and a plurality of optically transparent view ports for viewing inside the reaction chambers. The cartridge also includes a sample receiving port which is adapted to receive a fluid sample and is fluidically connected to distribute the fluid sample to the reaction chambers, and in one embodiment, a plunger is carried by the cartridge for occluding fluidic communication to the reaction chambers. The device also includes a heating apparatus having a heating element which is activated by a controller to generate heat when a trigger event is detected. The heating apparatus includes a cartridge-mounting section which positions a cartridge in thermal communication with the heating element so that visual changes to the contents of the reaction chambers are viewable through the view ports.
Recognition memory is modulated by visual similarity.
Yago, Elena; Ishai, Alumit
2006-06-01
We used event-related fMRI to test whether recognition memory depends on visual similarity between familiar prototypes and novel exemplars. Subjects memorized portraits, landscapes, and abstract compositions by six painters with a unique style, and later performed a memory recognition task. The prototypes were presented with new exemplars that were either visually similar or dissimilar. Behaviorally, novel, dissimilar items were detected faster and more accurately. We found activation in a distributed cortical network that included face- and object-selective regions in the visual cortex, where familiar prototypes evoked stronger responses than new exemplars; attention-related regions in parietal cortex, where responses elicited by new exemplars were reduced with decreased similarity to the prototypes; and the hippocampus and memory-related regions in parietal and prefrontal cortices, where stronger responses were evoked by the dissimilar exemplars. Our findings suggest that recognition memory is mediated by classification of novel exemplars as a match or a mismatch, based on their visual similarity to familiar prototypes.
A Catalog of Coronal "EIT Wave" Transients
NASA Technical Reports Server (NTRS)
Thompson, B. J.; Myers, D. C.
2009-01-01
Solar and Heliospheric Observatory (SOHO) Extreme ultraviolet Imaging Telescope (EIT) data have been visually searched for coronal "EIT wave" transients over the period beginning 1997 March 24 and extending through 1998 June 24. The dates covered start at the beginning of regular high-cadence (more than one image every 20 minutes) observations and end at the four-month interruption of SOHO observations in mid-1998. One hundred seventy-six events are included in this catalog. The observations range from "candidate" events, which were either weak or had insufficient data coverage, to events which were well defined and clearly distinguishable in the data. Included in the catalog are the times of the EIT images in which the events are observed, diagrams indicating the observed locations of the wave fronts and associated active regions, and the speeds of the wave fronts. The measured speeds of the wave fronts varied from less than 50 to over 700 km s^-1, with "typical" speeds of 200-400 km s^-1.
Content Representation in the Human Medial Temporal Lobe
Liang, Jackson C.; Wagner, Anthony D.
2013-01-01
Current theories of medial temporal lobe (MTL) function focus on event content as an important organizational principle that differentiates MTL subregions. Perirhinal and parahippocampal cortices may play content-specific roles in memory, whereas hippocampal processing is alternately hypothesized to be content specific or content general. Despite anatomical evidence for content-specific MTL pathways, empirical data for content-based MTL subregional dissociations are mixed. Here, we combined functional magnetic resonance imaging with multiple statistical approaches to characterize MTL subregional responses to different classes of novel event content (faces, scenes, spoken words, sounds, visual words). Univariate analyses revealed that responses to novel faces and scenes were distributed across the anterior–posterior axis of MTL cortex, with face responses distributed more anteriorly than scene responses. Moreover, multivariate pattern analyses of perirhinal and parahippocampal data revealed spatially organized representational codes for multiple content classes, including nonpreferred visual and auditory stimuli. In contrast, anterior hippocampal responses were content general, with less accurate overall pattern classification relative to MTL cortex. Finally, posterior hippocampal activation patterns consistently discriminated scenes more accurately than other forms of content. Collectively, our findings indicate differential contributions of MTL subregions to event representation via a distributed code along the anterior–posterior axis of MTL that depends on the nature of event content. PMID:22275474
Aptel, Florent; Aryal-Charles, Nischal; Tamisier, Renaud; Pépin, Jean-Louis; Lesoin, Antoine; Chiquet, Christophe
2017-06-01
To evaluate whether obstructive sleep apnea (OSA) is responsible for the visual field defects found in the fellow eyes of patients with non-arteritic ischemic optic neuropathy (NAION). Prospective cross-sectional study. The visual fields of the fellow eyes of NAION subjects with OSA were compared to the visual fields of control OSA patients matched for OSA severity. All patients underwent comprehensive ophthalmological and general examination including Humphrey 24-2 SITA-Standard visual field testing and polysomnography. Visual field defects were classified according to the Ischemic Optic Neuropathy Decompression Trial (IONDT) classification. From a cohort of 78 consecutive subjects with NAION, 34 unaffected fellow eyes were compared to 34 control eyes of subjects matched for OSA severity (apnea-hypopnea index [AHI] 35.5 ± 11.6 vs 35.4 ± 9.4 events per hour, respectively, p = 0.63). After adjustment for age and body mass index, all visual field parameters were significantly different between the NAION fellow eyes and those of the control OSA group, including mean deviation (-4.5 ± 3.7 vs -1.3 ± 1.8 dB, respectively, p < 0.05), visual field index (91.6 ± 10 vs 97.4 ± 3.5%, respectively, p = 0.002), pattern standard deviation (3.7 ± 2.3 vs 2.5 ± 2 dB, respectively, p = 0.015), and number of subjects with at least one defect on the IONDT classification (20 vs 10, respectively, p < 0.05). OSA alone does not explain the visual field defects frequently found in the fellow eyes of NAION patients.
Aging memories: differential decay of episodic memory components.
Talamini, Lucia M; Gorree, Eva
2012-05-17
Some memories about events can persist for decades, even a lifetime. However, recent memories incorporate rich sensory information, including knowledge on the spatial and temporal ordering of event features, while old memories typically lack this "filmic" quality. We suggest that this apparent change in the nature of memories may reflect a preferential loss of hippocampus-dependent, configurational information over more cortically based memory components, including memory for individual objects. The current study systematically tests this hypothesis, using a new paradigm that allows the contemporaneous assessment of memory for objects, object pairings, and object-position conjunctions. Retention of each memory component was tested, at multiple intervals, up to 3 mo following encoding. The three memory subtasks adopted the same retrieval paradigm and were matched for initial difficulty. Results show differential decay of the tested episodic memory components, whereby memory for configurational aspects of a scene (objects' co-occurrence and object position) decays faster than memory for featured objects. Interestingly, memory requiring a visually detailed object representation decays at a similar rate as global object recognition, arguing against interpretations based on task difficulty and against the notion that (visual) detail is forgotten preferentially. These findings show that memories undergo qualitative changes as they age. More specifically, event memories become less configurational over time, preferentially losing some of the higher order associations that are dependent on the hippocampus for initial fast encoding. Implications for theories of long-term memory are discussed.
IsoPlot: a database for comparison of mRNA isoforms in fruit fly and mosquitoes
Ng, I-Man; Tsai, Shang-Chi
2017-01-01
Abstract Alternative splicing (AS), a mechanism by which different forms of mature messenger RNAs (mRNAs) are generated from the same gene, widely occurs in the metazoan genomes. Knowledge about isoform variants and abundance is crucial for understanding the functional context in the molecular diversity of the species. With increasing transcriptome data of model and non-model species, a database for visualization and comparison of AS events with up-to-date information is needed for further research. IsoPlot is a publicly available database with visualization tools for exploration of AS events, including three major species of mosquitoes, Aedes aegypti, Anopheles gambiae, and Culex quinquefasciatus, and fruit fly Drosophila melanogaster, the model insect species. IsoPlot includes not only 88,663 annotated transcripts but also 17,037 newly predicted transcripts from massive transcriptome data at different developmental stages of mosquitoes. The web interface enables users to explore the patterns and abundance of isoforms in different experimental conditions as well as cross-species sequence comparison of orthologous transcripts. IsoPlot provides a platform for researchers to access comprehensive information about AS events in mosquitoes and fruit fly. Our database is available on the web via an interactive user interface with an intuitive graphical design, which is applicable for the comparison of complex isoforms within or between species. Database URL: http://isoplot.iis.sinica.edu.tw/ PMID:29220459
Looking Inward and Back: Real-Time Monitoring of Visual Working Memories
ERIC Educational Resources Information Center
Suchow, Jordan W.; Fougnie, Daryl; Alvarez, George A.
2017-01-01
Confidence in our memories is influenced by many factors, including beliefs about the perceptibility or memorability of certain kinds of objects and events, as well as knowledge about our skill sets, habits, and experiences. Notoriously, our knowledge and beliefs about memory can lead us astray, causing us to be overly confident in eyewitness…
The informativity of sound modulates crossmodal facilitation of visual discrimination: a fMRI study.
Li, Qi; Yu, Hongtao; Li, Xiujun; Sun, Hongzan; Yang, Jingjing; Li, Chunlin
2017-01-18
Many studies have investigated behavioral crossmodal facilitation when a visual stimulus is accompanied by a concurrent task-irrelevant sound. Lippert and colleagues reported that a concurrent task-irrelevant sound reduced the uncertainty of the timing of the visual display and improved perceptual responses (informative sound). However, the neural mechanism by which the informativity of sound affects crossmodal facilitation of visual discrimination remained unclear. In this study, we used event-related functional MRI to investigate the neural mechanisms underlying the role of informativity of sound in crossmodal facilitation of visual discrimination. Significantly faster reaction times were observed when there was an informative relationship between auditory and visual stimuli. The functional MRI results showed sound informativity-induced activation enhancement in regions including the left fusiform gyrus and the right lateral occipital complex. Further correlation analysis showed that activity in the right lateral occipital complex was significantly correlated with the behavioral benefit in reaction times. This suggests that this region was modulated by the informative relationship within audiovisual stimuli that was learnt during the experiment, resulting in late-stage multisensory integration and enhanced behavioral responses.
Visual Prediction in Infancy: What Is the Association with Later Vocabulary?
ERIC Educational Resources Information Center
Ellis, Erica M.; Gonzalez, Marybel Robledo; Deák, Gedeon O.
2014-01-01
Young infants can learn statistical regularities and patterns in sequences of events. Studies have demonstrated a relationship between early sequence learning skills and later development of cognitive and language skills. We investigated the relation between infants' visual response speed to novel event sequences, and their later receptive and…
Visual imagery in autobiographical memory: The role of repeated retrieval in shifting perspective
Butler, Andrew C.; Rice, Heather J.; Wooldridge, Cynthia L.; Rubin, David C.
2016-01-01
Recent memories are generally recalled from a first-person perspective whereas older memories are often recalled from a third-person perspective. We investigated how repeated retrieval affects the availability of visual information, and whether it could explain the observed shift in perspective with time. In Experiment 1, participants performed mini-events and nominated memories of recent autobiographical events in response to cue words. Next, they described their memory for each event and rated its phenomenological characteristics. Over the following three weeks, they repeatedly retrieved half of the mini-event and cue-word memories. No instructions were given about how to retrieve the memories. In Experiment 2, participants were asked to adopt either a first- or third-person perspective during retrieval. One month later, participants retrieved all of the memories and again provided phenomenology ratings. When first-person visual details from the event were repeatedly retrieved, this information was retained better and the shift in perspective was slowed. PMID:27064539
Neural Correlates of Intersensory Processing in Five-Month-Old Infants
Reynolds, Greg D.; Bahrick, Lorraine E.; Lickliter, Robert; Guy, Maggie W.
2014-01-01
Two experiments assessing event-related potentials in 5-month-old infants were conducted to examine neural correlates of attentional salience and efficiency of processing of a visual event (woman speaking) paired with redundant (synchronous) speech, nonredundant (asynchronous) speech, or no speech. In Experiment 1, the Nc component associated with attentional salience was greater in amplitude following synchronous audiovisual as compared with asynchronous audiovisual and unimodal visual presentations. A block design was utilized in Experiment 2 to examine efficiency of processing of a visual event. Only infants exposed to synchronous audiovisual speech demonstrated a significant reduction in amplitude of the late slow wave associated with successful stimulus processing and recognition memory from early to late blocks of trials. These findings indicate that events that provide intersensory redundancy are associated with enhanced neural responsiveness indicative of greater attentional salience and more efficient stimulus processing as compared with the same events when they provide no intersensory redundancy in 5-month-old infants. PMID:23423948
pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis.
Giannakopoulos, Theodoros
2015-01-01
Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automations and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures including: feature extraction, classification of audio signals, supervised and unsupervised segmentation and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of the implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has already been used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation and health applications (e.g. monitoring eating habits). The feedback provided from all these particular audio applications has led to practical enhancement of the library.
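The frame-level feature extraction this kind of library performs can be illustrated without the library itself. Below is a minimal stdlib sketch (not pyAudioAnalysis's actual API) of two classic short-term features used in audio event recognition, energy and zero-crossing rate, computed on a synthetic tone:

```python
import math

def short_term_features(signal, frame_len, hop_len):
    """Frame-level short-term energy and zero-crossing rate,
    two classic features used in audio event recognition."""
    features = []
    for start in range(0, len(signal) - frame_len + 1, hop_len):
        frame = signal[start:start + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (frame_len - 1)
        features.append((energy, zcr))
    return features

# Synthetic 100 Hz tone sampled at 8 kHz, 50 ms frames with 25 ms hop.
fs = 8000
tone = [math.sin(2 * math.pi * 100 * n / fs) for n in range(fs)]
feats = short_term_features(tone, frame_len=fs // 20, hop_len=fs // 40)
print(len(feats), feats[0])
```

A real pipeline would then feed such per-frame feature vectors to a classifier or segmenter, which is the role the library's higher-level routines play.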
A Review of Visual Representations of Physiologic Data
2016-01-01
Background Physiological data is derived from electrodes attached directly to patients. Modern patient monitors are capable of sampling data at rates of several million bits per hour. Hence the potential for cognitive threat arising from information overload and diminished situational awareness becomes increasingly relevant. A systematic review was conducted to identify novel visual representations of physiologic data that address cognitive, analytic, and monitoring requirements in critical care environments. Objective The aims of this review were to identify knowledge pertaining to (1) support for conveying event information via tri-event parameters; (2) identification of the use of visual variables across all physiologic representations; (3) aspects of effective design principles and methodology; (4) frequency of expert consultations; (5) support for user engagement and identifying heuristics for future developments. Methods A review was completed of papers published as of August 2016. Titles were first collected and analyzed against inclusion criteria. Abstracts resulting from the first pass were then analyzed to produce a final set of full papers. Each full paper was passed through a data extraction form eliciting data for comparative analysis. Results In total, 39 full papers met all criteria and were selected for full review. Results revealed great diversity in visual representations of physiological data. Visual representations spanned four groups: tabular, graph-based, object-based, and metaphoric displays. The metaphoric display was the most popular (n=19), followed by waveform displays typical of the single-sensor-single-indicator paradigm (n=18), and finally object displays (n=9) that utilized spatiotemporal elements to highlight changes in physiologic status. Results obtained from experiments and evaluations suggest that specifics related to the optimal use of visual variables, such as color, shape, size, and texture, have not been fully understood. Relationships between outcomes and the users' involvement in the design process also require further investigation. A very limited subset of visual representations (n=3) supports interactive functionality for basic analysis, while only one display allows the user to perform analysis spanning more than one patient. Conclusions Results from the review suggest positive outcomes when visual representations extend beyond the typical waveform displays; however, numerous challenges remain. In particular, the challenge of extensibility limits applicability to certain subsets or locations, the challenge of interoperability limits expressiveness beyond physiologic data, and the challenge of instantaneity limits the extent of interactive user engagement. PMID:27872033
Sounds Activate Visual Cortex and Improve Visual Discrimination
Störmer, Viola S.; Martinez, Antigona; McDonald, John J.; Hillyard, Steven A.
2014-01-01
A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. PMID:25031419
Sinha, Subijay
2007-01-01
Background: To report the anatomic and visual acuity response after intravitreal bevacizumab (Avastin) in patients with diffuse diabetic macular edema. Design: Prospective, interventional case series study. Materials and Methods: This study included 20 eyes of patients with metabolically stable diabetes mellitus and diffuse diabetic macular edema (mean age 59 years) who were treated with two intravitreal injections of bevacizumab 1.25 mg in 0.05 ml six weeks apart. Main outcome measures were 1) early treatment diabetic retinopathy study visual acuity and 2) central macular thickness by optical coherence tomography imaging. Each was evaluated at baseline and follow-up visits. Results: All the eyes had received some form of laser photocoagulation previously (not less than six months earlier), but all of these patients had persistent diffuse macular edema with no improvement in visual acuity. All the patients received two injections of bevacizumab at an interval of six weeks per eye. No adverse events were observed, including endophthalmitis, inflammation, increased intraocular pressure or thromboembolic events, in any patient. The mean baseline acuity was 20/494 (logMAR = 1.338±0.455) and the mean acuity at three months following the second intravitreal injection was 20/295 (logMAR = 1.094±0.254), a difference that was highly significant (P = 0.008). The mean central macular thickness at baseline was 492 µm, which decreased to 369 µm (P = 0.001) at the end of six months. Conclusions: Initial treatment results of patients with diffuse diabetic macular edema not responding to previous photocoagulation did not reveal any short-term safety concerns. Intravitreal bevacizumab resulted in a significant decrease in macular thickness and improvement in visual acuity at three months, but the effect was somewhat blunted, though still statistically significant, at the end of six months. PMID:17951903
ATLAS EventIndex monitoring system using the Kibana analytics and visualization platform
NASA Astrophysics Data System (ADS)
Barberis, D.; Cárdenas Zárate, S. E.; Favareto, A.; Fernandez Casani, A.; Gallas, E. J.; Garcia Montoro, C.; Gonzalez de la Hoz, S.; Hrivnac, J.; Malon, D.; Prokoshin, F.; Salt, J.; Sanchez, J.; Toebbicke, R.; Yuan, R.; ATLAS Collaboration
2016-10-01
The ATLAS EventIndex is a data catalogue system that stores event-related metadata for all (real and simulated) ATLAS events, on all processing stages. As it consists of different components that depend on other applications (such as distributed storage, and different sources of information) we need to monitor the conditions of many heterogeneous subsystems, to make sure everything is working correctly. This paper describes how we gather information about the EventIndex components and related subsystems: the Producer-Consumer architecture for data collection, health parameters from the servers that run EventIndex components, EventIndex web interface status, and the Hadoop infrastructure that stores EventIndex data. This information is collected, processed, and then displayed using CERN service monitoring software based on the Kibana analytic and visualization package, provided by CERN IT Department. EventIndex monitoring is used both by the EventIndex team and ATLAS Distributed Computing shifts crew.
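The Producer-Consumer architecture for data collection described above can be sketched generically. The following is an illustrative stand-in with invented names, not EventIndex code: producers poll subsystems and enqueue health records through a thread-safe queue, and a consumer aggregates the latest reading per subsystem for display:

```python
import queue
import threading

# Hypothetical sketch of a Producer-Consumer monitoring pipeline.
records = queue.Queue()
SENTINEL = None  # each producer enqueues this when it is done

def producer(subsystem, samples):
    """Poll one subsystem and enqueue its health readings."""
    for value in samples:
        records.put({"subsystem": subsystem, "value": value})
    records.put(SENTINEL)

def consume(n_producers):
    """Aggregate readings, keeping the newest value per subsystem."""
    latest = {}
    done = 0
    while done < n_producers:
        item = records.get()
        if item is SENTINEL:
            done += 1
        else:
            latest[item["subsystem"]] = item["value"]
    return latest

threads = [
    threading.Thread(target=producer, args=("hadoop", [0.7, 0.8])),
    threading.Thread(target=producer, args=("web", [1.0])),
]
for t in threads:
    t.start()
status = consume(n_producers=2)
for t in threads:
    t.join()
print(status)
```

In the real system, the aggregated records would be indexed and rendered through the Kibana dashboards rather than printed.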
Graphical programming interface: A development environment for MRI methods.
Zwart, Nicholas R; Pipe, James G
2015-11-01
To introduce a multiplatform, Python language-based, development environment called graphical programming interface for prototyping MRI techniques. The interface allows developers to interact with their scientific algorithm prototypes visually in an event-driven environment making tasks such as parameterization, algorithm testing, data manipulation, and visualization an integrated part of the work-flow. Algorithm developers extend the built-in functionality through simple code interfaces designed to facilitate rapid implementation. This article shows several examples of algorithms developed in graphical programming interface including the non-Cartesian MR reconstruction algorithms for PROPELLER and spiral as well as spin simulation and trajectory visualization of a FLORET example. The graphical programming interface framework is shown to be a versatile prototyping environment for developing numeric algorithms used in the latest MR techniques. © 2014 Wiley Periodicals, Inc.
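The event-driven, node-based style of prototyping the abstract describes can be sketched as a toy dataflow graph. The names and API below are invented for illustration and are not GPI's actual interfaces: each node computes from its upstream inputs, and a parameter change would re-run every downstream node:

```python
# Hypothetical sketch of an event-driven node graph: changing a
# parameter on one node re-evaluates every node downstream of it.
class Node:
    def __init__(self, name, compute, inputs=()):
        self.name = name
        self.compute = compute      # function of the input nodes' values
        self.inputs = list(inputs)
        self.value = None

    def run(self):
        self.value = self.compute(*[n.value for n in self.inputs])

def propagate(order):
    """Re-evaluate nodes in topological order (the 'event')."""
    for node in order:
        node.run()

# Toy pipeline: raw data -> scale -> offset, as three connected nodes.
src = Node("source", lambda: [1.0, 2.0, 3.0])
scale = Node("scale", lambda xs: [2.0 * x for x in xs], inputs=[src])
shift = Node("shift", lambda xs: [x + 1.0 for x in xs], inputs=[scale])

propagate([src, scale, shift])
print(shift.value)  # a parameter edit would trigger propagate() again
```

A visual environment adds the graphical wiring, parameter widgets, and data viewers on top of exactly this kind of recompute-on-change core.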
Dividing time: concurrent timing of auditory and visual events by young and elderly adults.
McAuley, J Devin; Miller, Jonathan P; Wang, Mo; Pang, Kevin C H
2010-07-01
This article examines age differences in individuals' ability to produce the durations of learned auditory and visual target events either in isolation (focused attention) or concurrently (divided attention). Young adults produced learned target durations equally well in focused and divided attention conditions. Older adults, in contrast, showed an age-related increase in timing variability in divided attention conditions that tended to be more pronounced for visual targets than for auditory targets. Age-related impairments were associated with a decrease in working memory span; moreover, the relationship between working memory and timing performance was largest for visual targets in divided attention conditions.
ERIC Educational Resources Information Center
Dilek, Gulcin
2010-01-01
This study aims to explore the visual thinking skills of some sixth grade (12-13 year-old) primary pupils who created visual interpretations during history courses. Pupils drew pictures describing historical scenes or events based on visual sources. They constructed these illustrations by using visual and written primary and secondary sources in…
Conferences and convention centres' accessibility to people with disabilities.
Doshi, Jasmine Khandhar; Furlan, Andréa Dompieri; Lopes, Luis Carlos; DeLisa, Joel; Battistella, Linamara Rizzo
2014-07-01
The purposes of this manuscript are to create awareness of problems of accessibility at meetings and conferences for people with disabilities, and to provide a checklist for organizers of conferences to make the event more accessible to people with disabilities. We conducted a search of the grey literature for conference centres and venues that had recommendations for making the event more accessible. The types of disability included in this manuscript are those as a consequence of visual, hearing and mobility impairments. We provide a checklist to make meetings accessible to people with disabilities. The checklist is divided into sections related to event planning, venue accessibility, venue staff, invitations/registrations, greeting people with a disability, actions during the event, and suggestions for effective presenters. The checklist can be used by prospective organizers of conferences to plan an event and to ensure inclusion and participation of people with disabilities.
Moore, Andrew J; Richardson, Jane C; Bernard, Miriam; Sim, Julius
2018-02-26
Medical science and other sources, such as the media, increasingly inform the general public's understanding of disease. There is often discordance between this understanding and the diagnostic interpretations of health care practitioners (HCPs). In this paper - based on a supra-analysis of qualitative interview data from two studies of joint pain, including osteoarthritis - we investigate how people imagine and make sense of the pathophysiology of their illness, and how these understandings may affect self-management behavior. We then explore how HCPs' use of medical images and models can inform patients' understanding. In conceptualizing their illness to make sense of their experience of the disease, individuals often used visualizations of their inner body; these images may arise from their own lay understanding, or may be based on images provided by HCPs. When HCPs used anatomical models or medical images judiciously, patients' orientation to their illness changed. Including patients in a more collaborative diagnostic event that uses medical images and visual models to support explanations about their condition may help them to achieve a more meaningful understanding of their illness and to manage their condition more effectively. Implications for Rehabilitation Chronic musculoskeletal pain is a leading cause of pain and years lived with disability, and despite its being common, patients and healthcare professionals often have a different understanding of the underlying disease. An individual's understanding of his or her pathophysiology plays an important role in making sense of painful joint conditions and in decision-making about self-management and care. Including patients in a more collaborative diagnostic event using medical images and anatomical models to support explanations about their symptoms may help them to better understand their condition and manage it more effectively. 
Using visually informed explanations and anatomical models may also help to reassure patients about the safety and effectiveness of core treatments such as physical exercise and thereby help restore or improve patients' activity levels and return to social participation.
Personal sleep pattern visualization using sequence-based kernel self-organizing map on sound data.
Wu, Hongle; Kato, Takafumi; Yamada, Tomomi; Numao, Masayuki; Fukui, Ken-Ichi
2017-07-01
We propose a method to discover sleep patterns via clustering of sound events recorded during sleep. The proposed method extends the conventional self-organizing map algorithm by kernelization and sequence-based technologies to obtain a fine-grained map that visualizes the distribution and changes of sleep-related events. We introduced features widely applied in sound processing and popular kernel functions to the proposed method to evaluate and compare performance. The proposed method provides a new aspect of sleep monitoring because the results demonstrate that sound events can be directly correlated to an individual's sleep patterns. In addition, by visualizing the transition of cluster dynamics, sleep-related sound events were found to relate to the various stages of sleep. Therefore, these results empirically warrant future study into the assessment of personal sleep quality using sound data. Copyright © 2017 Elsevier B.V. All rights reserved.
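For readers unfamiliar with the baseline being extended, the conventional self-organizing map update can be sketched as follows; the grid size, learning-rate schedule, and toy data are illustrative assumptions, not the paper's configuration (which further adds kernelization and sequence-based similarity for sound events):

```python
import numpy as np

def train_som(data, grid=(6, 6), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal conventional self-organizing map. Each map node holds a
    prototype vector; inputs that map to nearby nodes are similar, which
    is what makes the trained grid usable as a visualization."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)             # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5  # shrinking neighbourhood
        for x in data:
            # best matching unit: node whose prototype is closest to x
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # pull the BMU and its grid neighbours toward x
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma**2))
            weights += lr * g[..., None] * (x - weights)
    return weights

def bmu_of(weights, x):
    """Grid coordinates of the node that best matches input x."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)
```

After training on event features, plotting each event at the grid cell of its best matching unit yields the kind of fine-grained map of event distribution the abstract describes.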
A Framework to Understand Extreme Space Weather Event Probability.
Jonas, Seth; Fronczyk, Kassandra; Pratt, Lucas M
2018-03-12
An extreme space weather event has the potential to disrupt or damage infrastructure systems and technologies that many societies rely on for economic and social well-being. Space weather events occur regularly, but extreme events are less frequent, with a small number of historical examples over the last 160 years. During the past decade, published works have (1) examined the physical characteristics of the extreme historical events and (2) discussed the probability or return rate of select extreme geomagnetic disturbances, including the 1859 Carrington event. Here we present initial findings on a unified framework approach to visualize space weather event probability, using a Bayesian model average, in the context of historical extreme events. We present disturbance storm time (Dst) probability (a proxy for geomagnetic disturbance intensity) across multiple return periods and discuss parameters of interest to policymakers and planners in the context of past extreme space weather events. We discuss the current state of these analyses, their utility to policymakers and planners, the current limitations when compared to other hazards, and several gaps that need to be filled to enhance space weather risk assessments. © 2018 Society for Risk Analysis.
On the use of orientation filters for 3D reconstruction in event-driven stereo vision
Camuñas-Mesa, Luis A.; Serrano-Gotarredona, Teresa; Ieng, Sio H.; Benosman, Ryad B.; Linares-Barranco, Bernabe
2014-01-01
The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, therefore increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction. PMID:24744694
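The orientation-labelling idea can be illustrated with a small Gabor filter bank; the kernel size and parameters below are illustrative assumptions, not the paper's event-driven convolution implementation:

```python
import numpy as np

def gabor_kernel(theta, size=9, sigma=2.0, wavelength=4.0):
    """Real part of a Gabor kernel tuned to edge orientation theta (radians).
    Parameter values here are illustrative, not those used in the paper."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the carrier
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def dominant_orientation(patch, n_orientations=4):
    """Label the local edge orientation around an event by the index of the
    Gabor filter with the strongest absolute response; matching two events
    then requires their orientation labels to agree."""
    thetas = [k * np.pi / n_orientations for k in range(n_orientations)]
    responses = [abs(np.sum(patch * gabor_kernel(t, size=patch.shape[0])))
                 for t in thetas]
    return int(np.argmax(responses))

# toy patch with a vertical edge: bright left half, dark right half
patch = np.zeros((9, 9))
patch[:, :4] = 1.0
```

Requiring matched event pairs to carry the same orientation label is one way to realize the extra matching constraint the abstract refers to.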
Development of a 3-D Nuclear Event Visualization Program Using Unity
NASA Astrophysics Data System (ADS)
Kuhn, Victoria
2017-09-01
Simulations have become increasingly important for science and there is an increasing emphasis on the visualization of simulations within a Virtual Reality (VR) environment. Our group is exploring this capability as a visualization tool not just for those curious about science, but also for educational purposes for K-12 students. Using data collected in 3-D by a Time Projection Chamber (TPC), we are able to visualize nuclear and cosmic events. The Unity game engine was used to recreate the TPC to visualize these events and construct a VR application. The methods used to create these simulations will be presented along with an example of a simulation. I will also present on the development and testing of this program, which I carried out this past summer at MSU as part of an REU program. We used data from the SπRIT TPC, but the software can be applied to other 3-D detectors. This work is supported by the U.S. Department of Energy under Grant Nos. DE-SC0014530, DE-NA0002923 and US NSF under Grant No. PHY-1565546.
Visualization techniques for computer network defense
NASA Astrophysics Data System (ADS)
Beaver, Justin M.; Steed, Chad A.; Patton, Robert M.; Cui, Xiaohui; Schultz, Matthew
2011-06-01
Effective visual analysis of computer network defense (CND) information is challenging due to the volume and complexity of both the raw and analyzed network data. A typical CND is comprised of multiple niche intrusion detection tools, each of which performs network data analysis and produces a unique alerting output. The state-of-the-practice in the situational awareness of CND data is the prevalent use of custom-developed scripts by Information Technology (IT) professionals to retrieve, organize, and understand potential threat events. We propose a new visual analytics framework, called the Oak Ridge Cyber Analytics (ORCA) system, for CND data that allows an operator to interact with all detection tool outputs simultaneously. Aggregated alert events are presented in multiple coordinated views with timeline, cluster, and swarm model analysis displays. These displays are complemented with both supervised and semi-supervised machine learning classifiers. The intent of the visual analytics framework is to improve CND situational awareness, to enable an analyst to quickly navigate and analyze thousands of detected events, and to combine sophisticated data analysis techniques with interactive visualization such that patterns of anomalous activities may be more easily identified and investigated.
Surprise-Induced Blindness: A Stimulus-Driven Attentional Limit to Conscious Perception
ERIC Educational Resources Information Center
Asplund, Christopher L.; Todd, J. Jay; Snyder, A. P.; Gilbert, Christopher M.; Marois, Rene
2010-01-01
The cost of attending to a visual event can be the failure to consciously detect other events. This processing limitation is well illustrated by the attentional blink paradigm, in which searching for and attending to a target presented in a rapid serial visual presentation stream of distractors can impair one's ability to detect a second target…
How Object-Specific Are Object Files? Evidence for Integration by Location
ERIC Educational Resources Information Center
van Dam, Wessel O.; Hommel, Bernhard
2010-01-01
Given the distributed representation of visual features in the human brain, binding mechanisms are necessary to integrate visual information about the same perceptual event. It has been assumed that feature codes are bound into object files--pointers to the neural codes of the features of a given event. The present study investigated the…
ERIC Educational Resources Information Center
Shibley, Ralph, Jr.; And Others
Event-related Potentials (ERPs) were recorded to both auditory and visual stimuli from the scalps of nine autistic males and nine normal controls (all Ss between 12 and 22 years of age) to examine the differences in information processing strategies. Ss were tested on three different tasks: an auditory missing stimulus paradigm, a visual color…
Effects of Audio-Visual Integration on the Detection of Masked Speech and Non-Speech Sounds
ERIC Educational Resources Information Center
Eramudugolla, Ranmalee; Henderson, Rachel; Mattingley, Jason B.
2011-01-01
Integration of simultaneous auditory and visual information about an event can enhance our ability to detect that event. This is particularly evident in the perception of speech, where the articulatory gestures of the speaker's lips and face can significantly improve the listener's detection and identification of the message, especially when that…
How (and why) the visual control of action differs from visual perception
Goodale, Melvyn A.
2014-01-01
Vision not only provides us with detailed knowledge of the world beyond our bodies, but it also guides our actions with respect to objects and events in that world. The computations required for vision-for-perception are quite different from those required for vision-for-action. The former uses relational metrics and scene-based frames of reference while the latter uses absolute metrics and effector-based frames of reference. These competing demands on vision have shaped the organization of the visual pathways in the primate brain, particularly within the visual areas of the cerebral cortex. The ventral ‘perceptual’ stream, projecting from early visual areas to inferior temporal cortex, helps to construct the rich and detailed visual representations of the world that allow us to identify objects and events, attach meaning and significance to them and establish their causal relations. By contrast, the dorsal ‘action’ stream, projecting from early visual areas to the posterior parietal cortex, plays a critical role in the real-time control of action, transforming information about the location and disposition of goal objects into the coordinate frames of the effectors being used to perform the action. The idea of two visual systems in a single brain might seem initially counterintuitive. Our visual experience of the world is so compelling that it is hard to believe that some other quite independent visual signal—one that we are unaware of—is guiding our movements. But evidence from a broad range of studies from neuropsychology to neuroimaging has shown that the visual signals that give us our experience of objects and events in the world are not the same ones that control our actions. PMID:24789899
Suyama, Natsuka; Hoshiyama, Minoru; Shimizu, Hideki; Saito, Hirofumi
2008-09-01
The event-related potentials (ERP) following presentation of male and female faces were investigated to study differences in the gender discrimination process. Visual stimuli from four categories including male and female faces were presented. For the male subjects, the P220 amplitude of the T5 area following viewing of a female face was significantly larger than that following viewing of a male face. On the other hand, for female subjects, the P170 amplitude of the Cz area following observation of a male face was larger than that for a female face. The results indicate that the neural processes, including responsive brain areas used for gender discrimination by observing faces, are different between males and females.
Prospective randomized trial: outcomes of SF₆ versus C₃F₈ in macular hole surgery.
Briand, Sophie; Chalifoux, Emmanuelle; Tourville, Eric; Bourgault, Serge; Caissie, Mathieu; Tardif, Yvon; Giasson, Marcelle; Boivin, Jocelyne; Blanchette, Caty; Cinq-Mars, Benoit
2015-04-01
To compare macular hole (MH) closure and visual acuity improvement after vitrectomy using SF6 versus C3F8 gas tamponade. The secondary purposes were to report the cumulative incidence of cataract development at 1 year after MH surgery and the proportion of complications. Prospective, randomized study. Thirty-one patients were prospectively randomized to the SF6 group and 28 patients to the C3F8 group. Preoperative data included MH minimum diameter, Early Treatment Diabetic Retinopathy Study (ETDRS) best corrected visual acuity (BCVA), cataract staging, and intraocular pressure (IOP) measurement. Postoperative data included optical coherence tomography confirmation of the closure at 6 weeks and 1 year, and ETDRS BCVA and cataract development/extraction, both 1 year after the MH surgery. Primary MH closure was achieved in 93.3% in the SF6 group and 92.9% in the C3F8 group. Mean ETDRS BCVA improved by 17.7 letters in the SF6 and 16.9 letters in the C3F8 group. The difference in cumulative incidence of cataract development and extraction between both groups was not statistically significant. Regardless of the gas used, similar results were achieved. Finally, the proportion of adverse events was similar in both groups. MH surgery with SF6 gas achieves results similar to C3F8 in terms of visual acuity improvement, MH closure, cataract development/extraction, and adverse events. Copyright © 2015 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.
Neuroprotective Strategies for the Treatment of Blast-Induced Optic Neuropathy
2016-09-01
will examine alterations in the amacrine cells and ganglion cells as well as therapeutic outcome measures including electroretinogram, visual evoked ... nerve degeneration [1-3]. This suggests that degeneration of the retinal ganglion cell (RGC) axons in the optic nerve is a secondary event. Secondary ... for neurodegenerations from trauma extending beyond optic neuropathy.
Keywords: retinal ganglion cell (RGC), traumatic optic neuropathy
Wilkinson, Krista M; Light, Janice
2014-06-01
Visual scene displays (VSDs) are a form of augmentative and alternative communication display in which language concepts are embedded into an image of a naturalistic event. VSDs are based on the theory that language learning occurs through interactions with other people, and recommendations for VSD design have emphasized using images of these events that include humans. However, many VSDs also include other items that could potentially be distracting. We examined gaze fixation in 18 school-aged participants with and without severe intellectual/developmental disabilities (i.e., individuals with typical development, autism, Down syndrome and other intellectual disabilities) while they viewed photographs with human figures of various sizes and locations in the image, appearing alongside other interesting, and potentially distracting items. In all groups, the human figures attracted attention rapidly (within 1.5 seconds). The proportions of each participant's own fixation time spent on the human figures were similar across all groups, as were the proportions of total fixations made to the human figures. Although the findings are preliminary, this initial evidence supports the inclusion of humans in VSD images.
Memory as Perception of the Past: Compressed Time in Mind and Brain.
Howard, Marc W
2018-02-01
In the visual system retinal space is compressed such that acuity decreases further from the fovea. Different forms of memory may rely on a compressed representation of time, manifested as decreased accuracy for events that happened further in the past. Neurophysiologically, "time cells" show receptive fields in time. Analogous to the compression of visual space, time cells show less acuity for events further in the past. Behavioral evidence suggests memory can be accessed by scanning a compressed temporal representation, analogous to visual search. This suggests a common computational language for visual attention and memory retrieval. In this view, time functions like a scaffolding that organizes memories in much the same way that retinal space functions like a scaffolding for visual perception. Copyright © 2017 Elsevier Ltd. All rights reserved.
An Offline-Online Android Application for Hazard Event Mapping Using WebGIS Open Source Technologies
NASA Astrophysics Data System (ADS)
Olyazadeh, Roya; Jaboyedoff, Michel; Sudmeier-Rieux, Karen; Derron, Marc-Henri; Devkota, Sanjaya
2016-04-01
Nowadays, Free and Open Source Software (FOSS) plays an important role in better understanding and managing disaster risk reduction around the world. National and local governments, NGOs and other stakeholders are increasingly seeking and producing data on hazards. Most hazard event inventories and land use mapping are based on remote sensing data, with little ground truthing, creating difficulties depending on the terrain and accessibility. Open Source WebGIS tools offer an opportunity for quicker and easier ground truthing of critical areas in order to analyse hazard patterns and triggering factors. This study presents a secure mobile-map application for hazard event mapping using Open Source WebGIS technologies such as the Postgres database, PostGIS, Leaflet, Cordova and PhoneGap. The objectives of this prototype are: (1) an offline-online Android mobile application with advanced geospatial visualisation; (2) easy collection and storage of hazard event information; (3) centralized data storage accessible from any client (smartphone or standard web browser); and (4) improved data management through active participation in hazard event mapping and storage. This application has been implemented as a low-cost, rapid and participatory method for recording impacts from hazard events and includes geolocation (GPS data and Internet), visualizing maps with overlays of satellite images, viewing uploaded images and events as cluster points, and drawing and adding event information. The data can be recorded offline (Android device) or online (all browsers) and uploaded to the server whenever an internet connection is available. All events and records can be visualized by an administrator and made public after approval. Different user levels can be defined to control access to the data for communicating the information. This application was tested for landslides in post-earthquake Nepal but can be used for any other type of hazard, such as floods or avalanches.
Keywords: Offline, Online, WebGIS Open Source, Android, Hazard Event Mapping
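The offline-online recording pattern described above can be sketched in a few lines; the actual application uses Postgres/PostGIS, Leaflet and Cordova, so the class, schema, and field names below are hypothetical stand-ins illustrating the store-locally-then-sync idea:

```python
import json
import sqlite3

class OfflineEventStore:
    """Sketch of an offline-first hazard-event store: events are saved
    locally and pushed to the central server whenever connectivity
    returns. Names and schema are illustrative, not the app's."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS events ("
            "id INTEGER PRIMARY KEY, lat REAL, lon REAL, "
            "info TEXT, synced INTEGER DEFAULT 0)")

    def record(self, lat, lon, info):
        """Store an event locally (works with no connectivity)."""
        self.db.execute(
            "INSERT INTO events (lat, lon, info) VALUES (?, ?, ?)",
            (lat, lon, json.dumps(info)))
        self.db.commit()

    def sync(self, upload):
        """Push unsynced events through the given upload callable,
        mark them synced, and return how many were sent."""
        rows = self.db.execute(
            "SELECT id, lat, lon, info FROM events WHERE synced = 0").fetchall()
        for rid, lat, lon, info in rows:
            upload({"lat": lat, "lon": lon, **json.loads(info)})
            self.db.execute("UPDATE events SET synced = 1 WHERE id = ?", (rid,))
        self.db.commit()
        return len(rows)
```

A field worker would call `record()` while offline; `sync()` runs whenever the device detects a connection, matching the "recorded offline, uploaded whenever internet is available" workflow in the abstract.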
Self-imagining enhances recognition memory in memory-impaired individuals with neurological damage.
Grilli, Matthew D; Glisky, Elizabeth L
2010-11-01
The ability to imagine an elaborative event from a personal perspective relies on several cognitive processes that may potentially enhance subsequent memory for the event, including visual imagery, semantic elaboration, emotional processing, and self-referential processing. In an effort to find a novel strategy for enhancing memory in memory-impaired individuals with neurological damage, we investigated the mnemonic benefit of a method we refer to as self-imagining-the imagining of an event from a realistic, personal perspective. Fourteen individuals with neurologically based memory deficits and 14 healthy control participants intentionally encoded neutral and emotional sentences under three instructions: structural-baseline processing, semantic processing, and self-imagining. Findings revealed a robust "self-imagination effect (SIE)," as self-imagination enhanced recognition memory relative to deep semantic elaboration in both memory-impaired individuals, F(1, 13) = 32.11, p < .001, η2 = .71; and healthy controls, F(1, 13) = 5.57, p < .05, η2 = .30. In addition, results indicated that mnemonic benefits of self-imagination were not limited by severity of the memory disorder nor were they related to self-reported vividness of visual imagery, semantic processing, or emotional content of the materials. The findings suggest that the SIE may depend on unique mnemonic mechanisms possibly related to self-referential processing and that imagining an event from a personal perspective makes that event particularly memorable even for those individuals with severe memory deficits. Self-imagining may thus provide an effective rehabilitation strategy for individuals with memory impairment.
The 2017 Total Solar Eclipse: Through the Eyes of NASA
NASA Astrophysics Data System (ADS)
Mayo, Louis; NASA Goddard Heliophysics Education Consortium
2017-10-01
The August 21st, 2017 Total Solar Eclipse Across America provided a unique opportunity to teach event-based science to nationwide audiences. NASA spent over three years planning space and Earth science education programs for informal audiences, undergraduate institutions, and life long learners to bring this celestial event to the public through the eyes of NASA. This talk outlines how NASA used its unique assets including mission scientists and engineers, space based assets, citizen science, educational technology, science visualization, and its wealth of science and technology partners to bring the eclipse to the country through multimedia, cross-discipline science activities, curricula, and media programing. Audience reach, impact, and lessons learned are detailed. Plans for similar events in 2018 and beyond are outlined.
Conscious experience and episodic memory: hippocampus at the crossroads.
Behrendt, Ralf-Peter
2013-01-01
If an instance of conscious experience of the seemingly objective world around us could be regarded as a newly formed event memory, much as an instance of mental imagery has the content of a retrieved event memory, and if, therefore, the stream of conscious experience could be seen as evidence for ongoing formation of event memories that are linked into episodic memory sequences, then unitary conscious experience could be defined as a symbolic representation of the pattern of hippocampal neuronal firing that encodes an event memory - a theoretical stance that may shed light into the mind-body and binding problems in consciousness research. Exceedingly detailed symbols that describe patterns of activity rapidly self-organizing, at each cycle of the θ rhythm, in the hippocampus are instances of unitary conscious experience that jointly constitute the stream of consciousness. Integrating object information (derived from the ventral visual stream and orbitofrontal cortex) with contextual emotional information (from the anterior insula) and spatial environmental information (from the dorsal visual stream), the hippocampus rapidly forms event codes that have the informational content of objects embedded in an emotional and spatiotemporally extending context. Event codes, formed in the CA3-dentate network for the purpose of their memorization, are not only contextualized but also allocentric representations, similarly to conscious experiences of events and objects situated in a seemingly objective and observer-independent framework of phenomenal space and time. 
Conscious perception, creating the spatially and temporally extending world that we perceive around us, is likely to be evolutionarily related to more fleeting and seemingly internal forms of conscious experience, such as autobiographical memory recall, mental imagery, including goal anticipation, and to other forms of externalized conscious experience, namely dreaming and hallucinations; and evidence pointing to an important contribution of the hippocampus to these conscious phenomena will be reviewed.
Health impact assessment of industrial development projects: a spatio-temporal visualization.
Winkler, Mirko S; Krieger, Gary R; Divall, Mark J; Singer, Burton H; Utzinger, Jürg
2012-05-01
Development and implementation of large-scale industrial projects in complex eco-epidemiological settings typically require combined environmental, social and health impact assessments. We present a generic, spatio-temporal health impact assessment (HIA) visualization, which can be readily adapted to specific projects and key stakeholders, including poorly literate communities that might be affected by consequences of a project. We illustrate how the occurrence of a variety of complex events can be utilized for stakeholder communication, awareness creation, interactive learning as well as formulating HIA research and implementation questions. Methodological features are highlighted in the context of an iron ore development in a rural part of Africa.
Audio-video decision support for patients: the documentary genre as a basis for decision aids.
Volandes, Angelo E; Barry, Michael J; Wood, Fiona; Elwyn, Glyn
2013-09-01
Decision support tools are increasingly using audio-visual materials. However, disagreement exists about the use of audio-visual materials as they may be subjective and biased. This is a literature review of the major texts for documentary film studies to extrapolate issues of objectivity and bias from film to decision support tools. The key features of documentary films are that they attempt to portray real events and that the attempted reality is always filtered through the lens of the filmmaker. The same key features can be said of decision support tools that use audio-visual materials. Three concerns arising from documentary film studies as they apply to the use of audio-visual materials in decision support tools include whose perspective matters (stakeholder bias), how to choose among audio-visual materials (selection bias) and how to ensure objectivity (editorial bias). Decision science needs to start a debate about how audio-visual materials are to be used in decision support tools. Simply because audio-visual materials may be subjective and open to bias does not mean that we should not use them. Methods need to be found to ensure consensus around balance and editorial control, such that audio-visual materials can be used. © 2011 John Wiley & Sons Ltd.
Representing time in language and memory: the role of similarity structure.
Faber, Myrthe; Gennari, Silvia P
2015-03-01
Every day we read about or watch events in the world and can easily understand or remember how long they last. What aspects of an event are retained in memory? And how do we extract temporal information from our memory representations? These issues are central to human cognition, as they underlie a fundamental aspect of our mental life, namely our representation of time. This paper reviews previous language studies and reports a visual learning study indicating that properties of the events encoded in memory shape the representation of their duration. The evidence indicates that for a given event, the extent to which its associated properties or sub-components differ from one another modulates our representation of its duration. These properties include the similarity between sub-events and the similarity between the situational contexts in which an event occurs. We suggest that the diversity of representations that we associate with events in memory plays an important role in remembering and estimating the duration of experienced or described events. Copyright © 2014 Elsevier B.V. All rights reserved.
Orita, Sumihisa; Yamauchi, Kazuyo; Suzuki, Takane; Suzuki, Miyako; Sakuma, Yoshihiro; Kubota, Go; Oikawa, Yasuhiro; Sainoh, Takeshi; Sato, Jun; Fujimoto, Kazuki; Shiga, Yasuhiro; Abe, Koki; Kanamoto, Hirohito; Inoue, Masahiro; Kinoshita, Hideyuki; Takahashi, Kazuhisa; Ohtori, Seiji
2016-01-01
Study Design: Retrospective study. Purpose: To determine whether low-dose tramadol plus non-steroidal anti-inflammatory drug combination therapy could prevent the transition of acute low back pain to chronic low back pain. Overview of Literature: Inadequately treated early low back pain transitions to chronic low back pain in approximately 30% of affected individuals. The administration of non-steroidal anti-inflammatory drugs is effective for treatment of low back pain in the early stages. However, the treatment of low back pain that is resistant to non-steroidal anti-inflammatory drugs is challenging. Methods: Patients who presented with acute low back pain at our hospital were considered for inclusion in this study. After the diagnosis of acute low back pain, non-steroidal anti-inflammatory drug administration was started. Forty patients with a visual analog scale score of >5 for low back pain 1 month after treatment were finally enrolled. The first 20 patients were included in a non-steroidal anti-inflammatory drug group, and they continued non-steroidal anti-inflammatory drug therapy for 1 month. The next 20 patients were included in a combination group, and they received low-dose tramadol plus non-steroidal anti-inflammatory drug combination therapy for 1 month. The incidence of adverse events and the improvement in the visual analog scale score at 2 months after the start of treatment were analyzed. Results: No adverse events were observed in the non-steroidal anti-inflammatory drug group. In the combination group, administration was discontinued in 2 patients (10%) due to adverse events immediately following the start of tramadol administration. At 2 months, the improvement in the visual analog scale score was greater in the combination group than in the non-steroidal anti-inflammatory drug group (p<0.001). Conclusions: Low-dose tramadol plus non-steroidal anti-inflammatory drug combination therapy might decrease the incidence of adverse events and prevent the transition of acute low back pain to chronic low back pain. PMID:27559448
The rainfall plot: its motivation, characteristics and pitfalls.
Domanska, Diana; Vodák, Daniel; Lund-Andersen, Christin; Salvatore, Stefania; Hovig, Eivind; Sandve, Geir Kjetil
2017-05-18
A visualization referred to as rainfall plot has recently gained popularity in genome data analysis. The plot is mostly used for illustrating the distribution of somatic cancer mutations along a reference genome, typically aiming to identify mutation hotspots. In general terms, the rainfall plot can be seen as a scatter plot showing the location of events on the x-axis versus the distance between consecutive events on the y-axis. Despite its frequent use, the motivation for applying this particular visualization and the appropriateness of its usage have never been critically addressed in detail. We show that the rainfall plot allows visual detection even for events occurring at high frequency over very short distances. In addition, event clustering at multiple scales may be detected as distinct horizontal bands in rainfall plots. At the same time, due to the limited size of standard figures, rainfall plots might suffer from inability to distinguish overlapping events, especially when multiple datasets are plotted in the same figure. We demonstrate the consequences of plot congestion, which results in obscured visual data interpretations. This work provides the first comprehensive survey of the characteristics and proper usage of rainfall plots. We find that the rainfall plot is able to convey a large amount of information without any need for parameterization or tuning. However, we also demonstrate how plot congestion and the use of a logarithmic y-axis may result in obscured visual data interpretations. To aid the productive utilization of rainfall plots, we demonstrate their characteristics and potential pitfalls using both simulated and real data, and provide a set of practical guidelines for their proper interpretation and usage.
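In code, the rainfall plot's coordinates reduce to consecutive differences between sorted event positions; this toy sketch (positions are illustrative) shows how a hotspot of nearby events forms the low horizontal band described above:

```python
def rainfall_coordinates(positions):
    """Given event positions along a reference (e.g. somatic mutations on a
    genome), return (x, y) points for a rainfall plot: x is the event
    position, y is the distance to the preceding event. Plotting y on a
    log scale reveals hotspots as low bands of points."""
    pos = sorted(positions)
    # the first event has no predecessor, so it contributes no point
    return [(p, p - q) for q, p in zip(pos, pos[1:])]

# toy example: the cluster around position 5000 yields small y values,
# i.e. a low band in the plot, while isolated events sit high up
points = rainfall_coordinates([100, 5000, 5010, 5025, 5030, 90000])
```

Feeding these pairs to any scatter-plot routine with a logarithmic y-axis reproduces the visualization; note how the isolated events at 100 and 90000 contribute large distances, which is exactly where the congestion and log-scale pitfalls discussed above arise.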
Friederichs, Edgar; Wahl, Siegfried
2017-08-01
The present investigation examined whether changes in late event-related potential patterns could be used to reflect clinical changes resulting from therapeutic intervention with coloured glasses in a group of patients with symptoms of central visual processing disorder. Subjects were 13 patients with an average age of 16 years (range 6-51 years) with attention problems and learning disability, respectively. These patients were provided with specified coloured glasses, which they were required to wear during the daytime. Results indicated that the specified coloured glasses significantly improved attention performance. Furthermore, electrophysiological parameters revealed a significant change in the late event-related potential distribution pattern (latency, amplitudes). This reflects a synchronization of neural assemblies that fire and wire together during visual processing, suggesting an accelerated neuromaturation process when the coloured glasses are used. Our results suggest that visual event-related potential measures are sensitive to changes in the clinical development of patients with visual processing deficits who wear appropriate coloured glasses. We discuss whether such a device might be useful for the clinical improvement of distraction symptoms caused by visual processing deficits. A model is presented that explains these effects through induction of the mitochondrial respiratory chain, thus increasing the low ATP energy levels of our patients. Copyright © 2017 Elsevier Ltd. All rights reserved.
Cross-modal orienting of visual attention.
Hillyard, Steven A; Störmer, Viola S; Feng, Wenfeng; Martinez, Antigona; McDonald, John J
2016-03-01
This article reviews a series of experiments that combined behavioral and electrophysiological recording techniques to explore the hypothesis that salient sounds attract attention automatically and facilitate the processing of visual stimuli at the sound's location. This cross-modal capture of visual attention was found to occur even when the attracting sound was irrelevant to the ongoing task and was non-predictive of subsequent events. A slow positive component in the event-related potential (ERP) that was localized to the visual cortex was found to be closely coupled with the orienting of visual attention to a sound's location. This neural sign of visual cortex activation was predictive of enhanced perceptual processing and was paralleled by a desynchronization (blocking) of the ongoing occipital alpha rhythm. Further research is needed to determine the nature of the relationship between the slow positive ERP evoked by the sound and the alpha desynchronization and to understand how these electrophysiological processes contribute to improved visual-perceptual processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
Infants' visual and auditory communication when a partner is or is not visually attending.
Liszkowski, Ulf; Albrecht, Konstanze; Carpenter, Malinda; Tomasello, Michael
2008-04-01
In the current study we investigated infants' communication in the visual and auditory modalities as a function of the recipient's visual attention. We elicited pointing at interesting events from thirty-two 12-month olds and thirty-two 18-month olds in two conditions: when the recipient either was or was not visually attending to them before and during the point. The main result was that infants initiated more pointing when the recipient's visual attention was on them than when it was not. In addition, when the recipient did not respond by sharing interest in the designated event, infants initiated more repairs (repeated pointing) than when she did, again, especially when the recipient was visually attending to them. Interestingly, accompanying vocalizations were used intentionally and increased in both experimental conditions when the recipient did not share attention and interest. However, there was little evidence that infants used their vocalizations to direct attention to their gestures when the recipient was not attending to them.
Automatic Detection and Classification of Audio Events for Road Surveillance Applications.
Almaadeed, Noor; Asim, Muhammad; Al-Maadeed, Somaya; Bouridane, Ahmed; Beghdadi, Azeddine
2018-06-06
This work investigates the problem of detecting hazardous events on roads by designing an audio surveillance system that automatically detects perilous situations such as car crashes and tire skidding. In recent years, several visual surveillance systems have been proposed for road monitoring to detect accidents, with the aim of improving safety procedures in emergency cases. However, visual information alone cannot detect certain events such as car crashes and tire skidding, especially under adverse and visually cluttered weather conditions such as snowfall, rain, and fog. Consequently, the incorporation of microphones and audio event detectors based on audio processing can significantly enhance the detection accuracy of such surveillance systems. This paper proposes to combine time-domain, frequency-domain, and joint time-frequency features extracted from a class of quadratic time-frequency distributions (QTFDs) to detect events on roads through audio analysis and processing. Experiments were carried out using a publicly available dataset. The experimental results confirm the effectiveness of the proposed approach for detecting hazardous events on roads, as demonstrated by a 7% improvement in accuracy rate when compared against methods that use individual temporal and spectral features.
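As a rough illustration of what "individual temporal and spectral features" look like in practice, the Python sketch below computes short-time energy, zero-crossing rate, and spectral centroid for one audio frame. This is generic audio-feature code, not the paper's implementation, and it omits the QTFD-based joint time-frequency features entirely.

```python
import math

def frame_features(frame, sr):
    """Toy time-domain and frequency-domain features for one audio frame.
    (The paper additionally uses quadratic time-frequency distributions,
    which are omitted from this sketch.)"""
    n = len(frame)
    energy = sum(s * s for s in frame) / n                       # short-time energy
    zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (n - 1)
    # magnitude spectrum via a naive DFT (adequate for a short frame)
    mags = []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(frame))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(frame))
        mags.append(math.hypot(re, im))
    total = sum(mags) or 1.0
    centroid = sum(k * (sr / n) * m for k, m in enumerate(mags)) / total  # in Hz
    return energy, zcr, centroid

# A 50 Hz tone sampled at 400 Hz: the spectral centroid lands near 50 Hz.
tone = [math.sin(2 * math.pi * 50 * i / 400 + 0.1) for i in range(400)]
energy, zcr, centroid = frame_features(tone, sr=400)
```

A crash or skid would show up as a jump in energy and a shift in centroid relative to ordinary traffic noise; a real detector would compute these per frame and feed them to a classifier.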
Grouping and Segregation of Sensory Events by Actions in Temporal Audio-Visual Recalibration.
Ikumi, Nara; Soto-Faraco, Salvador
2016-01-01
Perception in multi-sensory environments involves both grouping and segregation of events across sensory modalities. Temporal coincidence between events is considered a strong cue to resolve multisensory perception. However, differences in physical transmission and neural processing times amongst modalities complicate this picture. This is illustrated by cross-modal recalibration, whereby adaptation to audio-visual asynchrony produces shifts in perceived simultaneity. Here, we examined whether voluntary actions might serve as a temporal anchor to cross-modal recalibration in time. Participants were tested on an audio-visual simultaneity judgment task after an adaptation phase where they had to synchronize voluntary actions with audio-visual pairs presented at a fixed asynchrony (vision leading or vision lagging). Our analysis focused on the magnitude of cross-modal recalibration to the adapted audio-visual asynchrony as a function of the nature of the actions during adaptation, putatively fostering cross-modal grouping or segregation. We found larger temporal adjustments when actions promoted grouping than segregation of sensory events. However, a control experiment suggested that additional factors, such as attention to planning/execution of actions, could have an impact on recalibration effects. Contrary to the view that cross-modal temporal organization is mainly driven by external factors related to the stimulus or environment, our findings add supporting evidence for the idea that perceptual adjustments strongly depend on the observer's inner states induced by motor and cognitive demands.
Grouping and Segregation of Sensory Events by Actions in Temporal Audio-Visual Recalibration
Ikumi, Nara; Soto-Faraco, Salvador
2017-01-01
Perception in multi-sensory environments involves both grouping and segregation of events across sensory modalities. Temporal coincidence between events is considered a strong cue to resolve multisensory perception. However, differences in physical transmission and neural processing times amongst modalities complicate this picture. This is illustrated by cross-modal recalibration, whereby adaptation to audio-visual asynchrony produces shifts in perceived simultaneity. Here, we examined whether voluntary actions might serve as a temporal anchor to cross-modal recalibration in time. Participants were tested on an audio-visual simultaneity judgment task after an adaptation phase where they had to synchronize voluntary actions with audio-visual pairs presented at a fixed asynchrony (vision leading or vision lagging). Our analysis focused on the magnitude of cross-modal recalibration to the adapted audio-visual asynchrony as a function of the nature of the actions during adaptation, putatively fostering cross-modal grouping or segregation. We found larger temporal adjustments when actions promoted grouping than segregation of sensory events. However, a control experiment suggested that additional factors, such as attention to planning/execution of actions, could have an impact on recalibration effects. Contrary to the view that cross-modal temporal organization is mainly driven by external factors related to the stimulus or environment, our findings add supporting evidence for the idea that perceptual adjustments strongly depend on the observer's inner states induced by motor and cognitive demands. PMID:28154529
Dave, Hreem; Phoenix, Vidya; Becker, Edmund R.; Lambert, Scott R.
2015-01-01
OBJECTIVES To compare the incidence of adverse events, visual outcomes and economic costs of sequential versus simultaneous bilateral cataract surgery for infants with congenital cataracts. METHODS We retrospectively reviewed the incidence of adverse events, visual outcomes and medical payments associated with simultaneous versus sequential bilateral cataract surgery for infants with congenital cataracts who underwent cataract surgery when 6 months of age or younger at our institution. RESULTS Records were available for 10 children who underwent sequential surgery at a mean age of 49 days for the first eye and 17 children who underwent simultaneous surgery at a mean age of 68 days (p=.25). We found a similar incidence of adverse events between the two treatment groups. Intraoperative or postoperative complications occurred in 14 eyes. The most common postoperative complication was glaucoma. No eyes developed endophthalmitis. The mean absolute interocular difference in logMAR visual acuities between the two treatment groups was 0.47±0.76 for the sequential group and 0.44±0.40 for the simultaneous group (p=.92). Hospital, drugs, supplies and professional payments were on average 21.9% lower per patient in the simultaneous group. CONCLUSIONS Simultaneous bilateral cataract surgery for infants with congenital cataracts was associated with a 21.9% reduction in medical payments and no discernible difference in the incidence of adverse events or visual outcome. PMID:20697007
[Are Visual Field Defects Reversible? - Visual Rehabilitation with Brains].
Sabel, B A
2017-02-01
Visual field defects are considered irreversible because the retina and optic nerve do not regenerate. Nevertheless, there is some potential for recovery of the visual fields. This can be accomplished by the brain, which analyses and interprets visual information and is able to amplify residual signals through neuroplasticity. Neuroplasticity refers to the ability of the brain to change its own functional architecture by modulating synaptic efficacy. This is actually the neurobiological basis of normal learning. Plasticity is maintained throughout life and can be induced by repetitively stimulating (training) brain circuits. The question now arises as to how plasticity can be utilised to activate residual vision for the treatment of visual field loss. Just as in neurorehabilitation, visual field defects can be modulated by post-lesion plasticity to improve vision in glaucoma, diabetic retinopathy or optic neuropathy. Because almost all patients have some residual vision, the goal is to strengthen residual capacities by enhancing synaptic efficacy. New treatment paradigms have been tested in clinical studies, including vision restoration training and non-invasive alternating current stimulation. While vision training is a behavioural task to selectively stimulate "relative defects" with daily vision exercises for the duration of 6 months, treatment with alternating current stimulation (30 min. daily for 10 days) activates and synchronises the entire retina and brain. Though full restoration of vision is not possible, such treatments improve vision, both subjectively and objectively. This includes visual field enlargements, improved acuity and reaction time, improved orientation and vision related quality of life. About 70 % of the patients respond to the therapies and there are no serious adverse events. Physiological studies of the effect of alternating current stimulation using EEG and fMRI reveal massive local and global changes in the brain. 
These include local activation of the visual cortex and global reorganisation of neuronal brain networks. Because modulation of neuroplasticity can strengthen residual vision, the brain deserves a better reputation in ophthalmology for its role in visual rehabilitation. For patients, there is now more light at the end of the tunnel, because vision loss in some areas of the visual field defect is indeed reversible. Georg Thieme Verlag KG Stuttgart · New York.
Degrees of Consciousness in the Communication of Actions and Events on the Visual Cliff. No. 58.
ERIC Educational Resources Information Center
Bierschenk, Bernhard
The consciousness of dizygotic twins in their communication of actions and events as seen in the visual cliff pictures published by E. J. Gibson and R. D. Walk (1960) was studied in Sweden. In the process of communication, many different state spaces are generated. The methodology demonstrates that ecological and biophysical properties of language…
ERIC Educational Resources Information Center
Koychev, Ivan; El-Deredy, Wael; Haenschel, Corinna; Deakin, John Francis William
2010-01-01
We aimed to clarify the importance of early visual processing deficits for the formation of cognitive deficits in the schizophrenia spectrum. We carried out an event-related potential (ERP) study using a computerised delayed matching to sample working memory (WM) task on a sample of volunteers with high and low scores on the Schizotypal…
ERIC Educational Resources Information Center
Reynolds, Greg D.; Courage, Mary L.; Richards, John E.
2010-01-01
In this study, we had 3 major goals. The 1st goal was to establish a link between behavioral and event-related potential (ERP) measures of infant attention and recognition memory. To assess the distribution of infant visual preferences throughout ERP testing, we designed a new experimental procedure that embeds a behavioral measure (paired…
Amiodarone-Associated Optic Neuropathy: A Critical Review
Passman, Rod S.; Bennett, Charles L.; Purpura, Joseph M.; Kapur, Rashmi; Johnson, Lenworth N.; Raisch, Dennis W.; West, Dennis P.; Edwards, Beatrice J.; Belknap, Steven M.; Liebling, Dustin B.; Fisher, Mathew J.; Samaras, Athena T.; Jones, Lisa-Gaye A.; Tulas, Katrina-Marie E.; McKoy, June M.
2011-01-01
Although amiodarone is the most commonly prescribed antiarrhythmic drug, its use is limited by serious toxicities, including optic neuropathy. Current reports of amiodarone-associated optic neuropathy identified from the Food and Drug Administration's Adverse Event Reporting System (FDA-AERS) and published case reports were reviewed. A total of 296 reports were identified: 214 from AERS, 59 from published case reports, and 23 from adverse event reports for patients enrolled in clinical trials. Mean duration of amiodarone therapy before vision loss was 9 months (range 1-84 months). Insidious onset of amiodarone-associated optic neuropathy (44%) was the most common presentation, and nearly one-third of patients were asymptomatic. Optic disc edema was present in 85% of cases. Following drug cessation, 58% had improved visual acuity, 21% were unchanged, and 21% had further decreased visual acuity. Legal blindness (<20/200) was noted in at least one eye in 20% of cases. Close ophthalmologic surveillance of patients throughout the course of amiodarone therapy is warranted. PMID:22385784
Desantis, Andrea; Haggard, Patrick
2016-01-01
To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063
Desantis, Andrea; Haggard, Patrick
2016-12-16
To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events.
Sounds activate visual cortex and improve visual discrimination.
Feng, Wenfeng; Störmer, Viola S; Martinez, Antigona; McDonald, John J; Hillyard, Steven A
2014-07-16
A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. Copyright © 2014 the authors 0270-6474/14/349817-08$15.00/0.
Finding the Correspondence of Audio-Visual Events by Object Manipulation
NASA Astrophysics Data System (ADS)
Nishibori, Kento; Takeuchi, Yoshinori; Matsumoto, Tetsuya; Kudo, Hiroaki; Ohnishi, Noboru
A human being understands the objects in the environment by integrating information obtained by the senses of sight, hearing and touch. In this integration, active manipulation of objects plays an important role. We propose a method for finding the correspondence of audio-visual events by manipulating an object. The method uses the general grouping rules of Gestalt psychology, i.e. “simultaneity” and “similarity” among the motion command, sound onsets and the motion of the object in images. In experiments, we used a microphone, a camera, and a robot with a hand manipulator. The robot grasps an object like a bell and shakes it, or grasps an object like a stick and beats a drum, in either a periodic or a non-periodic motion; the object then emits periodic or non-periodic events. To create a more realistic scenario, we placed another event source (a metronome) in the environment. As a result, we achieved a success rate of 73.8 percent in finding the correspondence between audio-visual events (afferent signals) related to robot motion (efferent signals).
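The “simultaneity” cue can be approximated by nearest-neighbour matching of onset times. The sketch below is a hypothetical Python illustration, not the authors' method; the 50 ms tolerance window is an assumption.

```python
def match_onsets(audio_onsets, visual_onsets, tolerance=0.05):
    """Pair each audio onset (seconds) with the nearest unused visual onset
    within a tolerance window -- a crude stand-in for the Gestalt
    'simultaneity' cue. Returns the matched pairs and the fraction of
    audio onsets that found a visual partner."""
    pairs = []
    unused = sorted(visual_onsets)
    for a in sorted(audio_onsets):
        best = min(unused, key=lambda v: abs(v - a), default=None)
        if best is not None and abs(best - a) <= tolerance:
            pairs.append((a, best))
            unused.remove(best)
    rate = len(pairs) / len(audio_onsets) if audio_onsets else 0.0
    return pairs, rate

# Bell sounds vs. hand-motion onsets; the distractor onset at 3.00 s has
# no audio partner, so the sound at 2.00 s goes unmatched.
pairs, rate = match_onsets([0.50, 1.01, 1.49, 2.00], [0.49, 1.00, 1.52, 3.00])
print(rate)  # 0.75
```

A metronome's sound onsets would similarly fail to line up with the robot's motion onsets, which is how temporal coincidence separates self-caused events from background ones.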
VAP/VAT: video analytics platform and test bed for testing and deploying video analytics
NASA Astrophysics Data System (ADS)
Gorodnichy, Dmitry O.; Dubrofsky, Elan
2010-04-01
Deploying Video Analytics in operational environments is extremely challenging. This paper presents a methodological approach developed by the Video Surveillance and Biometrics Section (VSB) of the Science and Engineering Directorate (S&E) of the Canada Border Services Agency (CBSA) to address these challenges. A three-phase approach to enable VA deployment within an operational agency is presented and the Video Analytics Platform and Testbed (VAP/VAT) developed by the VSB section is introduced. In addition to allowing the integration of third-party and in-house built VA codes into an existing video surveillance infrastructure, VAP/VAT also allows the agency to conduct an unbiased performance evaluation of the cameras and VA software available on the market. VAP/VAT consists of two components: EventCapture, which serves to automatically detect a "Visual Event", and EventBrowser, which serves to display and peruse the "Visual Details" captured at the "Visual Event". To deal with both open-architecture and closed-architecture cameras, two video-feed capture mechanisms have been developed within the EventCapture component: IPCamCapture and ScreenCapture.
Sequential pattern data mining and visualization
Wong, Pak Chung [Richland, WA; Jurrus, Elizabeth R [Kennewick, WA; Cowley, Wendy E [Benton City, WA; Foote, Harlan P [Richland, WA; Thomas, James J [Richland, WA
2011-12-06
One or more processors (22) are operated to extract a number of different event identifiers therefrom. These processors (22) are further operable to determine a number of display locations, each representative of one of the different identifiers and a corresponding time. The display locations are grouped into sets, each corresponding to a different one of several event sequences (330a, 330b, 330c, 330d, 330e). An output is generated corresponding to a visualization (320) of the event sequences (330a, 330b, 330c, 330d, 330e).
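A minimal reading of this scheme can be sketched in Python: assign each distinct event identifier a display row, place each event at (time, row), and group the locations by event sequence. All names here are illustrative; the patent does not specify an implementation (the sequence labels merely echo its reference numerals).

```python
from collections import defaultdict

def layout_event_sequences(events):
    """Turn (sequence, identifier, time) records into display locations:
    x = time, y = a row assigned to each distinct event identifier,
    grouped per event sequence for plotting as separate tracks."""
    rows = {}                                  # event identifier -> y row
    for _, ident, _ in sorted(events, key=lambda e: e[2]):
        rows.setdefault(ident, len(rows))      # rows in order of first use
    grouped = defaultdict(list)
    for seq, ident, t in sorted(events, key=lambda e: e[2]):
        grouped[seq].append((t, rows[ident]))
    return dict(grouped), rows

events = [("330a", "login", 0), ("330a", "query", 1),
          ("330b", "login", 2), ("330a", "logout", 3)]
seqs, rows = layout_event_sequences(events)
print(seqs["330a"])  # [(0, 0), (1, 1), (3, 2)]
```

Each sequence then renders as its own polyline over a shared identifier axis, which is one plausible form of the visualization (320) the abstract describes.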
Sequential pattern data mining and visualization
Wong, Pak Chung [Richland, WA; Jurrus, Elizabeth R [Kennewick, WA; Cowley, Wendy E [Benton City, WA; Foote, Harlan P [Richland, WA; Thomas, James J [Richland, WA
2009-05-26
One or more processors (22) are operated to extract a number of different event identifiers therefrom. These processors (22) are further operable to determine a number of display locations, each representative of one of the different identifiers and a corresponding time. The display locations are grouped into sets, each corresponding to a different one of several event sequences (330a, 330b, 330c, 330d, 330e). An output is generated corresponding to a visualization (320) of the event sequences (330a, 330b, 330c, 330d, 330e).
Osaka, Naoyuki; Matsuyoshi, Daisuke; Ikeda, Takashi; Osaka, Mariko
2010-03-10
The recent development of cognitive neuroscience has invited inference about the neurosensory events underlying the experience of visual arts involving implied motion. We report a functional magnetic resonance imaging study demonstrating activation of the human extrastriate motion-sensitive cortex by static images showing implied motion because of instability. We used static line-drawing cartoons of humans by Hokusai Katsushika (called 'Hokusai Manga'), an outstanding Japanese cartoonist as well as a famous Ukiyoe artist. We found that 'Hokusai Manga' with implied motion, depicting human bodies engaged in a challenging tonic posture, significantly activated the motion-sensitive visual cortex including MT+ in the human extrastriate cortex, while an illustration that does not imply motion, for either humans or objects, did not activate these areas under the same tasks. We conclude that the motion-sensitive extrastriate cortex is a critical region for the perception of implied motion conveyed by instability.
The role of temporal structure in human vision.
Blake, Randolph; Lee, Sang-Hun
2005-03-01
Gestalt psychologists identified several stimulus properties thought to underlie visual grouping and figure/ground segmentation, and among those properties was common fate: the tendency to group together individual objects that move together in the same direction at the same speed. Recent years have witnessed an upsurge of interest in visual grouping based on other time-dependent sources of visual information, including synchronized changes in luminance, in motion direction, and in figure/ground relations. These various sources of temporal grouping information can be subsumed under the rubric temporal structure. In this article, the authors review evidence bearing on the effectiveness of temporal structure in visual grouping. They start with an overview of evidence bearing on temporal acuity of human vision, covering studies dealing with temporal integration and temporal differentiation. They then summarize psychophysical studies dealing with figure/ground segregation based on temporal phase differences in deterministic and stochastic events. The authors conclude with a brief discussion of neurophysiological implications of these results.
MassImager: A software for interactive and in-depth analysis of mass spectrometry imaging data.
He, Jiuming; Huang, Luojiao; Tian, Runtao; Li, Tiegang; Sun, Chenglong; Song, Xiaowei; Lv, Yiwei; Luo, Zhigang; Li, Xin; Abliz, Zeper
2018-07-26
Mass spectrometry imaging (MSI) has become a powerful tool to probe molecular events in biological tissue. However, one of the biggest challenges is widely held to be the lack of easy-to-use data processing software for discovering the underlying biological information in complicated and huge MSI datasets. Here, a user-friendly and full-featured MSI software package named MassImager, comprising three subsystems, Solution, Visualization and Intelligence, is developed, focusing on interactive visualization, in-situ biomarker discovery and artificially intelligent pathological diagnosis. Simplified data preprocessing together with high-throughput MSI data exchange and serialization guarantees the quick reconstruction of ion images and rapid analysis of datasets of dozens of gigabytes. It also offers diverse self-defined operations for visual processing, including multiple-ion visualization, multiple-channel superposition, image normalization, visual resolution enhancement and image filtering. Region-of-interest analysis can be performed precisely through interactive visualization between the ion images and mass spectra, as well as the overlaid optical image guide, to directly find region-specific biomarkers. Moreover, automatic pattern recognition can be achieved immediately upon supervised or unsupervised multivariate statistical modeling. Clear discrimination between cancer tissue and adjacent tissue within an MSI dataset can be seen in the generated pattern image, which shows great potential for visual in-situ biomarker discovery and artificially intelligent pathological diagnosis of cancer. All the features are integrated in MassImager to provide a deep MSI processing solution at the in-situ metabolomics level for biomarker discovery and future clinical pathological diagnosis. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
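Of the listed operations, image normalization is the easiest to illustrate. The sketch below shows generic per-pixel total-ion-current (TIC) normalization of a single-ion image; the abstract does not specify which normalization MassImager implements, so this method is an assumption.

```python
def normalize_ion_image(ion_image, tic_image):
    """Per-pixel total-ion-current (TIC) normalization of a single-ion
    intensity image (a common MSI normalization, shown here as a generic
    sketch). Pixels with zero TIC are set to 0 to avoid division by zero."""
    return [[ion / tic if tic else 0.0
             for ion, tic in zip(ion_row, tic_row)]
            for ion_row, tic_row in zip(ion_image, tic_image)]

# Each pixel's ion intensity is divided by that pixel's summed intensity.
norm = normalize_ion_image([[2.0, 4.0], [0.0, 6.0]],
                           [[4.0, 8.0], [0.0, 12.0]])
print(norm)  # [[0.5, 0.5], [0.0, 0.5]]
```

Normalizing this way suppresses pixel-to-pixel ionization-efficiency differences, so that the remaining contrast in the ion image reflects the analyte's distribution rather than acquisition artefacts.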
Rowe, F J; Conroy, E J; Bedson, E; Cwiklinski, E; Drummond, A; García-Fiñana, M; Howard, C; Pollock, A; Shipman, T; Dodridge, C; MacIntosh, C; Johnson, S; Noonan, C; Barton, G; Sackley, C
2017-10-01
A pilot trial comparing prism therapy and visual search training for homonymous hemianopia with standard care (information only). Prospective, multicentre, parallel, single-blind, three-arm RCT across fifteen UK acute stroke units. Participants: stroke survivors with homonymous hemianopia. Arm a (Fresnel prisms) for a minimum of 2 hours, 5 days per week over 6 weeks. Arm b (visual search training) for a minimum of 30 minutes, 5 days per week over 6 weeks. Arm c (standard care: information only). Adult stroke survivors (>18 years), stable hemianopia, visual acuity better than 0.5 logMAR, refractive error within ±5 dioptres, ability to read/understand English and provide consent. Primary outcomes were change in visual field area from baseline to 26 weeks and calculation of the sample size for a definitive trial. Secondary measures included the Rivermead Mobility Index, Visual Function Questionnaire 25/10, Nottingham Extended Activities of Daily Living, EuroQol and Short Form-12 questionnaires, and Radner reading ability. Measures were taken post-randomization at baseline and at 6, 12 and 26 weeks. Randomization block lists were stratified by site and partial/complete hemianopia. Allocations were disclosed to patients. The primary outcome assessor was blind to treatment allocation. Eighty-seven patients were recruited: 27 to Fresnel prisms, 30 to visual search training and 30 to standard care; 69% male; mean age 69 years (SD 12). At 26 weeks, full results for 24, 24 and 22 patients, respectively, were compared to baseline. The sample size for a definitive trial was determined as 269 participants per arm for a 200 degree² change in visual field area at 90% power. The non-significant relative change in visual field area was 5%, 8% and 3.5%, respectively, for the three groups.
Visual Function Questionnaire responses improved significantly from baseline to 26 weeks with visual search training (60 [SD 19] to 68.4 [SD 20]) compared to Fresnel prisms (68.5 [SD 16.4] to 68.2 [18.4]: 7% difference) and standard care (63.7 [SD 19.4] to 59.8 [SD 22.7]: 10% difference), P=.05. Related adverse events were common with Fresnel prisms (69.2%; typically headaches). No significant change occurred for area of visual field area across arms over follow-up. Visual search training had significant improvement in vision-related quality of life. Prism therapy produced adverse events in 69%. Visual search training results warrant further investigation. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
ERIC Educational Resources Information Center
Lewkowicz, David J.
2003-01-01
Three experiments examined 4- to 10-month-olds' perception of audio-visual (A-V) temporal synchrony cues in the presence or absence of rhythmic pattern cues. Results established that infants of all ages could discriminate between two different audio-visual rhythmic events. Only 10-month-olds detected a desynchronization of the auditory and visual…
Visualization Techniques for Computer Network Defense
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beaver, Justin M; Steed, Chad A; Patton, Robert M
2011-01-01
Effective visual analysis of computer network defense (CND) information is challenging due to the volume and complexity of both the raw and analyzed network data. A typical CND is comprised of multiple niche intrusion detection tools, each of which performs network data analysis and produces a unique alerting output. The state-of-the-practice in the situational awareness of CND data is the prevalent use of custom-developed scripts by Information Technology (IT) professionals to retrieve, organize, and understand potential threat events. We propose a new visual analytics framework, called the Oak Ridge Cyber Analytics (ORCA) system, for CND data that allows an operator to interact with all detection tool outputs simultaneously. Aggregated alert events are presented in multiple coordinated views with timeline, cluster, and swarm model analysis displays. These displays are complemented with both supervised and semi-supervised machine learning classifiers. The intent of the visual analytics framework is to improve CND situational awareness, to enable an analyst to quickly navigate and analyze thousands of detected events, and to combine sophisticated data analysis techniques with interactive visualization such that patterns of anomalous activities may be more easily identified and investigated.
Cognitive processing of visual images in migraine populations in between headache attacks.
Mickleborough, Marla J S; Chapman, Christine M; Toma, Andreea S; Handy, Todd C
2014-09-25
People with migraine headache have altered interictal visual sensory-level processing in between headache attacks. Here we examined the extent to which these migraine abnormalities may extend into higher visual processing such as implicit evaluative analysis of visual images in between migraine events. Specifically, we asked two groups of participants--migraineurs (N=29) and non-migraine controls (N=29)--to view a set of unfamiliar commercial logos in the context of a target identification task as the brain electrical responses to these objects were recorded via event-related potentials (ERPs). Following this task, participants individually identified those logos that they most liked or disliked. We applied a between-groups comparison of how ERP responses to logos varied as a function of hedonic evaluation. Our results suggest migraineurs have abnormal implicit evaluative processing of visual stimuli. Specifically, migraineurs lacked a bias for disliked logos found in control subjects, as measured via a late positive potential (LPP) ERP component. These results suggest post-sensory consequences of migraine in between headache events, specifically abnormal cognitive evaluative processing with a lack of normal categorical hedonic evaluation. Copyright © 2014 Elsevier B.V. All rights reserved.
Immersive Visual Data Analysis For Geoscience Using Commodity VR Hardware
NASA Astrophysics Data System (ADS)
Kreylos, O.; Kellogg, L. H.
2017-12-01
Immersive visualization using virtual reality (VR) display technology offers tremendous benefits for the visual analysis of complex three-dimensional data like those commonly obtained from geophysical and geological observations and models. Unlike "traditional" visualization, which has to project 3D data onto a 2D screen for display, VR can side-step this projection and display 3D data directly, in a pseudo-holographic (head-tracked stereoscopic) form, and therefore does not suffer the distortions of relative positions, sizes, distances, and angles that are inherent in 2D projection. As a result, researchers can apply their spatial reasoning skills to virtual data in the same way they can to real objects or environments. The UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES, http://keckcaves.org) has been developing VR methods for data analysis since 2005, but the high cost of VR displays has been preventing large-scale deployment and adoption of KeckCAVES technology. The recent emergence of high-quality commodity VR, spearheaded by the Oculus Rift and HTC Vive, has fundamentally changed the field. With KeckCAVES' foundational VR operating system, Vrui, now running natively on the HTC Vive, all KeckCAVES visualization software, including 3D Visualizer, LiDAR Viewer, Crusta, Nanotech Construction Kit, and ProtoShop, is now available to small labs, single researchers, and even home users. LiDAR Viewer and Crusta have been used for rapid response to geologic events including earthquakes and landslides, to visualize the impacts of sea-level rise, to investigate reconstructed paleo-oceanographic masses, and for exploration of the surface of Mars. The Nanotech Construction Kit is being used to explore the phases of carbon in Earth's deep interior, while ProtoShop can be used to construct and investigate protein structures.
Kellenbach, Marion L; Wijers, Albertus A; Hovius, Marjolijn; Mulder, Juul; Mulder, Gijsbertus
2002-05-15
Event-related potentials (ERPs) were used to investigate whether processing differences between nouns and verbs can be accounted for by the differential salience of visual-perceptual and motor attributes in their semantic specifications. Three subclasses of nouns and verbs were selected, which differed in their semantic attribute composition (abstract, high visual, high visual and motor). Single visual word presentation with a recognition memory task was used. While multiple robust and parallel ERP effects were observed for both grammatical class and attribute type, there were no interactions between these. This pattern of effects provides support for lexical-semantic knowledge being organized in a manner that takes account both of category-based (grammatical class) and attribute-based distinctions.
Saur, Randi; Hansen, Marianne Bang; Jansen, Anne; Heir, Trond
2017-04-01
To explore the types of risks and hazards that visually impaired individuals face, how they manage potential threats and how reactions to traumatic events are manifested and coped with. Participants were 17 visually impaired individuals who had experienced some kind of potentially traumatic event. Two focus groups and 13 individual interviews were conducted. The participants experienced a variety of hazards and potential threats in their daily life. Fear of daily accidents was more pronounced than fear of disasters. Some participants reported avoiding help-seeking in unsafe situations due to shame at not being able to cope. The ability to be independent was highlighted. Traumatic events were re-experienced through a variety of sense modalities. Fear of labelling and avoidance of potential risks were recurring topics, and the risks of social withdrawal and isolation were addressed. Visual impairment causes a need for predictability and adequate information to increase and prepare for coping and self-efficacy. The results from this study call for greater emphasis on universal design in order to ensure safety and predictability. Fear of being labelled may inhibit people from using assistive devices and adequate coping strategies and seeking professional help in the aftermath of a trauma. Implications for Rehabilitation: Visual impairment entails a greater susceptibility to a variety of hazards and potential threats in daily life. This calls for a greater emphasis on universal design in public spaces to ensure confidence and safety. Visual impairment implies a need for predictability and adequate information to prepare for coping and self-efficacy. Rehabilitation professionals should be aware of the need for independence and self-reliance, the possible fear of labelling, avoidance of help-seeking or reluctance to use assistive devices.
In rehabilitation after accidents or potential traumatizing events, professionals' knowledge about the needs for information, training and predictability is crucial. The possibility of social withdrawal or isolation should be considered.
Multi-viewpoint Coronal Mass Ejection Catalog Based on STEREO COR2 Observations
NASA Astrophysics Data System (ADS)
Vourlidas, Angelos; Balmaceda, Laura A.; Stenborg, Guillermo; Dal Lago, Alisson
2017-04-01
We present the first multi-viewpoint coronal mass ejection (CME) catalog. The events are identified visually in simultaneous total brightness observations from the twin SECCHI/COR2 coronagraphs on board the Solar Terrestrial Relations Observatory mission. The Multi-View CME Catalog differs from past catalogs in three key aspects: (1) all events between the two viewpoints are cross-linked, (2) each event is assigned a physics-motivated morphological classification (e.g., jet, wave, and flux rope), and (3) kinematic and geometric information is extracted semi-automatically via a supervised image segmentation algorithm. The database extends from the beginning of the COR2 synoptic program (2007 March) to the end of dual-viewpoint observations (2014 September). It contains 4473 unique events with 3358 events identified in both COR2s. Kinematic properties currently exist for 1747 events (26% of COR2-A events and 17% of COR2-B events). We examine several issues, made possible by this cross-linked CME database, including the role of projection on the perceived morphology of events, the missing CME rate, the existence of cool material in CMEs, the solar-cycle dependence of CME rate, speeds, and widths, and the existence of flux ropes within CMEs. We discuss the implications for past single-viewpoint studies and for Space Weather research. The database is publicly available on the web, including all available measurements. We hope that it will become a useful resource for the community.
Distortions of Subjective Time Perception Within and Across Senses
van Wassenhove, Virginie; Buonomano, Dean V.; Shimojo, Shinsuke; Shams, Ladan
2008-01-01
Background The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood. Methodology/Findings We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information and was never perceived shorter than their actual durations. Conclusions/Significance These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration can neither be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions. PMID:18197248
Farnum, C E; Turgai, J; Wilsman, N J
1990-09-01
The functional unit within the growth plate consists of a column of chondrocytes that passes through a sequence of phases including proliferation, hypertrophy, and death. It is important to our understanding of the biology of the growth plate to determine if distal hypertrophic cells are viable, highly differentiated cells with the potential of actively controlling terminal events of endochondral ossification prior to their death at the chondro-osseous junction. This study for the first time reports on the visualization of living hypertrophic chondrocytes in situ, including the terminal hypertrophic chondrocyte. Chondrocytes in growth plate explants are visualized using rectified differential interference contrast microscopy. We record and measure, using time-lapse cinematography, the rate of movement of subcellular organelles at the limit of resolution of this light microscopy system. Control experiments to assess viability of hypertrophic chondrocytes include coincubating organ cultures with the intravital dye fluorescein diacetate to assess the integrity of the plasma membrane and cytoplasmic esterases. In this system, all hypertrophic chondrocytes, including the very terminal chondrocyte, exist as rounded, fully hydrated cells. By the criteria of intravital dye staining and organelle movement, distal hypertrophic chondrocytes are identical to chondrocytes in the proliferative and early hypertrophic cell zones.
Centralized Alert-Processing and Asset Planning for Sensorwebs
NASA Technical Reports Server (NTRS)
Castano, Rebecca; Chien, Steve A.; Rabideau, Gregg R.; Tang, Benyang
2010-01-01
A software program provides a Sensorweb architecture for alert-processing, event detection, asset allocation and planning, and visualization. It automatically tasks and re-tasks various types of assets such as satellites and robotic vehicles in response to alerts (fire, weather) extracted from various data sources, including low-level Webcam data. JPL has adapted considerable Sensorweb infrastructure that had been previously applied to NASA Earth Science applications. This NASA Earth Science Sensorweb has been in operational use since 2003, and has proven reliability of the Sensorweb technologies for robust event detection and autonomous response using space and ground assets. Unique features of the software include flexibility to a range of detection and tasking methods including those that require aggregation of data over spatial and temporal ranges, generality of the response structure to represent and implement a range of response campaigns, and the ability to respond rapidly.
Description of the TCERT Vetting Reports for Data Release 25
NASA Technical Reports Server (NTRS)
Coughlin, Jeffrey L.
2017-01-01
The Q1Q17 DR25 TCERT Vetting Reports are a collection of plots and diagnostics used by the Threshold Crossing Event Review Team (TCERT) to evaluate threshold crossing events (TCEs). While designation of Kepler Objects of Interest (KOIs) and classification of them as Planet Candidates (PCs) or False Positives (FPs) is completely automated via a robotic vetting procedure (the Robovetter) for the Q1Q17 DR25 planet catalog, as described in Thompson et al. (2017), these reports help to visualize the metrics used by the Robovetter and evaluate those robotic decisions for individual objects. For each Q1Q17 DR25 TCE, these reports include the following products: (a) the DV one-page summary, (b) selected pertinent diagnostics and plots from the full DV report, and (c) additional plots and diagnostics not included in the full DV report, including an alternate means of data detrending.
Visualizing Tensions in an Ethnographic Moment: Images and Intersubjectivity.
Crowder, Jerome W
2017-01-01
Images function as sources of data and influence our thinking about fieldwork, representation, and intersubjectivity. In this article, I show how both the ethnographic relationships and the working method of photography lead to a more nuanced understanding of a healing event. I systematically analyze 33 photographs made over a 15-minute period during the preparation and application of a poultice (topical cure) in a rural Andean home. The images chronicle the event, revealing my initial reaction and the decisions I made when tripping the shutter. By unpacking the relationship between ethnographer and subject, I reveal the constant negotiation of positions, assumptions, and expectations that make up intersubjectivity. For transparency, I provide thumbnails of all images, including metadata, so that readers may consider alternative interpretations of the images and event.
Sustainable Model for Public Health Emergency Operations Centers for Global Settings.
Balajee, S Arunmozhi; Pasi, Omer G; Etoundi, Alain Georges M; Rzeszotarski, Peter; Do, Trang T; Hennessee, Ian; Merali, Sharifa; Alroy, Karen A; Phu, Tran Dac; Mounts, Anthony W
2017-10-01
Capacity to receive, verify, analyze, assess, and investigate public health events is essential for epidemic intelligence. Public health Emergency Operations Centers (PHEOCs) can be epidemic intelligence hubs by 1) having the capacity to receive, analyze, and visualize multiple data streams, including surveillance, and 2) maintaining a trained workforce that can analyze and interpret data from real-time emerging events. Such PHEOCs could be physically located within a ministry of health epidemiology, surveillance, or equivalent department rather than exist as a stand-alone space, serving as operational hubs during non-outbreak times but scaling up in emergencies according to the traditional Incident Command System structure.
Thermal wake/vessel detection technique
Roskovensky, John K [Albuquerque, NM; Nandy, Prabal [Albuquerque, NM; Post, Brian N [Albuquerque, NM
2012-01-10
A computer-automated method for detecting a vessel in water based on an image of a portion of Earth includes generating a thermal anomaly mask. The thermal anomaly mask flags each pixel of the image initially deemed to be a wake pixel based on a comparison of a thermal value of each pixel against other thermal values of other pixels localized about each pixel. Contiguous pixels flagged by the thermal anomaly mask are grouped into pixel clusters. A shape of each of the pixel clusters is analyzed to determine whether each of the pixel clusters represents a possible vessel detection event. The possible vessel detection events are represented visually within the image.
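The pipeline described above (per-pixel thermal comparison against a localized neighborhood, then grouping contiguous flagged pixels into clusters for shape analysis) can be sketched as follows. This is an illustrative reading of the abstract, not the patented method: the window size, deviation threshold, and minimum cluster size are assumptions.

```python
import numpy as np
from scipy import ndimage

def thermal_anomaly_mask(img, win=5, k=1.5):
    """Flag pixels whose thermal value deviates from the local mean by more
    than k local standard deviations (a sketch of comparing each pixel
    against other thermal values localized about it)."""
    f = img.astype(float)
    local_mean = ndimage.uniform_filter(f, size=win)
    local_sq = ndimage.uniform_filter(f ** 2, size=win)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0.0))
    return np.abs(f - local_mean) > k * (local_std + 1e-9)

def cluster_candidates(mask, min_pixels=3):
    """Group contiguous flagged pixels into clusters; each surviving cluster
    is a candidate whose shape would then be analyzed as a possible wake."""
    labels, n = ndimage.label(mask)
    return [np.argwhere(labels == i) for i in range(1, n + 1)
            if (labels == i).sum() >= min_pixels]
```

For example, a warm linear streak in an otherwise uniform scene survives both stages as a single elongated cluster, while isolated noise pixels are discarded by the minimum-size filter.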
Declarative and nondeclarative memory: multiple brain systems supporting learning and memory.
Squire, L R
1992-01-01
The topic of multiple forms of memory is considered from a biological point of view. Fact-and-event (declarative, explicit) memory is contrasted with a collection of nonconscious (nondeclarative, implicit) memory abilities, including skills and habits, priming, and simple conditioning. Recent evidence is reviewed indicating that declarative and nondeclarative forms of memory have different operating characteristics and depend on separate brain systems. A brain-systems framework for understanding memory phenomena is developed in light of lesion studies involving rats, monkeys, and humans, as well as recent studies with normal humans using the divided visual field technique, event-related potentials, and positron emission tomography (PET).
Developing, deploying and reflecting on a web-based geologic simulation tool
NASA Astrophysics Data System (ADS)
Cockett, R.
2015-12-01
Geoscience is visual. It requires geoscientists to think and communicate about processes and events in three spatial dimensions and variations through time. This is hard(!), and students often have difficulty when learning and visualizing the three dimensional and temporal concepts. Visible Geology is an online geologic block modelling tool that is targeted at students in introductory and structural geology. With Visible Geology, students are able to combine geologic events in any order to create their own geologic models and ask 'what-if' questions, as well as interrogate their models using cross sections, boreholes and depth slices. Instructors use it as a simulation and communication tool in demonstrations, and students use it to explore concepts of relative geologic time, structural relationships, as well as visualize abstract geologic representations such as stereonets. The level of interactivity and creativity inherent in Visible Geology often results in a sense of ownership and encourages engagement, leading learners to practice visualization and interpretation skills and discover geologic relationships. Through its development over the last five years, Visible Geology has been used by over 300K students worldwide as well as in multiple targeted studies at the University of Calgary and at the University of British Columbia. The ease of use of the software has made this tool practical for deployment in classrooms of any size as well as for individual use. In this presentation, I will discuss the thoughts behind the implementation and layout of the tool, including a framework used for the development and design of new educational simulations. I will also share some of the surprising and unexpected observations on student interaction with the 3D visualizations, and other insights that are enabled by web-based development and deployment.
A Distributed Compressive Sensing Scheme for Event Capture in Wireless Visual Sensor Networks
NASA Astrophysics Data System (ADS)
Hou, Meng; Xu, Sen; Wu, Weiling; Lin, Fei
2018-01-01
Image signals acquired by a wireless visual sensor network can be used to capture specific events; the event capture is realized by image processing at the sink node. A distributed compressive sensing scheme is used to transmit these image signals from the camera nodes to the sink node, and a measurement and joint reconstruction algorithm for these image signals is proposed in this paper. Taking advantage of the spatial correlation between images within a sensing area, the cluster head node, acting as the image decoder, can accurately co-reconstruct these image signals. Subjective visual quality and the reconstruction error rate are used to evaluate reconstructed image quality. Simulation results show that the joint reconstruction algorithm achieves higher image quality at the same image compression rate than the independent reconstruction algorithm.
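As a rough illustration of the compressive-sensing machinery involved (independent recovery of one sparse signal from random linear measurements, not the paper's joint reconstruction, which additionally exploits inter-image correlation), here is a minimal orthogonal matching pursuit sketch; the signal length, measurement count, and sparsity are arbitrary assumptions:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: estimate a k-sparse x with y ≈ Phi @ x."""
    residual = y.astype(float).copy()
    support, coef = [], np.zeros(0)
    for _ in range(k):
        # Pick the sensing-matrix column most correlated with the residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the current support, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 64, 32, 4                         # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
y = Phi @ x                                 # compressive measurements sent to the sink
x_hat = omp(Phi, y, k)
```

In the distributed setting, each camera node would compute only its cheap measurements `y`, leaving all reconstruction work to the decoder.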
Dave, Hreem; Phoenix, Vidya; Becker, Edmund R; Lambert, Scott R
2010-08-01
To compare the incidence of adverse events and visual outcomes and to compare the economic costs of sequential vs simultaneous bilateral cataract surgery for infants with congenital cataracts. Retrospective review of simultaneous vs sequential bilateral cataract surgery for infants with congenital cataracts who underwent cataract surgery when 6 months or younger at our institution. Records were available for 10 children who underwent sequential surgery at a mean age of 49 days for the first eye and 17 children who underwent simultaneous surgery at a mean age of 68 days (P = .25). We found a similar incidence of adverse events between the 2 treatment groups. Intraoperative or postoperative complications occurred in 14 eyes. The most common postoperative complication was glaucoma. No eyes developed endophthalmitis. The mean (SD) absolute interocular difference in logMAR visual acuities between the 2 treatment groups was 0.47 (0.76) for the sequential group and 0.44 (0.40) for the simultaneous group (P = .92). Payments for the hospital, drugs, supplies, and professional services were on average 21.9% lower per patient in the simultaneous group. Simultaneous bilateral cataract surgery for infants with congenital cataracts is associated with a 21.9% reduction in medical payments and no discernible difference in the incidence of adverse events or visual outcomes. However, our small sample size limits our ability to make meaningful comparisons of the relative risks and visual benefits of the 2 procedures.
Action starring narratives and events: Structure and inference in visual narrative comprehension
Cohn, Neil; Wittenberg, Eva
2015-01-01
Studies of discourse have long placed focus on the inference generated by information that is not overtly expressed, and theories of visual narrative comprehension similarly focused on the inference generated between juxtaposed panels. Within the visual language of comics, star-shaped “flashes” commonly signify impacts, but can be enlarged to the size of a whole panel that can omit all other representational information. These “action star” panels depict a narrative culmination (a “Peak”), but have content which readers must infer, thereby posing a challenge to theories of inference generation in visual narratives that focus only on the semantic changes between juxtaposed images. This paper shows that action stars demand more inference than depicted events, and that they are more coherent in narrative sequences than scrambled sequences (Experiment 1). In addition, action stars play a felicitous narrative role in the sequence (Experiment 2). Together, these results suggest that visual narratives use conventionalized depictions that demand the generation of inferences while retaining narrative coherence of a visual sequence. PMID:26709362
Temporal Structure and Complexity Affect Audio-Visual Correspondence Detection
Denison, Rachel N.; Driver, Jon; Ruff, Christian C.
2013-01-01
Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration. PMID:23346067
Shirane, Seiko; Inagaki, Masumi; Sata, Yoshimi; Kaga, Makiko
2004-07-01
In order to evaluate visual perception, the P300 event-related potentials (ERPs) for visual oddball tasks were recorded in 11 patients with attention deficit/hyperactivity disorder (AD/HD), 12 with mental retardation (MR) and 14 age-matched healthy controls. With the aim of revealing trial-to-trial variabilities that are neglected when only averaged ERPs are investigated, single sweep P300s (ss-P300s) were assessed in addition to the averaged P300. There were no significant differences in averaged P300 latency and amplitude between controls and AD/HD patients. AD/HD patients showed an increased variability in the amplitude of ss-P300s, while MR patients showed an increased variability in latency. These findings suggest that in AD/HD patients general attention is impaired to a larger extent than selective attention and visual perception.
The behavioral context of visual displays in common marmosets (Callithrix jacchus).
de Boer, Raïssa A; Overduin-de Vries, Anne M; Louwerse, Annet L; Sterck, Elisabeth H M
2013-11-01
Communication is important in social species, and may occur with the use of visual, olfactory or auditory signals. However, visual communication may be hampered in species that are arboreal, have elaborate facial coloring, and live in small groups. The common marmoset fits these criteria and may have limited visual communication. Nonetheless, some (contradictory) propositions concerning visual displays in the common marmoset have been made, yet quantitative data are lacking. The aim of this study was to assign a behavioral context to different visual displays using pre-post-event analyses. Focal observations were conducted on 16 captive adult and sub-adult marmosets in three different family groups. Based on behavioral elements with an unambiguous meaning, four different behavioral contexts were distinguished: aggression, fear, affiliation, and play behavior. Visual displays concerned behavior that included facial expressions, body postures, and pilo-erection of the fur. Visual displays related to aggression, fear, and play/affiliation were consistent with the literature. We propose that the visual display "pilo-erection tip of tail" is related to fear. Individuals receiving these fear signals showed a higher rate of affiliative behavior. This study indicates that several visual displays may provide cues or signals of particular social contexts. Since the three displays of fear elicited an affiliative response, they may communicate a request for anxiety reduction or signal an external referent. Concluding, common marmosets, despite being arboreal and living in small groups, use several visual displays to communicate with conspecifics, and their facial coloration may not hamper, but actually promote, the visibility of visual displays. © 2013 Wiley Periodicals, Inc.
Self-Imagining Enhances Recognition Memory in Memory-Impaired Individuals with Neurological Damage
Grilli, Matthew D.; Glisky, Elizabeth L.
2010-01-01
Objective The ability to imagine an elaborative event from a personal perspective relies on a number of cognitive processes that may potentially enhance subsequent memory for the event, including visual imagery, semantic elaboration, emotional processing, and self-referential processing. In an effort to find a novel strategy for enhancing memory in memory-impaired individuals with neurological damage, the present study investigated the mnemonic benefit of a method we refer to as “self-imagining” – or the imagining of an event from a realistic, personal perspective. Method Fourteen individuals with neurologically-based memory deficits and fourteen healthy control participants intentionally encoded neutral and emotional sentences under three instructions: structural-baseline processing, semantic processing, and self-imagining. Results Findings revealed a robust “self-imagination effect” as self-imagination enhanced recognition memory relative to deep semantic elaboration in both memory-impaired individuals, F (1, 13) = 32.11, p < .001, η2 = .71, and healthy controls, F (1, 13) = 5.57, p < .05, η2 = .30. In addition, results indicated that mnemonic benefits of self-imagination were not limited by severity of the memory disorder nor were they related to self-reported vividness of visual imagery, semantic processing, or emotional content of the materials. Conclusions The findings suggest that the self-imagination effect may depend on unique mnemonic mechanisms possibly related to self-referential processing, and that imagining an event from a personal perspective makes that event particularly memorable even for those individuals with severe memory deficits. Self-imagining may thus provide an effective rehabilitation strategy for individuals with memory impairment. PMID:20873930
Tsertsvadze, Alexander; Yazdi, Fatemeh; Fink, Howard A; MacDonald, Roderick; Wilt, Timothy J; Bella, Anthony J; Ansari, Mohammed T; Garritty, Chantelle; Soares-Weiser, Karla; Daniel, Raymond; Sampson, Margaret; Moher, David
2009-10-01
To summarize and compare evidence on harms in sildenafil- and placebo-treated men with erectile dysfunction (ED) in a systematic review and meta-analysis. Randomized placebo-controlled trials (RCTs) were identified using an electronic search in MEDLINE, EMBASE, PsycINFO, SCOPUS, and Cochrane CENTRAL. The rates of any adverse events (AEs), the most commonly reported AEs, withdrawals because of adverse events, and serious adverse events were ascertained and compared between sildenafil and placebo groups. The results of men with ED were stratified by clinical condition(s). Statistical heterogeneity was explored. Meta-analyses based on a random-effects model were also performed. A total of 49 RCTs were included. Sildenafil-treated men had a higher risk of all-cause AEs (RR = 1.56, 95% CI: 1.38, 1.76), headache, flushing, dyspepsia, and visual disturbances compared with placebo-treated men. The magnitude of excess risk was greater in fixed- than in flexible-dose trials. The rates of serious adverse events and withdrawals because of adverse events did not differ between sildenafil and placebo groups. A higher dose of sildenafil corresponded to a greater risk of AEs. The increased risk of harms was observed within and across clinically defined specific groups of patients. There was a lack of RCTs reporting long-term (>6 months) harms data. In short-term trials, men with ED randomized to sildenafil had an increased risk of all-cause AEs, headache, flushing, dyspepsia, and visual disturbances. The exploration of different modes of dose optimization of sildenafil may be warranted.
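The risk ratios reported above (e.g., RR = 1.56, 95% CI: 1.38, 1.76) follow the standard 2×2 computation with a log-scale confidence interval. As a hedged illustration, the sketch below recomputes a risk ratio from hypothetical event counts; the counts are invented for the example, not taken from the review:

```python
import math

def risk_ratio(events_trt, n_trt, events_ctl, n_ctl):
    """Risk ratio with a 95% CI via the standard error of log(RR)."""
    rr = (events_trt / n_trt) / (events_ctl / n_ctl)
    # SE of log(RR) from the usual 2x2 formula
    se = math.sqrt(1 / events_trt - 1 / n_trt + 1 / events_ctl - 1 / n_ctl)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical counts for one treatment-vs-placebo comparison
rr, lo, hi = risk_ratio(120, 400, 75, 390)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```

A random-effects meta-analysis would then pool such log(RR) estimates across trials, weighting each by the inverse of its variance plus a between-study variance term.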
Nonvisualization of the ovaries on pelvic ultrasound: does MRI add anything?
Lisanti, Christopher J; Wood, Jonathan R; Schwope, Ryan B
2014-02-01
The purpose of our study is to assess the utility of pelvic magnetic resonance imaging (MRI) in the event that either one or both ovaries are not visualized by pelvic ultrasound. This HIPAA-compliant retrospective study was approved by our local institutional review board and informed consent was waived. 1926 pelvic MRI examinations between March 2007 and December 2011 were reviewed and included if a combined transabdominal and endovaginal pelvic ultrasound had been performed in the preceding 6 months with at least one ovary nonvisualized. Ovaries not visualized on pelvic ultrasound were assumed to be normal and compared with the pelvic MRI findings. MRI findings were categorized as concordant or discordant. Discordant findings were divided into malignant, non-malignant physiologic, or non-malignant non-physiologic. The modified Wald, the "rule of thirds", and the binomial distribution probability tests were performed. 255 pelvic ultrasounds met inclusion criteria, with 364 ovaries not visualized. No malignancies were detected on MRI. 6.9% (25/364) of nonvisualized ovaries had non-malignant discordant findings on MRI: 5.2% (19/364) physiologic, 1.6% (6/364) non-physiologic. Physiologic findings included 16 functional cysts and 3 hemorrhagic cysts. Non-physiologic findings included 3 cysts in post-menopausal women, 1 hydrosalpinx, and 2 broad ligament fibroids. The theoretical risk of detecting an ovarian carcinoma on pelvic MRI when an ovary is not visualized on ultrasound ranges from 0 to 1.3%. If an ovary is not visualized on pelvic ultrasound, it can be assumed to be without carcinoma, and MRI rarely adds additional information.
Inhibition of return shortens perceived duration of a brief visual event.
Osugi, Takayuki; Takeda, Yuji; Murakami, Ikuya
2016-11-01
We investigated the influence of attentional inhibition on the perceived duration of a brief visual event. Although attentional capture by an exogenous cue is known to prolong the perceived duration of an attended visual event, it remains unclear whether time perception is also affected by subsequent attentional inhibition at the location previously cued by an exogenous cue, an attentional phenomenon known as inhibition of return. In this study, we combined spatial cuing and duration judgment. One second after the appearance of an uninformative peripheral cue on either the left or the right, a target appeared on the cued side in one-third of the trials, which indeed yielded inhibition of return, and on the opposite side in another one-third of the trials. In the remaining trials, a cue appeared at a central box and, one second later, a target appeared on either the left or the right side. The target at the previously cued location was perceived as lasting a shorter time than the target presented at the opposite location, and shorter than the target presented after the central cue. Therefore, attentional inhibition produced by a classical paradigm of inhibition of return decreased the perceived duration of a brief visual event. Copyright © 2016 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Wilkinson, Krista M.; Light, Janice
2011-01-01
Purpose: Many individuals with complex communication needs may benefit from visual aided augmentative and alternative communication systems. In visual scene displays (VSDs), language concepts are embedded into a photograph of a naturalistic event. Humans play a central role in communication development and might be important elements in VSDs.…
Sata, Yoshimi; Inagaki, Masumi; Shirane, Seiko; Kaga, Makiko
2002-11-01
In order to objectively evaluate visual perception of patients with mental retardation (MR), the P300 event-related potentials (ERPs) for visual oddball tasks were recorded in 26 patients and 13 age-matched healthy volunteers. The latency and amplitude of the visual P300 in response to the Japanese ideogram stimuli (a pair of familiar Kanji characters or unfamiliar Kanji characters) and a pair of meaningless complicated figures were measured. In almost all MR patients a visual P300 was observed; however, the peak latency was significantly prolonged compared to control subjects. There was no significant difference in P300 latency among the three tasks. The distribution pattern of P300 in MR patients was different from that in the controls, and the amplitude in the frontal region was larger in MR patients. The latency decreased with age in both groups. The developmental change of P300 latency corresponded to developmental age rather than chronological age. These findings suggest that MR patients have impairment in the processing of visual perception. Assessment of P300 latencies to visual stimuli may be useful as an objective indicator of mental deficit.
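The peak-picking step behind a P300 latency/amplitude measurement of this kind can be sketched in a few lines. The sampling rate, search window, and synthetic waveform below are assumptions for illustration, not parameters from the study:

```python
import math

def p300_peak(erp_uv, fs_hz, window_ms=(250, 600)):
    """Find the most positive deflection in a post-stimulus window.

    erp_uv: averaged ERP samples (microvolts), epoch starting at stimulus onset.
    fs_hz: sampling rate. Returns (latency_ms, amplitude_uv).
    """
    i0 = int(window_ms[0] * fs_hz / 1000)
    i1 = int(window_ms[1] * fs_hz / 1000)
    seg = erp_uv[i0:i1]
    k = max(range(len(seg)), key=seg.__getitem__)   # index of the maximum
    return (i0 + k) * 1000.0 / fs_hz, seg[k]

# Synthetic averaged epoch at 250 Hz: a Gaussian positivity peaking at 400 ms
fs = 250
peak_idx = int(0.400 * fs)
epoch = [8.0 * math.exp(-((i - peak_idx) ** 2) / (2 * 5.0 ** 2))
         for i in range(200)]
lat, amp = p300_peak(epoch, fs)
print(lat, round(amp, 1))
```

Real pipelines average many artifact-free trials per condition before peak picking, and a prolonged latency (as in the MR group) would show up directly in the returned latency value.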
Cavanagh, Patrick
2011-01-01
Visual cognition, high-level vision, mid-level vision and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label “visual cognition” is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated, part of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events. PMID:21329719
Odors Bias Time Perception in Visual and Auditory Modalities
Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang
2016-01-01
Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced durations longer than the actual duration in the short-interval condition, but shorter durations in the long-interval condition. The effect sizes were larger for the auditory modality than for the visual modality. Moreover, by comparing performance across the initial and final blocks of the experiment, we found that odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, with a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, constrained by sensory modality, the valence of the emotional events, and target duration.
Biases in time perception could be accounted for by a framework of attentional deployment between the inducers (odors) and emotionally neutral stimuli (visual dots and sound beeps). PMID:27148143
ERIC Educational Resources Information Center
Moench, Candice
2012-01-01
This qualitative study focused on the use of multiliteracies (reading, writing, viewing, visually representing, talking, and listening) by four low-income African American LBT (lesbian, bisexual, transgender) adolescents in an out-of-school setting. Data collection methods over a three-month period included transcribed field notes, interviews,…
Sklar, A E; Sarter, N B
1999-12-01
Observed breakdowns in human-machine communication can be explained, in part, by the nature of current automation feedback, which relies heavily on focal visual attention. Such feedback is not well suited for capturing attention in case of unexpected changes and events or for supporting the parallel processing of large amounts of data in complex domains. As suggested by multiple-resource theory, one possible solution to this problem is to distribute information across various sensory modalities. A simulator study was conducted to compare the effectiveness of visual, tactile, and redundant visual and tactile cues for indicating unexpected changes in the status of an automated cockpit system. Both tactile conditions resulted in higher detection rates for, and faster response times to, uncommanded mode transitions. Tactile feedback did not interfere with, nor was its effectiveness affected by, the performance of concurrent visual tasks. The observed improvement in task-sharing performance indicates that the introduction of tactile feedback is a promising avenue toward better supporting human-machine communication in event-driven, information-rich domains.
Liu, Baolin; Wang, Zhongning; Jin, Zhixing
2009-09-11
In real life, the human brain usually receives information through visual and auditory channels and processes the multisensory information, but studies on the integrative processing of dynamic visual and auditory information are relatively few. In this paper, we designed an experiment in which, through the presentation of common-scenario, real-world videos with matched and mismatched actions (images) and sounds as stimuli, we aimed to study the integrative processing of synchronized visual and auditory information in videos of real-world events in the human brain, using event-related potential (ERP) methods. Experimental results showed that videos of mismatched actions (images) and sounds elicited a larger P400 compared to videos of matched actions (images) and sounds. We believe that the P400 waveform might be related to the cognitive integration processing of mismatched multisensory information in the human brain. The results also indicated that synchronized multisensory information can interfere with each other, influencing the outcome of cognitive integration processing.
Sanseau, Ana; Sampaolesi, Juan; Suzuki, Emilio Rintaro; Lopes, Joao Franca; Borel, Hector
2013-01-01
To assess ocular discomfort upon instillation and patient preference for brinzolamide/timolol relative to dorzolamide/timolol, in patients with open-angle glaucoma or ocular hypertension. This was a multicenter, prospective, patient-masked, randomized, crossover study. On day 0, patients received one drop of brinzolamide/timolol in one eye and one drop of dorzolamide/timolol in the contralateral eye. On day 1, patients were randomly assigned to receive one drop of either brinzolamide/timolol or dorzolamide/timolol in both eyes; on day 2, patients received one drop of the alternate treatment in both eyes. Measures included a patient preference question on day 2 (primary) and mean ocular discomfort scale scores on days 1 and 2 (secondary). Safety assessments included adverse events, visual acuity, and slit-lamp examinations. Of 120 patients who enrolled, 115 completed the study. Of these, 112 patients instilled both medications and expressed a study medication preference on day 2. A significantly greater percentage preferred brinzolamide/timolol to dorzolamide/timolol (67.0% versus 30.4%; P < 0.001). The ocular discomfort (expressed as mean [standard deviation]) with brinzolamide/timolol was significantly lower than with dorzolamide/timolol (day 2: 1.9 [2.3] versus 3.7 [2.8], respectively [P = 0.0003]; both days combined: 2.1 [2.5] versus 3.5 [2.9], respectively [P = 0.00014]). On day 1, five patients receiving brinzolamide/timolol reported five nonserious adverse events (AEs): flu (n = 1), bitter taste (n = 2), and headache (n = 2). Four events, bitter taste (two events) and headache (two events), were considered related to brinzolamide/timolol. Events were mild in intensity, except bitter taste of moderate intensity reported by one patient. No AEs were reported at day 2. All AEs resolved without additional treatment. No clinically relevant changes from baseline were observed in best-corrected visual acuity or slit-lamp examinations of ocular signs. 
Patients had less discomfort with brinzolamide/timolol than with dorzolamide/timolol, and more expressed a preference for brinzolamide/timolol. Both treatments were generally safe and well tolerated.
Oculomotor Reflexes as a Test of Visual Dysfunctions in Cognitively Impaired Observers
2013-09-01
Only fragments of this report survive, drawn from figure captions and body text. The recoverable content: gaze horizontal position is plotted along the y-axis, with a red bar indicating a visual nystagmus event detected by the filter; the experimental conditions were chosen to simulate testing cognitively impaired observers; and a new visual nystagmus stimulus, a drifting equiluminant grating presented in incoherent motion noise, was developed to test visual motion processing.
Effects of cues to event segmentation on subsequent memory.
Gold, David A; Zacks, Jeffrey M; Flores, Shaney
2017-01-01
To remember everyday activity it is important to encode it effectively, and one important component of everyday activity is that it consists of events. People who segment activity into events more adaptively have better subsequent memory for that activity, and event boundaries are remembered better than event middles. The current study asked whether intervening to improve segmentation by cuing effective event boundaries would enhance subsequent memory for events. We selected a set of movies that had previously been segmented by a large sample of observers and edited them to provide visual and auditory cues to encourage segmentation. For each movie, cues were placed either at event boundaries or event middles, or the movie was left unedited. To further support the encoding of our everyday event movies, we also included post-viewing summaries of the movies. We hypothesized that cuing at event boundaries would improve memory, and that this might reduce age differences in memory. For both younger and older adults, we found that cuing event boundaries improved memory, particularly for the boundaries that were cued. Cuing event middles also improved memory, though to a lesser degree; this suggests that imposing a segmental structure on activity may facilitate memory encoding, even when segmentation is not optimal. These results provide evidence that structural cuing can improve memory for everyday events in younger and older adults.
Kujala, Tuomo; Mäkelä, Jakke; Kotilainen, Ilkka; Tokkonen, Timo
2016-02-01
We studied the utility of occlusion distance as a function of task-relevant event density in realistic traffic scenarios with self-controlled speed. The visual occlusion technique is an established method for assessing visual demands of driving. However, occlusion time is not a highly informative measure of environmental task-relevant event density in self-paced driving scenarios because it partials out the effects of changes in driving speed. Self-determined occlusion times and distances of 97 drivers with varying backgrounds were analyzed in driving scenarios simulating real Finnish suburban and highway traffic environments with self-determined vehicle speed. Occlusion distances varied systematically with the expected environmental demands of the manipulated driving scenarios whereas the distributions of occlusion times remained more static across the scenarios. Systematic individual differences in the preferred occlusion distances were observed. More experienced drivers achieved better lane-keeping accuracy than inexperienced drivers with similar occlusion distances; however, driving experience was unexpectedly not a major factor for the preferred occlusion distances. Occlusion distance seems to be an informative measure for assessing task-relevant event density in realistic traffic scenarios with self-controlled speed. Occlusion time measures the visual demand of driving as the task-relevant event rate in time intervals, whereas occlusion distance measures the experienced task-relevant event density in distance intervals. The findings can be utilized in context-aware distraction mitigation systems, human-automated vehicle interaction, road speed prediction and design, as well as in the testing of visual in-vehicle tasks for inappropriate in-vehicle glancing behaviors in any dynamic traffic scenario for which appropriate individual occlusion distances can be defined. © 2015, Human Factors and Ergonomics Society.
``From Earth to the Solar System'' Traveling Exhibit Visits Puerto Rico
NASA Astrophysics Data System (ADS)
Pantoja, C. A.; Lebrón, M. E.; Isidro, G. M.
2013-04-01
Puerto Rico was selected as one of the venues for the exhibit “From Earth to the Solar System” (FETTSS) during the month of October 2011. A set of outreach activities, aligned with the FETTSS themes, was organized for that month. These activities included the following: 1) Main Exhibit, 2) Guided tours for school groups, 3) Planet Festival, 4) Film Festival, and 5) Astronomy Conferences. We describe this experience and in particular the work with a group of undergraduate students from the University of Puerto Rico (UPR) who assisted in the outreach events. Among this group were three blind students. The FETTSS exhibit included a set of tactile and Braille images for the blind and visually impaired. A special exhibit was prepared with additional adapted materials for the visually impaired. This allowed blind visitors to participate and the general public to become more aware of the needs of this population.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gunter, Dan; Lee, Jason; Stoufer, Martin
2003-03-28
The NetLogger Toolkit is designed to monitor, under actual operating conditions, the behavior of all the elements of the application-to-application communication path in order to determine exactly where time is spent within a complex system. Using NetLogger, distributed application components are modified to produce timestamped logs of "interesting" events at all the critical points of the distributed system. Events from each component are correlated, which allows one to characterize the performance of all aspects of the system and network in detail. The NetLogger Toolkit itself consists of four components: an API and library of functions to simplify the generation of application-level event logs, a set of tools for collecting and sorting log files, an event archive system, and a tool for visualization and analysis of the log files. In order to instrument an application to produce event logs, the application developer inserts calls to the NetLogger API at all the critical points in the code, then links the application with the NetLogger library. All the tools in the NetLogger Toolkit share a common log format and assume the existence of accurate and synchronized system clocks. NetLogger messages can be logged using an easy-to-read text-based format based on the IETF-proposed ULM format, or a binary format that can still be used through the same API but that is several times faster and smaller, with performance comparable to or better than binary message formats such as MPI, XDR, SDDF-Binary, and PBIO. The NetLogger binary format is both highly efficient and self-describing, and is thus optimized for the dynamic message construction and parsing of application instrumentation. NetLogger includes an "activation" API that allows NetLogger logging to be turned on, off, or modified by changing an external file. This is useful for activating logging in daemons/services (e.g., the GridFTP server).
The NetLogger reliability API provides the ability to specify backup logging locations and to periodically try to reconnect a broken TCP pipe. A typical use for this is to store data on local disk while the network is down. An event archiver can log one or more incoming NetLogger streams to a local disk file (netlogd) or to a MySQL database (netarchd). We have found exploratory, visual analysis of the log event data to be the most useful means of determining the causes of performance anomalies. The NetLogger visualization tool, nlv, has been developed to provide a flexible and interactive graphical representation of system-level and application-level events.
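The ULM-style text format described above is essentially one line of `KEY=value` pairs per timestamped event. The sketch below mimics that shape; it is an illustrative stand-in, not NetLogger's actual API, and the function and field names here are invented:

```python
import datetime
import io
import socket

def log_event(stream, event, **fields):
    """Write one ULM-style 'KEY=value' event line with an ISO timestamp.

    Mimics the shape of a NetLogger text log line; the helper name and
    extra field names are illustrative, not the toolkit's real API.
    """
    ts = datetime.datetime.now(datetime.timezone.utc).strftime(
        "%Y-%m-%dT%H:%M:%S.%fZ")
    parts = [f"DATE={ts}", f"HOST={socket.gethostname()}", f"NL.EVNT={event}"]
    parts += [f"{k.upper()}={v}" for k, v in sorted(fields.items())]
    stream.write(" ".join(parts) + "\n")

# Log a hypothetical transfer-start event to an in-memory stream
buf = io.StringIO()
log_event(buf, "ftp.transfer.start", file="data.bin", size=1048576)
print(buf.getvalue().strip())
```

Because every line carries its own timestamp and host, lines from many components can be merged and sorted offline, which is what makes the correlation step described in the abstract possible (given synchronized clocks).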
ERIC Educational Resources Information Center
Lindsay, D. Stephen; Allen, Bem P.; Chan, Jason C. K.; Dahl, Leora C.
2004-01-01
We explored the effect of the degree of conceptual similarity between a witnessed event and an extra-event narrative on eyewitness suggestibility. Experiments 1A and 1B replicated Allen and Lindsay's (1998) finding that subjects sometimes intrude details from a narrative description of one event into their reports of a different visual event.…
Listeners' expectation of room acoustical parameters based on visual cues
NASA Astrophysics Data System (ADS)
Valente, Daniel L.
Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audio-visual study, in which participants are instructed to make spatial congruency and quantity judgments in dynamic cross-modal environments. The results of these psychophysical tests suggest the importance of consilient audio-visual presentation to the legibility of an auditory scene. Several studies have looked into audio-visual interaction in room perception in recent years, but these studies rely on static images, speech signals, or photographs alone to represent the visual scene. Building on these studies, the aim is to propose a testing method that uses monochromatic compositing (blue-screen technique) to position a studio recording of a musical performance in a number of virtual acoustical environments and ask subjects to assess these environments. In the first experiment of the study, video footage was taken from five rooms varying in physical size from a small studio to a small performance hall. Participants were asked to perceptually align two distinct acoustical parameters---early-to-late reverberant energy ratio and reverberation time---of two solo musical performances in five contrasting visual environments according to their expectations of how the room should sound given its visual appearance. In the second experiment in the study, video footage shot from four different listening positions within a general-purpose space was coupled with sounds derived from measured binaural impulse responses (IRs). The relationship between the presented image, sound, and virtual receiver position was examined. It was found that many visual cues caused different perceived events of the acoustic environment. This included the visual attributes of the space in which the performance was located as well as the visual attributes of the performer. 
The addressed visual makeup of the performer included: (1) an actual video of the performance, (2) a surrogate image of the performance, for example a loudspeaker's image reproducing the performance, (3) no visual image of the performance (empty room), or (4) a multi-source visual stimulus (actual video of the performance coupled with two images of loudspeakers positioned to the left and right of the performer). For this experiment, perceived auditory events of sound were measured in terms of two subjective spatial metrics: Listener Envelopment (LEV) and Apparent Source Width (ASW). These metrics were hypothesized to be dependent on the visual imagery of the presented performance. Data were also collected as participants matched direct and reverberant sound levels for the presented audio-visual scenes. In the final experiment, participants judged spatial expectations of an ensemble of musicians presented in the five physical spaces from Experiment 1. Supporting data were accumulated in two stages. First, participants were given an audio-visual matching test, in which they were instructed to align the auditory width of a performing ensemble to a varying set of audio and visual cues. In the second stage, a conjoint analysis design paradigm was explored to extrapolate the relative magnitude of the explored audio-visual factors in affecting three assessed response criteria: Congruency (the perceived match-up of the auditory and visual cues in the assessed performance), ASW, and LEV. Results show that both auditory and visual factors affect the collected responses, and that the two sensory modalities coincide in distinct interactions. This study reveals participant resiliency in the presence of forced auditory-visual mismatch: Participants are able to adjust the acoustic component of the cross-modal environment in a statistically similar way despite randomized starting values for the monitored parameters. 
Subjective results of the experiments are presented along with objective measurements for verification.
Lee, Young-Sook; Chung, Wan-Young
2012-01-01
Vision-based abnormal event detection for home healthcare systems can be greatly improved using visual sensor-based techniques able to detect, track, and recognize objects in the scene. However, in moving object detection and tracking processes, moving cast shadows can be misclassified as part of objects or as moving objects. Shadow removal is an essential step in developing video surveillance systems. The primary goal is to design novel computer vision techniques that can extract objects more accurately and discriminate between abnormal and normal activities. To improve the accuracy of object detection and tracking, our proposed shadow removal algorithm is employed. Abnormal event detection based on a visual sensor, using shape-feature variation and 3-D trajectory, is presented to overcome the low fall detection rate. The experimental results showed that the success rate of detecting abnormal events was 97% with a false positive rate of 2%. Our proposed algorithm can distinguish diverse fall activities, such as forward falls, backward falls, and falling aside, from normal activities. PMID:22368486
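The shape-feature intuition behind fall detection can be illustrated with a minimal sketch: a fall tends to flatten a tracked person's bounding box. The thresholds and box sizes below are invented for illustration and are much simpler than the paper's combination of shape-feature variation and 3-D trajectory:

```python
def looks_like_fall(boxes, ratio_drop=0.5, min_frames=3):
    """Flag a fall when the height/width ratio of a tracked person's
    bounding box collapses and stays low for several consecutive frames.

    boxes: list of (width, height) per frame. Thresholds are illustrative.
    """
    low = 0
    for w, h in boxes:
        if h / w < ratio_drop:      # box flatter than the threshold
            low += 1
            if low >= min_frames:   # sustained, not a one-frame glitch
                return True
        else:
            low = 0
    return False

standing = [(40, 110)] * 10                      # upright: tall, narrow box
falling = [(40, 110)] * 4 + [(100, 35)] * 5      # box flattens after a fall
print(looks_like_fall(standing), looks_like_fall(falling))
```

A real system would feed cleaner boxes into such a rule precisely because shadow removal, as the abstract notes, prevents cast shadows from inflating the box and distorting the ratio.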
Student profiling on university co-curriculum activities using data visualization tools
NASA Astrophysics Data System (ADS)
Jamil, Jastini Mohd.; Shaharanee, Izwan Nizal Mohd
2017-11-01
Co-curricular activities play a vital role in the development of a holistic student. The co-curriculum can be described as an extension of the formal learning experiences in a course or academic program. There are many co-curricular activities, such as students' participation in sports, volunteerism, leadership, entrepreneurship, uniformed bodies, student council, and other social events. The number of students involved in co-curricular activities is large, creating an enormous volume of data including their demographic facts, academic performance, and co-curriculum types. The task of discovering and analyzing this information becomes increasingly difficult and hard to comprehend. Data visualization offers a better way of handling large volumes of information. An understanding of these various co-curricular activities and their effect on student performance is essential. Visualizing this information can help related stakeholders become aware of hidden and interesting patterns in the large amounts of student data they hold. The main objective of this study is to provide a clearer understanding of the different trends hidden in the student co-curricular activity data in relation to students' activities and academic performances. Data visualization software was used to help visualize the data extracted from the database.
De Freitas, Julian; Alvarez, George A
2018-05-28
To what extent are people's moral judgments susceptible to subtle factors of which they are unaware? Here we show that we can change people's moral judgments outside of their awareness by subtly biasing perceived causality. Specifically, we used subtle visual manipulations to create visual illusions of causality in morally relevant scenarios, and this systematically changed people's moral judgments. After demonstrating the basic effect using simple displays involving an ambiguous car collision that ends up injuring a person (E1), we show that the effect is sensitive on the millisecond timescale to manipulations of task-irrelevant factors that are known to affect perceived causality, including the duration (E2a) and asynchrony (E2b) of specific task-irrelevant contextual factors in the display. We then conceptually replicate the effect using a different paradigm (E3a), and also show that we can eliminate the effect by interfering with motion processing (E3b). Finally, we show that the effect generalizes across different kinds of moral judgments (E3c). Combined, these studies show that obligatory, abstract inferences made by the visual system influence moral judgments. Copyright © 2018 Elsevier B.V. All rights reserved.
Bring It to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis.
Stein, Manuel; Janetzko, Halldor; Lamprecht, Andreas; Breitkreutz, Thorsten; Zimmermann, Philipp; Goldlucke, Bastian; Schreck, Tobias; Andrienko, Gennady; Grossniklaus, Michael; Keim, Daniel A
2018-01-01
Analysts in professional team sport regularly perform analysis to gain strategic and tactical insights into player and team behavior. Goals of team sport analysis regularly include identifying weaknesses of opposing teams or assessing the performance and improvement potential of a coached team. Current analysis workflows are typically based on the analysis of team videos. Analysts can also rely on techniques from Information Visualization to depict, e.g., player or ball trajectories. However, video analysis is typically a time-consuming process, where the analyst needs to memorize and annotate scenes. In contrast, visualization typically relies on an abstract data model, often using abstract visual mappings, and is no longer directly linked to the observed movement context. We propose a visual analytics system that tightly integrates team sport video recordings with abstract visualization of the underlying trajectory data. We apply appropriate computer vision techniques to extract trajectory data from video input. Furthermore, we apply advanced trajectory and movement analysis techniques to derive relevant team sport analytic measures for region, event, and player analysis in the case of soccer analysis. Our system seamlessly integrates video and visualization modalities, enabling analysts to draw on the advantages of both analysis forms. Several expert studies conducted with team sport analysts indicate the effectiveness of our integrated approach.
Assessing natural hazard risk using images and data
NASA Astrophysics Data System (ADS)
Mccullough, H. L.; Dunbar, P. K.; Varner, J. D.; Mungov, G.
2012-12-01
Photographs and other visual media provide valuable pre- and post-event data for natural hazard assessment. Scientific research, mitigation, and forecasting rely on visual data for risk analysis, inundation mapping, and historic records. Instrumental data only reveal a portion of the whole story; photographs explicitly illustrate the physical and societal impacts of the event. Visual data are rapidly increasing as portable high-resolution cameras and video recorders become more widely available. Incorporating these data into archives ensures a more complete historical account of events. Integrating natural hazards data, such as tsunami, earthquake, and volcanic eruption events, socio-economic information, and tsunami deposits and runups along with images and photographs enhances event comprehension. Global historic databases at NOAA's National Geophysical Data Center (NGDC) consolidate these data, providing the user with easy access to a network of information. NGDC's Natural Hazards Image Database (ngdc.noaa.gov/hazardimages) was recently improved to provide a more efficient and dynamic user interface. It uses the Google Maps API and Keyhole Markup Language (KML) to provide geographic context to the images and events. Descriptive tags, or keywords, have been applied to each image, enabling easier navigation and discovery. In addition, the Natural Hazards Map Viewer (maps.ngdc.noaa.gov/viewers/hazards) provides the ability to search and browse data layers on a Mercator-projection globe with a variety of map backgrounds. This combination of features creates a simple and effective way to enhance our understanding of hazard events and risks using imagery.
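The abstract above notes that the NGDC image database uses KML to give geographic context to hazard photographs. As an illustration only, the following sketch builds a minimal KML placemark for one geotagged image using Python's standard library; the image URL, coordinates, and caption are hypothetical examples, not records from the actual database.

```python
# Minimal sketch: one KML placemark for a geotagged hazard photograph.
# All names (URL, coordinates, caption) are made-up examples.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def image_placemark(name, lon, lat, img_url, description=""):
    """Return a KML string placing one image at a geographic point."""
    ET.register_namespace("", KML_NS)  # serialize with a default namespace
    kml = ET.Element(f"{{{KML_NS}}}kml")
    doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
    pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
    ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
    # Embed the photo in the balloon via HTML in the description element.
    ET.SubElement(pm, f"{{{KML_NS}}}description").text = (
        f'<img src="{img_url}"/><p>{description}</p>'
    )
    pt = ET.SubElement(pm, f"{{{KML_NS}}}Point")
    # KML coordinate order is longitude,latitude[,altitude].
    ET.SubElement(pt, f"{{{KML_NS}}}coordinates").text = f"{lon},{lat},0"
    return ET.tostring(kml, encoding="unicode")

doc = image_placemark("2011 Tohoku runup", 141.9, 38.3,
                      "https://example.org/tohoku.jpg", "Post-tsunami damage")
```

A file of such placemarks can be loaded directly by Google Earth or by the Google Maps API, which is presumably the mechanism the abstract describes.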
Adaptation in human visual cortex as a mechanism for rapid discrimination of aversive stimuli.
Keil, Andreas; Stolarova, Margarita; Moratti, Stephan; Ray, William J
2007-06-01
The ability to react rapidly and efficiently to adverse stimuli is crucial for survival. Neuroscience and behavioral studies have converged to show that visual information associated with aversive content is processed quickly and accurately and is associated with rapid amplification of the neural responses. In particular, unpleasant visual information has repeatedly been shown to evoke increased cortical activity during early visual processing between 60 and 120 ms following the onset of a stimulus. However, the nature of these early responses is not well understood. Using neutral versus unpleasant colored pictures, the current report examines the time course of short-term changes in the human visual cortex when a subject is repeatedly exposed to simple grating stimuli in a classical conditioning paradigm. We analyzed changes in amplitude and synchrony of large-scale oscillatory activity across 2 days of testing, which included baseline measurements, 2 conditioning sessions, and a final extinction session. We found a gradual increase in amplitude and synchrony of very early cortical oscillations in the 20-35 Hz range across conditioning sessions, specifically for conditioned stimuli predicting aversive visual events. This increase for conditioned stimuli affected stimulus-locked cortical oscillations at a latency of around 60-90 ms and disappeared during extinction. Our findings suggest that reorganization of neural connectivity on the level of the visual cortex acts to optimize early perception of specific features indicative of emotional relevance.
Tools for educational access to seismic data
NASA Astrophysics Data System (ADS)
Taber, J. J.; Welti, R.; Bravo, T. K.; Hubenthal, M.; Frechette, K.
2017-12-01
Student engagement can be increased both by providing easy access to real data, and by addressing newsworthy events such as recent large earthquakes. IRIS EPO has a suite of access and visualization tools that can be used for such engagement, including a set of three tools that allow students to explore global seismicity, use seismic data to determine Earth structure, and view and analyze near-real-time ground motion data in the classroom. These tools are linked to online lessons that are designed for use in middle school through introductory undergraduate classes. The IRIS Earthquake Browser allows discovery of key aspects of plate tectonics, earthquake locations (in pseudo 3D), and seismicity rates and patterns. IEB quickly displays up to 20,000 seismic events over up to 30 years, making it one of the most responsive, practical ways to visualize historical seismicity in a browser. Maps are bookmarkable and preserve state, meaning IEB map links can be shared or worked into a lesson plan. The Global Seismogram Plotter automatically creates visually clear seismic record sections from selected large earthquakes that are tablet-friendly and can also be printed for use in a classroom without computers. The plots are designed to be appropriate for use with no parameters to set, but users can also modify the plots, such as including a recording station near a chosen location. A guided exercise is provided where students use the record section to discover the diameter of Earth's outer core. Students can pick and compare phase arrival times onscreen, which is key to performing the exercise. A companion station map shows station locations and further information and is linked to the record section. jAmaSeis displays seismic data in real-time from a local instrument and/or remote seismic stations that stream data using standard seismic data protocols, and can be used in the classroom or as a public display. 
Users can filter data, fit a seismogram to travel time curves, triangulate event epicenters on a globe, estimate event magnitudes, and generate images showing seismograms and corresponding calculations. All three tools access seismic databases curated by IRIS Data Services. In addition, jAmaSeis can access data from non-IRIS sources.
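The epicenter triangulation exercise mentioned above rests on the classic S-minus-P method: with assumed P- and S-wave speeds, the arrival-time difference at a station yields an epicentral distance, and circles of those radii from three stations intersect at the epicenter. The sketch below shows only the distance step, with textbook crustal velocities as assumptions; these numbers are not parameters of the jAmaSeis software itself.

```python
# Sketch of the S-minus-P distance estimate used in classroom triangulation.
# vp and vs are assumed textbook crustal wave speeds, not tool defaults.
vp, vs = 6.0, 3.5            # assumed P and S wave speeds (km/s)

def distance_from_sp(delta_t_s):
    """Epicentral distance (km) from an S-P arrival-time difference (s).

    Derivation: d/vs - d/vp = delta_t  =>  d = delta_t * vp*vs / (vp - vs).
    """
    return delta_t_s * (vp * vs) / (vp - vs)

d = distance_from_sp(30.0)   # a 30 s S-P gap -> 252 km at these speeds
```

Repeating this at three or more stations and intersecting the resulting circles gives the epicenter, which is the calculation the software automates.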
Analysis and visualization of single-trial event-related potentials
NASA Technical Reports Server (NTRS)
Jung, T. P.; Makeig, S.; Westerfield, M.; Townsend, J.; Courchesne, E.; Sejnowski, T. J.
2001-01-01
In this study, a linear decomposition technique, independent component analysis (ICA), is applied to single-trial multichannel EEG data from event-related potential (ERP) experiments. Spatial filters derived by ICA blindly separate the input data into a sum of temporally independent and spatially fixed components arising from distinct or overlapping brain or extra-brain sources. Both the data and their decomposition are displayed using a new visualization tool, the "ERP image," that can clearly characterize single-trial variations in the amplitudes and latencies of evoked responses, particularly when sorted by a relevant behavioral or physiological variable. These tools were used to analyze data from a visual selective attention experiment on 28 control subjects plus 22 neurological patients whose EEG records were heavily contaminated with blink and other eye-movement artifacts. Results show that ICA can separate artifactual, stimulus-locked, response-locked, and non-event-related background EEG activities into separate components, a taxonomy not obtained from conventional signal averaging approaches. This method allows: (1) removal of pervasive artifacts of all types from single-trial EEG records, (2) identification and segregation of stimulus- and response-locked EEG components, (3) examination of differences in single-trial responses, and (4) separation of temporally distinct but spatially overlapping EEG oscillatory activities with distinct relationships to task events. The proposed methods also allow the interaction between ERPs and the ongoing EEG to be investigated directly. We studied the between-subject component stability of ICA decomposition of single-trial EEG epochs by clustering components with similar scalp maps and activation power spectra. Components accounting for blinks, eye movements, temporal muscle activity, event-related potentials, and event-modulated alpha activities were largely replicated across subjects. 
Applying ICA and ERP image visualization to the analysis of sets of single trials from event-related EEG (or MEG) experiments can increase the information available from ERP (or ERF) data. Copyright 2001 Wiley-Liss, Inc.
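The core of the "ERP image" visualization described above is stacking single-trial epochs into a two-dimensional image after sorting the trials by a behavioral or physiological variable. The sketch below illustrates that sorting step on synthetic data; a real analysis would first remove artifacts (e.g., via ICA, as in the study) before constructing the image.

```python
# Illustrative sketch of the "ERP image": sort single trials by a behavioral
# variable (here, reaction time) and stack them as rows of an image.
# Synthetic data only; not the authors' pipeline.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 40, 200
rts = rng.uniform(0.25, 0.60, n_trials)      # hypothetical reaction times (s)
t = np.linspace(0, 0.8, n_samples)           # epoch time axis (s)

# Simulated trials: a response-locked peak near each trial's RT plus noise.
trials = np.exp(-((t[None, :] - rts[:, None]) ** 2) / (2 * 0.03 ** 2))
trials += 0.3 * rng.standard_normal((n_trials, n_samples))

order = np.argsort(rts)                      # sort trials by reaction time
erp_image = trials[order]                    # rows: trials, columns: time
erp = trials.mean(axis=0)                    # conventional average ERP
```

Plotted as a heat map, `erp_image` makes the response-locked component visible as a diagonal stripe that the conventional average `erp` smears out, which is the single-trial variability the tool is designed to expose.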
Visual Journaling: Engaging Adolescents in Sketchbook Activities
ERIC Educational Resources Information Center
Cummings, Karen L.
2011-01-01
A wonderful way to engage high-school students in sketchbook activities is to have them create journals that combine images with words to convey emotions, ideas, and understandings. Visual journaling is a creative way for them to share their experiences and personal responses to life's events in visual and written form. Through selecting and…
ERIC Educational Resources Information Center
Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Friederici, Angela D.
2016-01-01
Successful communication in everyday life crucially involves the processing of auditory and visual components of speech. Viewing our interlocutor and processing visual components of speech facilitates speech processing by triggering auditory processing. Auditory phoneme processing, analyzed by event-related brain potentials (ERP), has been shown…
A Cortical Network for the Encoding of Object Change
Hindy, Nicholas C.; Solomon, Sarah H.; Altmann, Gerry T.M.; Thompson-Schill, Sharon L.
2015-01-01
Understanding events often requires recognizing unique stimuli as alternative, mutually exclusive states of the same persisting object. Using fMRI, we examined the neural mechanisms underlying the representation of object states and object-state changes. We found that subjective ratings of visual dissimilarity between a depicted object and an unseen alternative state of that object predicted the corresponding multivoxel pattern dissimilarity in early visual cortex during an imagery task, while late visual cortex patterns tracked dissimilarity among distinct objects. Early visual cortex pattern dissimilarity for object states in turn predicted the level of activation in an area of left posterior ventrolateral prefrontal cortex (pVLPFC) most responsive to conflict in a separate Stroop color-word interference task, and an area of left ventral posterior parietal cortex (vPPC) implicated in the relational binding of semantic features. We suggest that when visualizing object states, representational content instantiated across early and late visual cortex is modulated by processes in left pVLPFC and left vPPC that support selection and binding, and ultimately event comprehension. PMID:24127425
Role of orientation reference selection in motion sickness
NASA Technical Reports Server (NTRS)
Peterka, Robert J.; Black, F. Owen
1988-01-01
Previous experiments with moving platform posturography have shown that different people have varying abilities to resolve conflicts among vestibular, visual, and proprioceptive sensory signals used to control upright posture. In particular, there is one class of subjects with a vestibular disorder known as benign paroxysmal positional vertigo (BPPV) who often are particularly sensitive to inaccurate visual information. That is, they will use visual sensory information for the control of their posture even when that visual information is inaccurate and is in conflict with accurate proprioceptive and vestibular sensory signals. BPPV has been associated with disorders of both posterior semicircular canal function and possibly otolith function. The present proposal hopes to take advantage of the similarities between the space motion sickness problem and the sensory orientation reference selection problems associated with the BPPV syndrome. These similarities include both etiology related to abnormal vertical canal-otolith function, and motion sickness initiating events provoked by pitch and roll head movements. The objectives of this proposal are to explore and quantify the orientation reference selection abilities of subjects and the relation of this selection to motion sickness in humans.
Portella, Claudio; Machado, Sergio; Arias-Carrión, Oscar; Sack, Alexander T.; Silva, Julio Guilherme; Orsini, Marco; Leite, Marco Antonio Araujo; Silva, Adriana Cardoso; Nardi, Antonio E.; Cagy, Mauricio; Piedade, Roberto; Ribeiro, Pedro
2012-01-01
The brain is capable of elaborating and executing different stages of information processing. However, exactly how these stages are processed in the brain remains largely unknown. This study aimed to analyze the possible correlation between early and late stages of information processing by assessing the latency to, and amplitude of, early and late event-related potential (ERP) components, including P200, N200, premotor potential (PMP) and P300, in healthy participants in the context of a visual oddball paradigm. We found a moderate positive correlation among the latency of P200 (electrode O2), N200 (electrode O2), PMP (electrode C3), P300 (electrode PZ) and the reaction time (RT). In addition, moderate negative correlation between the amplitude of P200 and the latencies of N200 (electrode O2), PMP (electrode C3), P300 (electrode PZ) was found. Therefore, we propose that if the secondary processing of visual input (P200 latency) occurs faster, the following will also happen sooner: discrimination and classification process of this input (N200 latency), motor response processing (PMP latency), reorganization of attention and working memory update (P300 latency), and RT. N200, PMP, and P300 latencies are also anticipated when higher activation level of occipital areas involved in the secondary processing of visual input rise (P200 amplitude). PMID:23355929
Age-related decline in bottom-up processing and selective attention in the very old.
Zhuravleva, Tatyana Y; Alperin, Brittany R; Haring, Anna E; Rentz, Dorene M; Holcomb, Philip J; Daffner, Kirk R
2014-06-01
Previous research demonstrating age-related deficits in selective attention has not included old-old adults, an increasingly important group to study. The current investigation compared event-related potentials in 15 young-old (65-79 years old) and 23 old-old (80-99 years old) subjects during a color-selective attention task. Subjects responded to target letters in a specified color (Attend) while ignoring letters in a different color (Ignore) under both low and high loads. There were no group differences in visual acuity, accuracy, reaction time, or latency of early event-related potential components. The old-old group showed a disruption in bottom-up processing, indexed by a substantially diminished posterior N1 (smaller amplitude). They also demonstrated markedly decreased modulation of bottom-up processing based on selected visual features, indexed by the posterior selection negativity (SN), with similar attenuation under both loads. In contrast, there were no group differences in frontally mediated attentional selection, measured by the anterior selection positivity (SP). There was a robust inverse relationship between the size of the SN and SP (the smaller the SN, the larger the SP), which may represent an anteriorly supported compensatory mechanism. In the absence of a decline in top-down modulation indexed by the SP, the diminished SN may reflect age-related degradation of early bottom-up visual processing in old-old adults.
Beukelman, David R; Hux, Karen; Dietz, Aimee; McKelvey, Miechelle; Weissling, Kristy
2015-01-01
Research about the effectiveness of communicative supports and advances in photographic technology has prompted changes in the way speech-language pathologists design and implement interventions for people with aphasia. The purpose of this paper is to describe the use of photographic images as a basis for developing communication supports for people with chronic aphasia secondary to sudden-onset events due to cerebrovascular accidents (strokes). Topics include the evolution of AAC-based supports as they relate to people with aphasia, the development and key features of visual scene displays (VSDs), and future directions concerning the incorporation of photographs into communication supports for people with chronic and severe aphasia.
Proof of Concept for a Simple Smartphone Sky Monitor
NASA Astrophysics Data System (ADS)
Kantamneni, Abhilash; Nemiroff, R. J.; Brisbois, C.
2013-01-01
We present a novel approach of obtaining a cloud and bright sky monitor by using a standard smartphone with a downloadable app. The addition of an inexpensive fisheye lens can extend the angular range to the entire sky visible above the device. A preliminary proof of concept image shows an optical limit of about visual magnitude 5 for a 70-second exposure. Support science objectives include cloud monitoring in a manner similar to the more expensive cloud monitors in use at most major astronomical observatories, making expensive observing time at these observatories more efficient. Primary science objectives include bright meteor tracking, bright comet tracking, and monitoring the variability of bright stars. Citizen science objectives include crowd sourcing of many networked sky monitoring smartphones typically in broader support of many of the primary science goals. The deployment of a citizen smartphone array in an active science mode could leverage the sky monitoring data infrastructure to track other non-visual science opportunities, including monitoring the Earth's magnetic field for the effects of solar flares and exhaustive surface coverage for strong seismic events.
Alpha-Band Rhythms in Visual Task Performance: Phase-Locking by Rhythmic Sensory Stimulation
de Graaf, Tom A.; Gross, Joachim; Paterson, Gavin; Rusch, Tessa; Sack, Alexander T.; Thut, Gregor
2013-01-01
Oscillations are an important aspect of neuronal activity. Interestingly, oscillatory patterns are also observed in behaviour, such as in visual performance measures after the presentation of a brief sensory event in the visual or another modality. These oscillations in visual performance cycle at the typical frequencies of brain rhythms, suggesting that perception may be closely linked to brain oscillations. We here investigated this link for a prominent rhythm of the visual system (the alpha-rhythm, 8–12 Hz) by applying rhythmic visual stimulation at alpha-frequency (10.6 Hz), known to lead to a resonance response in visual areas, and testing its effects on subsequent visual target discrimination. Our data show that rhythmic visual stimulation at 10.6 Hz: 1) has specific behavioral consequences, relative to stimulation at control frequencies (3.9 Hz, 7.1 Hz, 14.2 Hz), and 2) leads to alpha-band oscillations in visual performance measures, that 3) correlate in precise frequency across individuals with resting alpha-rhythms recorded over parieto-occipital areas. The most parsimonious explanation for these three findings is entrainment (phase-locking) of ongoing perceptually relevant alpha-band brain oscillations by rhythmic sensory events. These findings are in line with occipital alpha-oscillations underlying periodicity in visual performance, and suggest that rhythmic stimulation at frequencies of intrinsic brain-rhythms can be used to reveal influences of these rhythms on task performance to study their functional roles. PMID:23555873
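The study above correlates behavioral oscillation frequency with each individual's resting alpha rhythm recorded over parieto-occipital areas. A standard way to obtain that individual alpha frequency is to locate the 8-12 Hz peak in the EEG power spectrum. The sketch below does this on a synthetic signal with a known 10.6 Hz component; it is a generic estimate, not the authors' analysis code.

```python
# Sketch: estimating an individual alpha frequency as the 8-12 Hz peak of a
# power spectrum. Synthetic EEG with a known 10.6 Hz component plus noise.
import numpy as np

fs = 250.0                                  # sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)                # 20 s of data -> 0.05 Hz resolution
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 10.6 * t) + 0.5 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(eeg)) ** 2    # one-sided power spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

band = (freqs >= 8) & (freqs <= 12)         # restrict to the alpha band
alpha_peak = freqs[band][np.argmax(spectrum[band])]
```

With the peak frequency in hand, rhythmic stimulation can be set at or near it, which is the logic behind matching the 10.6 Hz stimulation rate to intrinsic alpha in the entrainment account.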
Tang, Xiaoyu; Li, Chunlin; Li, Qi; Gao, Yulin; Yang, Weiping; Yang, Jingjing; Ishikawa, Soushirou; Wu, Jinglong
2013-10-11
Utilizing the high temporal resolution of event-related potentials (ERPs), we examined how visual spatial or temporal cues modulated the auditory stimulus processing. The visual spatial cue (VSC) induces orienting of attention to spatial locations; the visual temporal cue (VTC) induces orienting of attention to temporal intervals. Participants were instructed to respond to auditory targets. Behavioral responses to auditory stimuli following VSC were faster and more accurate than those following VTC. VSC and VTC had the same effect on the auditory N1 (150-170 ms after stimulus onset). The mean amplitude of the auditory P1 (90-110 ms) in VSC condition was larger than that in VTC condition, and the mean amplitude of late positivity (300-420 ms) in VTC condition was larger than that in VSC condition. These findings suggest that modulation of auditory stimulus processing by visually induced spatial or temporal orienting of attention were different, but partially overlapping. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Sata, Yoshimi; Inagaki, Masumi; Shirane, Seiko; Kaga, Makiko
2002-07-01
In order to evaluate developmental changes in visual perception, P300 event-related potentials (ERPs) in a visual oddball task were recorded in 34 healthy volunteers ranging from 7 to 37 years of age. The latency and amplitude of the visual P300 in response to Japanese ideogram stimuli (a pair of familiar or unfamiliar Kanji characters) and a pair of meaningless complicated figures were measured. The visual P300 was dominant in the parietal area in almost all subjects. There was a significant difference in P300 latency among the three tasks. Reaction times to both kinds of Kanji tasks were significantly shorter than those to the complicated-figure task. P300 latencies to the familiar Kanji, unfamiliar Kanji, and figure stimuli decreased until 25.8, 26.9, and 29.4 years of age, respectively, and regression analysis revealed that a positive quadratic function could be fitted to the data. Around 9 years of age, the P300 latency/age slope was largest in the unfamiliar Kanji task. These findings suggest that visual P300 development depends on both the complexity of the tasks and the specificity of the stimuli, which might reflect variability in visual information processing.
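The regression analysis described above fits a positive quadratic to P300 latency as a function of age; the reported "decrease until ~26-29 years" is then the age at the parabola's vertex. The sketch below reproduces that arithmetic on synthetic latencies constructed with a minimum at 27 years; the data values are illustrative, not the study's measurements.

```python
# Sketch: quadratic fit of P300 latency vs. age, and the vertex age at which
# latency is minimal. Synthetic latencies with a known minimum at 27 y.
import numpy as np

ages = np.array([7, 9, 12, 15, 18, 22, 26, 30, 34, 37], dtype=float)
latency = 0.4 * (ages - 27.0) ** 2 + 340.0       # hypothetical latencies (ms)

c2, c1, c0 = np.polyfit(ages, latency, deg=2)    # latency ~ c2*a^2 + c1*a + c0
age_of_minimum = -c1 / (2 * c2)                  # vertex of the parabola
```

A positive fitted `c2` confirms the U-shape, and `age_of_minimum` recovers the turning point the abstract reports per task (25.8, 26.9, and 29.4 years).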
Getting the Gist of Events: Recognition of Two-Participant Actions from Brief Displays
Hafri, Alon; Papafragou, Anna; Trueswell, John C.
2013-01-01
Unlike rapid scene and object recognition from brief displays, little is known about recognition of event categories and event roles from minimal visual information. In three experiments, we displayed naturalistic photographs of a wide range of two-participant event scenes for 37 ms and 73 ms followed by a mask, and found that event categories (the event gist, e.g., ‘kicking’, ‘pushing’, etc.) and event roles (i.e., Agent and Patient) can be recognized rapidly, even with various actor pairs and backgrounds. Norming ratings from a subsequent experiment revealed that certain physical features (e.g., outstretched extremities) that correlate with Agent-hood could have contributed to rapid role recognition. In a final experiment, using identical twin actors, we then varied these features in two sets of stimuli, in which Patients had Agent-like features or not. Subjects recognized the roles of event participants less accurately when Patients possessed Agent-like features, with this difference being eliminated with two-second durations. Thus, given minimal visual input, typical Agent-like physical features are used in role recognition but, with sufficient input from multiple fixations, people categorically determine the relationship between event participants. PMID:22984951
Jolij, Jacob; Scholte, H Steven; van Gaal, Simon; Hodgson, Timothy L; Lamme, Victor A F
2011-12-01
Humans largely guide their behavior by their visual representation of the world. Recent studies have shown that visual information can trigger behavior within 150 msec, suggesting that visually guided responses to external events, in fact, precede conscious awareness of those events. However, is such a view correct? By using a texture discrimination task, we show that the brain relies on long-latency visual processing in order to guide perceptual decisions. Decreasing stimulus saliency leads to selective changes in long-latency visually evoked potential components reflecting scene segmentation. These latency changes are accompanied by almost equal changes in simple RTs and points of subjective simultaneity. Furthermore, we find a strong correlation between individual RTs and the latencies of scene segmentation related components in the visually evoked potentials, showing that the processes underlying these late brain potentials are critical in triggering a response. However, using the same texture stimuli in an antisaccade task, we found that reflexive, but erroneous, prosaccades, but not antisaccades, can be triggered by earlier visual processes. In other words: The brain can act quickly, but decides late. Differences between our study and earlier findings suggesting that action precedes conscious awareness can be explained by assuming that task demands determine whether a fast and unconscious, or a slower and conscious, representation is used to initiate a visually guided response.
Construction and updating of event models in auditory event processing.
Huff, Markus; Maurer, Annika E; Brich, Irina; Pagenkopf, Anne; Wickelmaier, Florian; Papenmeier, Frank
2018-02-01
Humans segment the continuous stream of sensory information into distinct events at points of change. Between 2 events, humans perceive an event boundary. Present theories propose changes in the sensory information to trigger updating processes of the present event model. Increased encoding effort finally leads to a memory benefit at event boundaries. Evidence from reading time studies (increased reading times with increasing amount of change) suggests that updating of event models is incremental. We present results from 5 experiments that studied event processing (including memory formation processes and reading times) using an audio drama as well as a transcript thereof as stimulus material. Experiments 1a and 1b replicated the event boundary advantage effect for memory. In contrast to recent evidence from studies using visual stimulus material, Experiments 2a and 2b found no support for incremental updating with normally sighted and blind participants for recognition memory. In Experiment 3, we replicated Experiment 2a using a written transcript of the audio drama as stimulus material, allowing us to disentangle encoding and retrieval processes. Our results indicate incremental updating processes at encoding (as measured with reading times). At the same time, we again found recognition performance to be unaffected by the amount of change. We discuss these findings in light of current event cognition theories. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Audio–visual interactions for motion perception in depth modulate activity in visual area V3A
Ogawa, Akitoshi; Macaluso, Emiliano
2013-01-01
Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414
The effects of alcohol intoxication on attention and memory for visual scenes.
Harvey, Alistair J; Kneller, Wendy; Campbell, Alison C
2013-01-01
This study tests the claim that alcohol intoxication narrows the focus of visual attention on to the more salient features of a visual scene. A group of alcohol intoxicated and sober participants had their eye movements recorded as they encoded a photographic image featuring a central event of either high or low salience. All participants then recalled the details of the image the following day when sober. We sought to determine whether the alcohol group would pay less attention to the peripheral features of the encoded scene than their sober counterparts, whether this effect of attentional narrowing was stronger for the high-salience event than for the low-salience event, and whether it would lead to a corresponding deficit in peripheral recall. Alcohol was found to narrow the focus of foveal attention to the central features of both images but did not facilitate recall from this region. It also reduced the overall amount of information accurately recalled from each scene. These findings demonstrate that the concept of alcohol myopia originally posited to explain the social consequences of intoxication (Steele & Josephs, 1990) may be extended to explain the relative neglect of peripheral information during the processing of visual scenes.
Sounds can boost the awareness of visual events through attention without cross-modal integration.
Pápai, Márta Szabina; Soto-Faraco, Salvador
2017-01-31
Cross-modal interactions can lead to enhancement of visual perception, even for visual events below awareness. However, the underlying mechanism is still unclear. Can purely bottom-up cross-modal integration break through the threshold of awareness? We used a binocular rivalry paradigm to measure perceptual switches after brief flashes or sounds which, sometimes, co-occurred. When flashes at the suppressed eye coincided with sounds, perceptual switches occurred the earliest. Yet, contrary to the hypothesis of cross-modal integration, this facilitation never surpassed the assumption of probability summation of independent sensory signals. A follow-up experiment replicated the same pattern of results using silent gaps embedded in continuous noise, instead of sounds. This manipulation should weaken putative sound-flash integration while keeping the gaps salient as bottom-up attention cues. Additional results showed that spatial congruency between flashes and sounds did not determine the effectiveness of cross-modal facilitation, which was again not better than probability summation. Thus, the present findings fail to fully support the hypothesis of bottom-up cross-modal integration, above and beyond the independent contribution of two transient signals, as an account for cross-modal enhancement of visual events below level of awareness.
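The probability-summation benchmark invoked above is simple to state: if sound and flash act as independent signals, the chance that at least one of them triggers a switch is p = 1 - (1 - p_a)(1 - p_v), and only facilitation beyond that value would indicate genuine integration. The sketch below computes the benchmark with made-up numbers chosen to mirror the study's null result.

```python
# Sketch of the probability-summation test for cross-modal integration.
# All probabilities are hypothetical illustration values.
p_auditory = 0.30   # switch probability for sound alone (made up)
p_visual = 0.40     # switch probability for flash alone (made up)

# Independent-signals prediction: P(at least one signal triggers a switch).
p_sum = 1 - (1 - p_auditory) * (1 - p_visual)

p_observed = 0.55   # hypothetical audio-visual condition
exceeds_summation = p_observed > p_sum   # only then: evidence for integration
```

Here `p_sum` is 0.58 and the observed 0.55 falls below it, so `exceeds_summation` is False, the same pattern of facilitation-without-integration the abstract reports.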
Event-related potentials during visual selective attention in children of alcoholics.
van der Stelt, O; Gunning, W B; Snel, J; Kok, A
1998-12-01
Event-related potentials were recorded from 7- to 18-year-old children of alcoholics (COAs, n = 50) and age- and sex-matched control children (n = 50) while they performed a visual selective attention task. The task was to attend selectively to stimuli with a specified color (red or blue) in an attempt to detect the occurrence of target stimuli. COAs manifested a smaller P3b amplitude to attended-target stimuli over the parietal and occipital scalp than did the controls. A more specific analysis indicated that both the attentional relevance and the target properties of the eliciting stimulus determined the observed P3b amplitude differences between COAs and controls. In contrast, no significant group differences were observed in attention-related earlier occurring event-related potential components, referred to as frontal selection positivity, selection negativity, and N2b. These results represent neurophysiological evidence that COAs suffer from deficits at a late (semantic) level of visual selective information processing that are unlikely to be a consequence of deficits at earlier (sensory) levels of selective processing. The findings support the notion that a reduced visual P3b amplitude in COAs represents a high-level processing dysfunction indicating their increased vulnerability to alcoholism.
Decoding the future from past experience: learning shapes predictions in early visual cortex.
Luft, Caroline D B; Meeson, Alan; Welchman, Andrew E; Kourtzi, Zoe
2015-05-01
Learning the structure of the environment is critical for interpreting the current scene and predicting upcoming events. However, the brain mechanisms that support our ability to translate knowledge about scene statistics to sensory predictions remain largely unknown. Here we provide evidence that learning of temporal regularities shapes representations in early visual cortex that relate to our ability to predict sensory events. We tested the participants' ability to predict the orientation of a test stimulus after exposure to sequences of leftward- or rightward-oriented gratings. Using fMRI decoding, we identified brain patterns related to the observers' visual predictions rather than stimulus-driven activity. Decoding of predicted orientations following structured sequences was enhanced after training, while decoding of cued orientations following exposure to random sequences did not change. These predictive representations appear to be driven by the same large-scale neural populations that encode actual stimulus orientation and to be specific to the learned sequence structure. Thus our findings provide evidence that learning temporal structures supports our ability to predict future events by reactivating selective sensory representations as early as in primary visual cortex. Copyright © 2015 the American Physiological Society.
Carbon Fiber Strand Tensile Failure Dynamic Event Characterization
NASA Technical Reports Server (NTRS)
Johnson, Kenneth L.; Reeder, James
2016-01-01
Researchers have few, if any, clear, visual, and detailed images of carbon fiber strand failures under tension that are useful for determining mechanisms, sequences of events, and different types of failure modes. This makes discussion of the physics of failure difficult. It was also desired to find out whether the test article-to-test rig interface (grip) played a part in some failures. These failures have nothing to do with stress rupture failure, thus representing a source of waste for the larger 13-00912 investigation into that specific failure type. Being able to identify or mitigate any competing failure modes would improve the value of the 13-00912 test data. The beginnings of a solution to these problems lie in obtaining images of strand failures useful for understanding the physics of failure and the events leading up to failure. Necessary steps include identifying imaging techniques that yield useful data and using those techniques to home in on where in a strand, and when in the sequence of events, imaging data should be obtained.
Amphibian mortality events and ranavirus outbreaks in the Greater Yellowstone Ecosystem
Patla, Debra A.; St-Hilaire, Sophia; Rayburn, Andrew P.; Hossack, Blake R.; Peterson, Charles R.
2016-01-01
Mortality events in wild amphibians go largely undocumented, and where events are detected, the numbers of dead amphibians observed are probably a small fraction of actual mortality (Green and Sherman 2001; Skerratt et al. 2007). Incidental observations from field surveys can, despite limitations, provide valuable information on the presence, host species, and spatial distribution of diseases. Here we summarize amphibian mortality events and diagnoses recorded from 2000 to 2014 in three management areas: Yellowstone National Park; Grand Teton National Park (including John D. Rockefeller, Jr. Memorial Parkway); and the National Elk Refuge, which together span a large portion of protected areas within the Greater Yellowstone Ecosystem (GYE; Noss et al. 2002). Our combined amphibian monitoring projects (e.g., Gould et al. 2012) surveyed an average of 240 wetlands per year over the 15 years. Field crews recorded amphibian mortalities during visual encounter and dip-netting surveys and collected moribund and dead specimens for diagnostic examinations. Amphibian and fish research projects during these years contributed additional mortality observations, specimens, and diagnoses.
Integrated visualization of remote sensing data using Google Earth
NASA Astrophysics Data System (ADS)
Castella, M.; Rigo, T.; Argemi, O.; Bech, J.; Pineda, N.; Vilaclara, E.
2009-09-01
The need for advanced visualization tools for meteorological data has led in recent years to the development of sophisticated software packages, either by observing-system manufacturers or by third-party solution providers. For example, manufacturers of remote sensing systems such as weather radars or lightning detection systems include zoom, product selection, and archive access capabilities, as well as quantitative tools for data analysis, as standard features that are highly appreciated in weather surveillance or post-event case study analysis. However, the fact that each manufacturer has its own visualization system and data formats hampers the usability and integration of different data sources. In this context, Google Earth (GE) offers the possibility of combining several types of graphical information in a single visualization system that can be easily accessed by users. The Meteorological Service of Catalonia (SMC) has been evaluating the use of GE as a visualization platform for surveillance tasks during adverse weather events. First experiences concern the real-time integration of remote sensing data: radar, lightning, and satellite. The tool shows an animation of the combined products over the last hour, giving a good picture of the meteorological situation. One of the main advantages of this product is that it is easy to install on many computers and does not have high computational requirements. Moreover, GE helps identify the areas most affected by heavy rain or other weather phenomena. Conversely, the main disadvantage is that the product offers only qualitative information: quantitative data are available only through the graphical display (i.e., through color scales not associated with physical values that users can easily access). The procedure developed to run in real time is divided into three parts.
First, a crontab file launches different applications depending on the data type (satellite, radar, or lightning) to be processed. The launch interval differs by data type, ranging from 5 minutes (satellite and lightning) to 6 minutes (radar). The second part is the use of IDL and ENVI programs, which search each archive file for the images from the last hour. In the case of lightning data, the files are generated by the procedure, while for the other types the procedure searches for existing imagery. Finally, the procedure generates the metadata information required by GE as kml files and sends them to the internal server. At the same time, on the local computer where GE is running, kml files update the displayed information by referencing those on the server. Another application that has been evaluated is the analysis of past events. In this sense, further work is devoted to developing access procedures to archived data via cgi scripts in order to retrieve and convert the information into a format suitable for GE. The presentation includes examples from the evaluation of the use of GE and a brief comparison with other visualization systems available within the SMC.
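The final step of the pipeline, generating KML for Google Earth, can be illustrated with a minimal sketch. All names, paths, schedules, and coordinates below are hypothetical; the SMC's actual IDL/ENVI tooling is not reproduced. A crontab entry such as `*/6 * * * * /opt/smc/make_radar_kml.py` would regenerate the file on the radar cadence, and a NetworkLink on the GE client would refresh it:

```python
# Minimal sketch of a KML document that overlays one georeferenced
# product image (e.g., a radar composite) in Google Earth. Coordinates
# and file names are invented for illustration.

def ground_overlay_kml(name: str, image_href: str,
                       north: float, south: float,
                       east: float, west: float) -> str:
    """Return a KML document overlaying one georeferenced image."""
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <GroundOverlay>
    <name>{name}</name>
    <Icon><href>{image_href}</href></Icon>
    <LatLonBox>
      <north>{north}</north><south>{south}</south>
      <east>{east}</east><west>{west}</west>
    </LatLonBox>
  </GroundOverlay>
</kml>"""

# Hypothetical radar product covering roughly the Catalonia area.
kml = ground_overlay_kml("radar 12:30Z", "radar_1230.png",
                         43.0, 40.0, 3.5, 0.0)
print(kml.splitlines()[0])  # <?xml version="1.0" encoding="UTF-8"?>
```

Writing this file to a web server and pointing a GE NetworkLink at it reproduces the push-and-refresh pattern the abstract describes.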
Visual field progression in glaucoma: total versus pattern deviation analyses.
Artes, Paul H; Nicolela, Marcelo T; LeBlanc, Raymond P; Chauhan, Balwantray C
2005-12-01
To compare visual field progression with total and pattern deviation analyses in a prospective longitudinal study of patients with glaucoma and healthy control subjects. A group of 101 patients with glaucoma (168 eyes) with early to moderately advanced visual field loss at baseline (average mean deviation [MD], -3.9 dB) and no clinical evidence of media opacity were selected from a prospective longitudinal study on visual field progression in glaucoma. Patients were examined with static automated perimetry at 6-month intervals for a median follow-up of 9 years. At each test location, change was established with event and trend analyses of total and pattern deviation. The event analyses compared each follow-up test to a baseline obtained from averaging the first two tests, and visual field progression was defined as deterioration beyond the 5th percentile of test-retest variability at three test locations, observed on three consecutive tests. The trend analyses were based on point-wise linear regression, and visual field progression was defined as statistically significant deterioration (P < 5%) worse than -1 dB/year at three locations, confirmed by independently omitting the last and the penultimate observation. The incidence and the time-to-progression were compared between total and pattern deviation analyses. To estimate the specificity of the progression analyses, identical criteria were applied to visual fields obtained in 102 healthy control subjects, and the rate of visual field improvement was established in the patients with glaucoma and the healthy control subjects. With both event and trend methods, pattern deviation analyses classified approximately 15% fewer eyes as having progressed than did the total deviation analyses. In eyes classified as progressing by both the total and pattern deviation methods, total deviation analyses tended to detect progression earlier than the pattern deviation analyses. 
A comparison of the changes observed in MD and the visual fields' general height (estimated by the 85th percentile of the total deviation values) confirmed that change in the glaucomatous eyes almost always comprised a diffuse component. Pattern deviation analyses of progression may therefore underestimate the true amount of glaucomatous visual field progression. Pattern deviation analyses of visual field progression may underestimate visual field progression in glaucoma, particularly when there is no clinical evidence of increasing media opacity. Clinicians should have access to both total and pattern deviation analyses to make informed decisions on visual field progression in glaucoma.
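The point-wise linear-regression ("trend") criterion described above lends itself to a toy sketch: flag a field as progressing when at least three locations decline faster than −1 dB/year. The significance test (P < 5%) and the confirmation step that re-fits after omitting the last and penultimate observations are simplified away here, and the data are synthetic:

```python
# Toy sketch of a point-wise linear-regression progression criterion:
# fit sensitivity (dB) against time (years) at each visual-field
# location and flag locations whose slope is worse than -1 dB/year.

def ols_slope(years, values):
    """Ordinary least-squares slope (dB per year)."""
    n = len(years)
    mx = sum(years) / n
    my = sum(values) / n
    sxx = sum((x - mx) ** 2 for x in years)
    sxy = sum((x - mx) * (y - my) for x, y in zip(years, values))
    return sxy / sxx

def progressing(years, series_by_location, slope_cutoff=-1.0, min_locs=3):
    """True if at least `min_locs` locations deteriorate faster than cutoff."""
    flagged = sum(1 for s in series_by_location
                  if ols_slope(years, s) < slope_cutoff)
    return flagged >= min_locs

# Synthetic 6-monthly follow-up: 3 declining and 49 stable locations.
years = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
stable = [30 - 0.1 * t for t in years]       # ~ -0.1 dB/year
declining = [30 - 2.0 * t for t in years]    # ~ -2 dB/year
field = [declining] * 3 + [stable] * 49
print(progressing(years, field))  # True
```

In the study itself the same machinery is applied twice, once to total-deviation and once to pattern-deviation values, which is exactly where the two analyses can disagree.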
Visualizing the Fundamental Physics of Rapid Earth Penetration Using Transparent Soils
2015-03-01
DTRA-TR-14-80. Approved for public release.
Karlsson, Kristina; Sikström, Sverker; Willander, Johan
2013-01-01
The semantic content, or the meaning, is the essence of autobiographical memories. In comparison to previous research, which has mainly focused on the phenomenological experience and the age distribution of retrieved events, the present study provides a novel view on the retrieval of event information by quantifying the information as semantic representations. We investigated the semantic representation of sensory cued autobiographical events and studied the modality hierarchy within the multimodal retrieval cues. The experiment comprised a cued recall task, where the participants were presented with visual, auditory, olfactory or multimodal retrieval cues and asked to recall autobiographical events. The results indicated that the three different unimodal retrieval cues generate significantly different semantic representations. Further, the auditory and the visual modalities contributed the most to the semantic representation of the multimodally retrieved events. Finally, the semantic representation of the multimodal condition could be described as a combination of the three unimodal conditions. In conclusion, these results suggest that the meaning of the retrieved event information depends on the modality of the retrieval cues.
ERIC Educational Resources Information Center
Kelly, Resa M.
2014-01-01
Molecular visualizations have been widely endorsed by many chemical educators as an efficient way to convey the dynamic and atomic-level details of chemistry events. Research indicates that students who use molecular visualizations are able to incorporate most of the intended features of the animations into their explanations. However, studies…
A Partnership across Boundaries: Arts Integration in High Schools
ERIC Educational Resources Information Center
Pennisi, Alice C.
2012-01-01
In this article, the author talks about innovative high school curricula that engage students through the study of visual art in conjunction with critical study of history and social movements. Just as historians place a premium on locating and interpreting events in time, the visual arts are centered on getting an idea across in a visual form,…
Saccadic Eye Movements Impose a Natural Bottleneck on Visual Short-Term Memory
ERIC Educational Resources Information Center
Ohl, Sven; Rolfs, Martin
2017-01-01
Visual short-term memory (VSTM) is a crucial repository of information when events unfold rapidly before our eyes, yet it maintains only a fraction of the sensory information encoded by the visual system. Here, we tested the hypothesis that saccadic eye movements provide a natural bottleneck for the transition of fragile content in sensory memory…
Korinth, Sebastian Peter; Sommer, Werner; Breznitz, Zvia
2012-01-01
Little is known about the relationship of reading speed and early visual processes in normal readers. Here we examined the association of the early P1, N170 and late N1 component in visual event-related potentials (ERPs) with silent reading speed and a number of additional cognitive skills in a sample of 52 adult German readers utilizing a Lexical Decision Task (LDT) and a Face Decision Task (FDT). Amplitudes of the N170 component in the LDT but, interestingly, also in the FDT correlated with behavioral tests measuring silent reading speed. We suggest that reading speed performance can be at least partially accounted for by the extraction of essential structural information from visual stimuli, consisting of a domain-general and a domain-specific expertise-based portion. © 2011 Elsevier Inc. All rights reserved.
Oculomotor Reflexes as a Test of Visual Dysfunctions in Cognitively Impaired Observers
2012-10-01
[Fragmentary excerpt: absolute gaze is not measured in the paradigm, as this would require a gaze calibration; a figure plots horizontal gaze position over time, with a red bar marking a visual nystagmus event while the dots drifted to the right; a table pairs reflexes with stimuli, e.g., visual nystagmus with a luminance grating, low-level motion with an equiluminant grating, and color vision with contrast gratings.]
Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F
2015-11-01
Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140-200ms, 220-280ms, and 350-500ms after stimulus onset. Copyright © 2015 Elsevier Ltd. All rights reserved.
Improvised Nuclear Device Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buddemeier, Brooke; Suski, Nancy
2011-07-12
Reducing the casualties of catastrophic terrorist attacks requires an understanding of weapons of mass destruction (WMD) effects, infrastructure damage, atmospheric dispersion, and health effects. The Federal Planning Guidance for Response to a Nuclear Detonation provides the strategy for response to an improvised nuclear device (IND) detonation. The supporting science developed by national laboratories and other technical organizations for this document significantly improves our understanding of the hazards posed by such an event. Detailed fallout predictions from the advanced suite of three-dimensional meteorology and plume/fallout models developed at Lawrence Livermore National Laboratory, including extensive global geographical and real-time meteorological databases to support model calculations, are a key part of response planning. This presentation describes the methodology and results to date, including visualization aids developed for response organizations. These products have greatly enhanced the community planning process through first-person points of view and description of the dynamic nature of the event.
Delorme, Arnaud; Miyakoshi, Makoto; Jung, Tzyy-Ping; Makeig, Scott
2014-01-01
With the advent of modern computing methods, modeling trial-to-trial variability in biophysical recordings, including electroencephalography (EEG), has become of increasing interest. Yet no widely used method exists for comparing variability in ordered collections of single-trial data epochs across conditions and subjects. We have developed a method based on an ERP-image visualization tool in which potential, spectral power, or some other measure at each time point in a set of event-related single-trial data epochs is represented as color-coded horizontal lines that are then stacked to form a 2-D colored image. Moving-window smoothing across trial epochs can make otherwise hidden event-related features in the data more perceptible. Stacking trials in different orders, for example ordered by subject reaction time, by context-related information such as inter-stimulus interval, or by some other characteristic of the data (e.g., latency-window mean power or phase of some EEG source), can reveal aspects of the multifold complexities of trial-to-trial EEG data variability. This study demonstrates new methods for computing and visualizing grand ERP-image plots across subjects and for performing robust statistical testing on the resulting images. These methods have been implemented and made freely available in the EEGLAB signal-processing environment that we maintain and distribute. PMID:25447029
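The core ERP-image operations the abstract describes (stack single-trial epochs as image rows, sort the rows by a trial characteristic such as reaction time, then smooth with a moving window across trials) can be sketched with synthetic data. This illustrates the idea only and is not the EEGLAB implementation:

```python
import numpy as np

# ERP-image sketch: trials become rows of a 2-D image; sorting by
# reaction time and averaging across adjacent trials makes a
# response-locked feature visible that single noisy trials hide.
rng = np.random.default_rng(0)
n_trials, n_times = 200, 100
rt = rng.uniform(30, 70, n_trials)              # synthetic "reaction time"
epochs = rng.normal(0, 1, (n_trials, n_times))  # noisy single-trial epochs
for i, r in enumerate(rt):                      # response-locked bump per trial
    epochs[i, int(r):int(r) + 5] += 3.0

order = np.argsort(rt)                          # sort rows by reaction time
erp_image = epochs[order]

def smooth_across_trials(img: np.ndarray, width: int = 10) -> np.ndarray:
    """Moving-window average over adjacent (sorted) trials (axis 0)."""
    kernel = np.ones(width) / width
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="valid"), 0, img)

smoothed = smooth_across_trials(erp_image)
print(smoothed.shape)  # (191, 100): 200 - 10 + 1 smoothed rows
```

Rendering `smoothed` with any image plot (trials on the y-axis, time on the x-axis, color for amplitude) yields the diagonal response-locked stripe characteristic of RT-sorted ERP images.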
Flom, Ross; Johnson, Sarah
2011-03-01
Between 12 and 14 months of age, infants begin to use another's direction of gaze and affective expression in learning about various objects and events. What is not well understood is how long infants' behaviour towards a previously unfamiliar object continues to be influenced following their participation in circumstances of social referencing. In this experiment, we examined infants' sensitivity to an adult's direction of gaze and their visual preference for one of two objects following a 5-min, 1-day, or 1-month delay. Ninety-six 12-month-olds participated. For half of the infants during habituation (i.e., familiarization), the adult's direction of gaze was directed towards an unfamiliar object (look condition). For the remaining half of the infants during habituation, the adult's direction of gaze was directed away from the unfamiliar object (look-away condition). All infants were habituated to two events. One event consisted of an adult looking towards (look condition) or away from (look-away condition) an object while facially and vocally conveying a positive affective expression. The second event consisted of the same adult looking towards or away from a different object while conveying a disgusted affective expression. Following the habituation phase and a 5-min, 1-day, or 1-month delay, infants' visual preference was assessed. During the visual preference phase, infants saw the two objects side by side, with the adult who had conveyed the affective expression no longer visible. Results of the visual preference phase indicate that infants in the look condition showed a significant preference for the object previously paired with the positive affect following a 5-min and 1-day delay. No significant visual preference was found in the look condition following a 1-month delay. No significant preferences were found at any retention interval in the look-away condition. Results are discussed in terms of early learning, social referencing, and early memory.
©2010 The British Psychological Society.
Comstock, Timothy L; Paterno, Michael R; Singh, Angele; Erb, Tara; Davis, Elizabeth
2011-01-01
To compare the safety and efficacy of loteprednol etabonate ophthalmic ointment 0.5% (LE ointment), a new topical ointment formulation, with vehicle for the treatment of inflammation and pain following cataract surgery. Two randomized, multicenter, double-masked, parallel-group, vehicle-controlled studies were conducted. Patients aged ≥18 years with a combined postoperative anterior chamber cells and flare (ACI) ≥ Grade 3 following uncomplicated cataract surgery participated in seven study visits. Patients self-administered either topical LE ointment or vehicle four times daily for 14 days. Efficacy outcomes included the proportion of patients with complete resolution of ACI and the proportion of patients with no (Grade 0) pain at postoperative day 8. Safety outcomes included the incidence of adverse events, ocular symptoms, changes in intraocular pressure and visual acuity, and biomicroscopy and funduscopy findings. Data from the two studies were combined. The integrated intent-to-treat population consisted of 805 patients (mean [standard deviation] age 69.0 [9.2] years; 58.0% female and 89.7% white). Significantly more LE ointment-treated patients than vehicle-treated patients had complete resolution of ACI (27.7% versus 12.5%) and no pain (75.5% versus 43.1%) at day 8 (P < 0.0001 for both). Fewer LE ointment-treated patients required rescue medication (27.7% versus 63.8%), and fewer had an ocular adverse event (47.2% versus 78.0%, P < 0.0001) while on study treatment. The most common ocular adverse events with LE ointment were anterior chamber inflammation, photophobia, corneal edema, conjunctival hyperemia, eye pain, and iritis. Mean intraocular pressure decreased in both treatment groups. Four patients had increased intraocular pressure ≥10 mmHg (three LE ointment and one vehicle) prior to rescue medication. 
Visual acuity and dilated funduscopy results were similar between the treatment groups, with the exception of visual acuity at visits 5 and 6, which favored LE ointment. LE ointment was efficacious and well tolerated in the treatment of ocular inflammation and pain following cataract surgery.
Maurer, Urs; Zevin, Jason D.; McCandliss, Bruce D.
2015-01-01
The N170 component of the event-related potential (ERP) reflects experience-dependent neural changes in several forms of visual expertise, including expertise for visual words. Readers skilled in writing systems that link characters to phonemes (i.e., alphabetic writing) typically produce a left-lateralized N170 to visual word forms. This study examined the N170 in three Japanese scripts that link characters to larger phonological units. Participants were monolingual English speakers (EL1) and native Japanese speakers (JL1) who were also proficient in English. ERPs were collected using a 129-channel array, as participants performed a series of experiments viewing words or novel control stimuli in a repetition detection task. The N170 was strongly left-lateralized for all three Japanese scripts (including logographic Kanji characters) in JL1 participants, but bilateral in EL1 participants viewing these same stimuli. This demonstrates that left-lateralization of the N170 is dependent on specific reading expertise and is not limited to alphabetic scripts. Additional contrasts within the moraic Katakana script revealed equivalent N170 responses in JL1 speakers for familiar Katakana words and for Kanji words transcribed into novel Katakana words, suggesting that the N170 expertise effect is driven by script familiarity rather than familiarity with particular visual word forms. Finally, for English words and novel symbol string stimuli, both EL1 and JL1 subjects produced equivalent responses for the novel symbols, and more left-lateralized N170 responses for the English words, indicating that such effects are not limited to the first language. Taken together, these cross-linguistic results suggest that similar neural processes underlie visual expertise for print in very different writing systems. PMID:18370600
Social Media Visual Analytics for Events
NASA Astrophysics Data System (ADS)
Diakopoulos, Nicholas; Naaman, Mor; Yazdani, Tayebeh; Kivran-Swaine, Funda
For large-scale multimedia events such as televised debates and speeches, the amount of content on social media channels such as Facebook or Twitter can easily become overwhelming, yet still contain information that may aid and augment understanding of the multimedia content via individual social media items, or aggregate information from the crowd's response. In this work we discuss this opportunity in the context of a social media visual analytic tool, Vox Civitas, designed to help journalists, media professionals, or other researchers make sense of large-scale aggregations of social media content around multimedia broadcast events. We discuss the design of the tool, present and evaluate the text analysis techniques used to enable the presentation, and detail the visual and interaction design. We provide an exploratory evaluation based on a user study in which journalists interacted with the system to analyze and report on a dataset of over 100,000 Twitter messages collected during the broadcast of the U.S. State of the Union presidential address in 2010.
Boltzmann, Melanie; Rüsseler, Jascha
2013-12-13
Event-related brain potentials (ERPs) were used to investigate training-related changes in fast visual word recognition of functionally illiterate adults. Analyses focused on the left-lateralized occipito-temporal N170, which represents the earliest processing of visual word forms. Event-related brain potentials were recorded from 20 functional illiterates receiving intensive literacy training for adults, 10 functional illiterates not participating in the training and 14 regular readers while they read words, pseudowords or viewed symbol strings. Subjects were required to press a button whenever a stimulus was immediately repeated. Attending intensive literacy training was associated with improvements in reading and writing skills and with an increase of the word-related N170 amplitude. For untrained functional illiterates and regular readers no changes in literacy skills or N170 amplitude were observed. Results of the present study suggest that the word-related N170 can still be modulated in adulthood as a result of the improvements in literacy skills.
Detection of visual events along the apparent motion trace in patients with paranoid schizophrenia.
Sanders, Lia Lira Olivier; Muckli, Lars; de Millas, Walter; Lautenschlager, Marion; Heinz, Andreas; Kathmann, Norbert; Sterzer, Philipp
2012-07-30
Dysfunctional prediction in sensory processing has been suggested as a possible causal mechanism in the development of delusions in patients with schizophrenia. Previous studies in healthy subjects have shown that while the perception of apparent motion can mask visual events along the illusory motion trace, such motion masking is reduced when events are spatio-temporally compatible with the illusion, and, therefore, predictable. Here we tested the hypothesis that this specific detection advantage for predictable target stimuli on the apparent motion trace is reduced in patients with paranoid schizophrenia. Our data show that, although target detection along the illusory motion trace is generally impaired, both patients and healthy control participants detect predictable targets more often than unpredictable targets. Patients had a stronger motion masking effect when compared to controls. However, patients showed the same advantage in the detection of predictable targets as healthy control subjects. Our findings reveal stronger motion masking but intact prediction of visual events along the apparent motion trace in patients with paranoid schizophrenia and suggest that the sensory prediction mechanism underlying apparent motion is not impaired in paranoid schizophrenia. Copyright © 2012. Published by Elsevier Ireland Ltd.
Zator, Krysten; Katz, Albert N
2017-07-01
Here, we examined linguistic differences in the reports of memories produced by three cueing methods. Two groups of young adults were cued visually, either by words representing events or popular cultural phenomena that took place when they were 5, 10, or 16 years of age, or by a general lifetime period word cue directing them to that period in their life. A third group heard 30-second-long musical clips of songs popular during the same three time periods. In each condition, participants typed a specific event memory evoked by the cue, and these typed memories were subjected to analysis by the Linguistic Inquiry and Word Count (LIWC) program. Differences in the reports produced indicated that listening to music evoked memories embodied in motor-perceptual systems more so than memories evoked by our word-cueing conditions. Additionally, relative to music cues, lifetime period word cues produced memories with reliably more uses of personal pronouns, past tense terms, and negative emotions. The findings provide evidence for the embodiment of autobiographical memories, and how those differ when the cues emphasise different aspects of the encoded events.
Why do Cross-Flow Turbines Stall?
NASA Astrophysics Data System (ADS)
Cavagnaro, Robert; Strom, Benjamin; Polagye, Brian
2015-11-01
Hydrokinetic turbines are prone to instability and stall near their peak operating points under torque control. Understanding the physics of turbine stall may help to mitigate this undesirable occurrence and improve the robustness of torque controllers. A laboratory-scale two-bladed cross-flow turbine operating at a chord-based Reynolds number of ~3 × 10^4 is shown to stall at a critical tip-speed ratio. Experiments are conducted by bringing the turbine to this critical speed in a recirculating current flume by increasing resistive torque and allowing the rotor to rapidly decelerate while monitoring inflow velocity, torque, and drag. The turbine stalls probabilistically, with a distribution generated from hundreds of such events. A machine learning algorithm identifies stall events and indicates the effectiveness of available measurements or combinations of measurements as predictors. Bubble flow visualization and PIV are utilized to observe fluid conditions during stall events, including the formation, separation, and advection of leading-edge vortices involved in the stall process.
National Cancer Institute News
The Science of Serious Gaming: Exploring the Benefits of Science-Based Games in the Classroom
NASA Astrophysics Data System (ADS)
Kurtz, N.
2016-02-01
Finding ways to connect scientists with the classroom is an important part of sharing enthusiasm for science with the public. Utilizing the visual arts and serious gaming techniques has benefits for all participants, including the engagement of multiple learning sectors and the involvement of whole-brain teaching methods. The activities in this presentation draw from real-world events that require higher level thinking strategies to discover and differentiate naturally occurring patterns.
Perception and control of rotorcraft flight
NASA Technical Reports Server (NTRS)
Owen, Dean H.
1991-01-01
Three topics which can be applied to rotorcraft flight are examined: (1) the nature of visual information; (2) what visual information is informative about; and (3) the control of visual information. The anchorage of visual perception is defined as the distribution of structure in the surrounding optical array or the distribution of optical structure over the retinal surface. A debate was provoked about whether the referent of visual event perception, and in turn control, is optical motion, kinetics, or dynamics. The interface of control theory and visual perception is also considered. The relationship among these problems is the basis of this article.
Health coaching for glaucoma care: a pilot study using mixed methods
Vin, Anita; Schneider, Suzanne; Muir, Kelly W; Rosdahl, Jullia A
2015-01-01
Introduction Adherence to glaucoma medications is essential for successful treatment of the disease but is complex and difficult for many of our patients. Health coaching has been used successfully in the treatment of other chronic diseases. This pilot study explores the use of health coaching for glaucoma care. Methods A mixed methods study design was used to assess the health coaching intervention for glaucoma patients. The health coaching intervention consisted of four to six health coaching sessions with a certified health coach via telephone. Quantitative measures included demographic and health information, adherence to glaucoma medications (using the visual analog adherence scale and medication event monitoring system), and an exit survey rating the experience. Qualitative measures included a precoaching health questionnaire, notes made by the coach during the intervention, and an exit interview with the subjects at the end of the study. Results Four glaucoma patients participated in the study; all derived benefits from the health coaching. Study subjects demonstrated increased glaucoma drop adherence in response to the coaching intervention, in both visual analog scale and medication event monitoring system. Study subjects’ qualitative feedback reflected a perceived improvement in both eye and general health self-care. The subjects stated that they would recommend health coaching to friends or family members. Conclusion Health coaching was helpful to the glaucoma patients in this study; it has the potential to improve glaucoma care and overall health. PMID:26604666
NASA Astrophysics Data System (ADS)
Gorbunov, Michael E.; Cardellach, Estel; Lauritsen, Kent B.
2018-03-01
Linear and non-linear representations of wave fields constitute the basis of modern algorithms for analysis of radio occultation (RO) data. Linear representations are implemented by Fourier Integral Operators, which allow for high-resolution retrieval of bending angles. Non-linear representations include the Wigner Distribution Function (WDF), which equals the pseudo-density of energy in the ray space. Representations allow for filtering wave fields by suppressing some areas of the ray space and mapping the field back from the transformed space to the initial one. We apply this technique to the retrieval of reflected rays from RO observations. The use of reflected rays may increase the accuracy of the retrieval of the atmospheric refractivity. Reflected rays can be identified by the visual inspection of WDF or spectrogram plots. Numerous examples from COSMIC data indicate that reflections are mostly observed over oceans or snow, in particular over Antarctica. We introduce the reflection index, which characterizes the relative intensity of the reflected ray with respect to the direct ray. The index allows for the automatic identification of events with reflections. We use the radio holographic estimate of the errors of the retrieved bending angle profiles of reflected rays. Indices evaluated for a large set of events with visually identified reflections showed good agreement with our definition of the reflection index.
NASA Technical Reports Server (NTRS)
1998-01-01
Final preparations for lift off of the DELTA II Mars Pathfinder Rocket are shown. Activities include loading the liquid oxygen, completing the construction of the Rover, and placing the Rover into the Lander. After the countdown, important visual events include the launch of the Delta Rocket, burnout and separation of the three Solid Rocket Boosters, and the main engine cutoff. The cutoff of the main engine marks the beginning of the second stage engine. After the completion of the second stage, the third stage engine ignites and then cuts off. Once the third stage engine cuts off spacecraft separation occurs.
Sex Differences during Visual Scanning of Occlusion Events in Infants
ERIC Educational Resources Information Center
Wilcox, Teresa; Alexander, Gerianne M.; Wheeler, Lesley; Norvell, Jennifer M.
2012-01-01
A growing number of sex differences in infancy have been reported. One task on which they have been observed reliably is the event-mapping task. In event mapping, infants view an occlusion event involving 1 or 2 objects, the occluder is removed, and then infants see 1 object. Typically, boys are more likely than girls to detect an inconsistency…
Singh, Monika; Bhoge, Rajesh K; Randhawa, Gurinderjit
2018-04-20
Background: Confirming the integrity of seed samples in powdered form is important prior to conducting a genetically modified organism (GMO) test. Rapid onsite methods may provide a technological solution to check for genetically modified (GM) events at ports of entry. In India, Bt cotton is the commercialized GM crop with four approved GM events; however, 59 GM events have been approved globally. GMO screening is required to test for authorized GM events. The identity and amplifiability of test samples could be ensured first by employing endogenous genes as an internal control. Objective: A rapid onsite detection method was developed for an endogenous reference gene, stearoyl acyl carrier protein desaturase (Sad1) of cotton, employing visual and real-time loop-mediated isothermal amplification (LAMP). Methods: The assays were performed at a constant temperature of 63°C for 30 min for visual LAMP and 62°C for 40 min for real-time LAMP. Positive amplification was visualized as a change in color from orange to green on addition of SYBR® Green, or detected as real-time amplification curves. Results: Specificity of the LAMP assays was confirmed using a set of 10 samples. The LOD for visual LAMP was up to 0.1%, detecting 40 target copies, and for real-time LAMP up to 0.05%, detecting 20 target copies. Conclusions: The developed methods could be utilized to confirm the integrity of seed powder prior to conducting a GMO test for specific GM events of cotton. Highlights: LAMP assays for the endogenous Sad1 gene of cotton have been developed to be used as an internal control for onsite GMO testing in cotton.
Innovative Visualization Techniques applied to a Flood Scenario
NASA Astrophysics Data System (ADS)
Falcão, António; Ho, Quan; Lopes, Pedro; Malamud, Bruce D.; Ribeiro, Rita; Jern, Mikael
2013-04-01
The large and ever-increasing amounts of multi-dimensional, time-varying and geospatial digital information from multiple sources represent a major challenge for today's analysts. We present a set of visualization techniques that can be used for the interactive analysis of geo-referenced and time-sampled data sets, providing an integrated mechanism that aids the user in collaboratively exploring, presenting and communicating visually complex and dynamic data. Here we present these concepts in the context of a 4-hour flood scenario from Lisbon in 2010, with data that includes measures of water column (flood height) every 10 minutes at a 4.5 m x 4.5 m resolution, topography, building damage, building information, and online base maps. Techniques we use include web-based linked views, multiple charts, map layers and storytelling. We explain two of these in more detail that are not currently in common use for visualization of data: storytelling and web-based linked views. Visual storytelling is a method for providing a guided but interactive process of visualizing data, allowing more engaging data exploration through interactive web-enabled visualizations. Within storytelling, a snapshot mechanism helps the author of a story to highlight data views of particular interest and subsequently share or guide others within the data analysis process. This allows a particular person to select relevant attributes for a snapshot, such as highlighted regions for comparisons, time step, class values for colour legend, etc., and provide a snapshot of the current application state, which can then be provided as a hyperlink and recreated by someone else. Since data can be embedded within this snapshot, it is possible to interactively visualize and manipulate it.
The second technique, web-based linked views, includes multiple windows which interactively respond to the user's selections, so that when an object is selected and changed in one window, it automatically updates in all the other windows. These concepts can be part of a collaborative platform, where multiple people share and work together on the data via online access, which also allows its remote usage from a mobile platform. Storytelling augments analysis and decision-making capabilities, allowing users to assimilate complex situations and reach informed decisions, in addition to helping the public visualize information. In our visualization scenario, developed in the context of the VA-4D project for the European Space Agency (see http://www.ca3-uninova.org/project_va4d), we make use of the GAV (GeoAnalytics Visualization) framework, a web-oriented visual analytics application based on multiple interactive views. The final visualization that we produce includes multiple interactive views, including a dynamic multi-layer map surrounded by other visualizations such as bar charts, time graphs and scatter plots. The map provides flood and building information, on top of a base city map (street maps and/or satellite imagery provided by online map services such as Google Maps, Bing Maps etc.). Damage over time for selected buildings, damage for all buildings at a chosen time period, and correlation between damage and water depth can be analysed in the other views. This interactive web-based visualization that incorporates the ideas of storytelling, web-based linked views, and other visualization techniques, for a 4-hour flood event in Lisbon in 2010, can be found online at http://www.ncomva.se/flash/projects/esa/flooding/.
Implacable images: why epileptiform events continue to be featured in film and television.
Kerson, Toba Schwaber; Kerson, Lawrence A
2006-06-01
Epileptiform events have been portrayed in film since 1900 and on television since the 1950s. Over time, portrayals have not reflected medicine's understanding of epilepsy. At present, it is unlikely that individuals who do not have a close relationship with someone with a seizure disorder will witness a seizure. Because fictive and often incorrect images appear increasingly, many think of them as accurate depictions. The research addresses three questions in relation to these images: How do directors use the images? Why do uses of seizures in visual media not reflect contemporary scientific knowledge? Why have they persisted and increased in use? Data consist of material from 192 films and television episodes. The general category of seizures includes seizures in characters said to have epilepsy or some other condition, seizures related to drug or alcohol use, pseudoseizures and feigned seizures, and a category in which, for example, someone is described as "having a fit." The research demonstrates how epileptiform events drive the narrative, support the genre, evoke specific emotional reactions, accentuate traits of characters with seizures, highlight qualities of other characters through their responses to the seizures, act as catalysts for actions, and enhance the voyeuristic experience of the audience. Twenty video sequences are included in the manuscript. The authors conclude that the visual experience of seizures remains so enthralling that its use is most likely to increase, particularly on television, and that as the public has less experience with real seizures, depictions in film will continue to be more concerned with what the image can do for the show and less interested in accurate portrayals. Ways to influence depictions are suggested.
Analysis, Mining and Visualization Service at NCSA
NASA Astrophysics Data System (ADS)
Wilhelmson, R.; Cox, D.; Welge, M.
2004-12-01
NCSA's goal is to create a balanced system that fully supports high-end computing as well as: 1) high-end data management and analysis; 2) visualization of massive, highly complex data collections; 3) large databases; 4) geographically distributed Grid computing; and 5) collaboratories, all based on a secure computational environment and driven with workflow-based services. To this end NCSA has defined a new technology path that includes the integration and provision of cyberservices in support of data analysis, mining, and visualization. NCSA has begun to develop and apply a data mining system, NCSA Data-to-Knowledge (D2K), in conjunction with both the application and research communities. NCSA D2K will enable the formation of model-based application workflows and visual programming interfaces for rapid data analysis. The Java-based D2K framework, which integrates analytical data mining methods with data management, data transformation, and information visualization tools, will be configurable from the cyberservices (web and grid services, tools, etc.) viewpoint to solve a wide range of important data mining problems. This effort will use modules, such as new classification methods for the detection of high-risk geoscience events, and existing D2K data management, machine learning, and information visualization modules. A D2K cyberservices interface will be developed to seamlessly connect client applications with remote back-end D2K servers, providing computational resources for data mining and integration with local or remote data stores. This work is being coordinated with SDSC's data and services efforts. The new NCSA Visualization embedded workflow environment (NVIEW) will be integrated with D2K functionality to tightly couple informatics and scientific visualization with the data analysis and management services.
Visualization services will access and filter disparate data sources, simplifying tasks such as fusing related data from distinct sources into a coherent visual representation. This approach enables collaboration among geographically dispersed researchers via portals and front-end clients, and the coupling with data management services enables recording associations among datasets and building annotation systems into visualization tools and portals, giving scientists a persistent, shareable, virtual lab notebook. To facilitate provision of these cyberservices to the national community, NCSA will be providing a computational environment for large-scale data assimilation, analysis, mining, and visualization. This will be initially implemented on the new 512-processor shared-memory SGI's recently purchased by NCSA. In addition to standard batch capabilities, NCSA will provide on-demand capabilities for those projects requiring rapid response (e.g., development of severe weather, earthquake events) for decision makers. It will also be used for non-sequential interactive analysis of data sets where it is important to have access to large data volumes over space and time.
Comparison of event landslide inventories: the Pogliaschina catchment test case, Italy
NASA Astrophysics Data System (ADS)
Mondini, A. C.; Viero, A.; Cavalli, M.; Marchi, L.; Herrera, G.; Guzzetti, F.
2014-07-01
Event landslide inventory maps document the extent of populations of landslides caused by a single natural trigger, such as an earthquake, an intense rainfall event, or a rapid snowmelt event. Event inventory maps are important for landslide susceptibility and hazard modelling, and prove useful to manage residual risk after a landslide-triggering event. Standards for the preparation of event landslide inventory maps are lacking. Traditional methods are based on the visual interpretation of stereoscopic aerial photography, aided by field surveys. New and emerging techniques exploit remotely sensed data and semi-automatic algorithms. We describe the production and comparison of two independent event inventories prepared for the Pogliaschina catchment, Liguria, Northwest Italy. The two inventories show landslides triggered by an intense rainfall event on 25 October 2011, and were prepared through the visual interpretation of digital aerial photographs taken 3 days and 33 days after the event, and by processing a very-high-resolution image taken by the WorldView-2 satellite 4 days after the event. We compare the two inventories qualitatively and quantitatively using established and new metrics, and we discuss reasons for the differences between the two landslide maps. We expect that the results of our work can help in deciding on the most appropriate method to prepare reliable event inventory maps, and outline the advantages and the limitations of the different approaches.
Causal structures in inflation
NASA Astrophysics Data System (ADS)
Ellis, George F. R.; Uzan, Jean-Philippe
2015-12-01
This article reviews the properties and limitations associated with the existence of particle, visual, and event horizons in cosmology in general and in inflationary universes in particular, carefully distinguishing them from 'Hubble horizons'. It explores to what extent one might be able to probe conditions beyond the visual horizon (which is close in size to the present Hubble radius), thereby showing that visual horizons place major limits on what are observationally testable aspects of a multiverse, if such exists. Indeed these limits largely prevent us from observationally proving a multiverse either does or does not exist. We emphasize that event horizons play no role at all in observational cosmology, even in the multiverse context, despite some claims to the contrary in the literature.
Tackling action-based video abstraction of animated movies for video browsing
NASA Astrophysics Data System (ADS)
Ionescu, Bogdan; Ott, Laurent; Lambert, Patrick; Coquin, Didier; Pacureanu, Alexandra; Buzuloiu, Vasile
2010-07-01
We address the issue of producing automatic video abstracts in the context of the video indexing of animated movies. For a quick browse of a movie's visual content, we propose a storyboard-like summary, which follows the movie's events by retaining one key frame for each specific scene. To capture the shot's visual activity, we use histograms of cumulative interframe distances, and the key frames are selected according to the distribution of the histogram's modes. For a preview of the movie's exciting action parts, we propose a trailer-like video highlight, whose aim is to show only the most interesting parts of the movie. Our method is based on a relatively standard approach, i.e., highlighting action through the analysis of the movie's rhythm and visual activity information. To suit every type of movie content, including predominantly static movies or movies without exciting parts, the concept of action depends on the movie's average rhythm. The efficiency of our approach is confirmed through several end-user studies.
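The mode-based key-frame selection described above can be sketched roughly as follows. This is a hypothetical simplification for illustration, not the authors' implementation: it histograms cumulative inter-frame distances and keeps one frame from the dominant mode, whereas the paper selects a key frame per scene from the full distribution of histogram modes.

```python
import numpy as np

def select_key_frames(frames, n_bins=10):
    """Pick a representative frame via the dominant mode of the histogram
    of cumulative inter-frame distances (simplified single-mode variant)."""
    # Visual activity proxy: mean absolute difference between consecutive frames.
    diffs = [np.abs(frames[i] - frames[i - 1]).mean() for i in range(1, len(frames))]
    cumulative = np.cumsum(diffs)
    counts, edges = np.histogram(cumulative, bins=n_bins)
    mode_bin = int(counts.argmax())  # most populated activity level
    in_mode = np.where((cumulative >= edges[mode_bin]) &
                       (cumulative <= edges[mode_bin + 1]))[0]
    # diffs[i] compares frame i+1 to frame i, hence the +1 offset.
    return [int(i) + 1 for i in in_mode[:1]]
```

A full implementation would first segment the movie into scenes and keep one key frame per scene, using the complete set of histogram modes rather than only the largest.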
Real-Time Visualization Tool Integrating STEREO, ACE, SOHO and the SDO
NASA Astrophysics Data System (ADS)
Schroeder, P. C.; Luhmann, J. G.; Marchant, W.
2011-12-01
The STEREO/IMPACT team has developed a new web-based visualization tool for near real-time data from the STEREO instruments, ACE and SOHO as well as relevant models of solar activity. This site integrates images, solar energetic particle, solar wind plasma and magnetic field measurements in an intuitive way using near real-time products from NOAA and other sources to give an overview of recent space weather events. This site enhances the browse tools already available at UC Berkeley, UCLA and Caltech which allow users to visualize similar data from the start of the STEREO mission. Our new near real-time tool utilizes publicly available real-time data products from a number of missions and instruments, including SOHO LASCO C2 images from the SOHO team's NASA site, SDO AIA images from the SDO team's NASA site, STEREO IMPACT SEP data plots and ACE EPAM data plots from the NOAA Space Weather Prediction Center and STEREO spacecraft positions from the STEREO Science Center.
Evaluation of Visual Field Progression in Glaucoma: Quasar Regression Program and Event Analysis.
Díaz-Alemán, Valentín T; González-Hernández, Marta; Perera-Sanz, Daniel; Armas-Domínguez, Karintia
2016-01-01
To determine the sensitivity, specificity and agreement between the Quasar program, glaucoma progression analysis (GPA II) event analysis and expert opinion in the detection of glaucomatous progression. The Quasar program is based on linear regression analysis of both mean defect (MD) and pattern standard deviation (PSD). Each series of visual fields was evaluated by three methods: Quasar, GPA II, and four experts. The sensitivity, specificity and agreement (kappa) for each method were calculated, using expert opinion as the reference standard. The study included 439 SITA Standard visual fields of 56 eyes of 42 patients, with a mean of 7.8 ± 0.8 visual fields per eye. When suspected cases of progression were considered stable, sensitivity and specificity of Quasar, GPA II and the experts were 86.6% and 70.7%, 26.6% and 95.1%, and 86.6% and 92.6% respectively. When suspected cases of progression were considered as progressing, sensitivity and specificity of Quasar, GPA II and the experts were 79.1% and 81.2%, 45.8% and 90.6%, and 85.4% and 90.6% respectively. The agreement between Quasar and GPA II when suspected cases were considered stable or progressing was 0.03 and 0.28 respectively. The degree of agreement between Quasar and the experts when suspected cases were considered stable or progressing was 0.472 and 0.507. The degree of agreement between GPA II and the experts when suspected cases were considered stable or progressing was 0.262 and 0.342. The combination of MD and PSD regression analysis in the Quasar program showed better agreement with the experts and higher sensitivity than GPA II.
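As a rough illustration of the regression idea behind Quasar, a minimal sketch might fit a linear trend to each index over the exam series and flag progression when both slopes worsen. The function name, the combination rule, and the rate thresholds below are invented for this example; they are not Quasar's published criteria.

```python
import numpy as np

def flag_progression(md_series, psd_series, md_rate=-0.5, psd_rate=0.5):
    """Flag glaucomatous progression when the mean defect (MD) trend falls
    below md_rate (dB per exam; MD growing more negative is worsening) AND
    the pattern standard deviation (PSD) trend exceeds psd_rate.
    Thresholds are illustrative only."""
    x = np.arange(len(md_series))
    md_slope = np.polyfit(x, md_series, 1)[0]    # linear trend of MD
    psd_slope = np.polyfit(x, psd_series, 1)[0]  # linear trend of PSD
    return bool(md_slope <= md_rate and psd_slope >= psd_rate)
```

A production criterion would also test the statistical significance of each slope against the series' variability rather than rely on fixed rate thresholds.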
What can neuromorphic event-driven precise timing add to spike-based pattern recognition?
Akolkar, Himanshu; Meyer, Cedric; Clady, Xavier; Marre, Olivier; Bartolozzi, Chiara; Panzeri, Stefano; Benosman, Ryad
2015-03-01
This letter introduces a study to precisely measure what an increase in spike timing precision can add to spike-driven pattern recognition algorithms. The concept of generating spikes from images by converting gray levels into spike timings is currently at the basis of almost every spike-based modeling of biological visual systems. The use of images naturally leads to generating incorrect, artificial, and redundant spike timings and, more importantly, also contradicts biological findings indicating that visual processing is massively parallel, asynchronous, and of high temporal resolution. A new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors. Unlike conventional cameras, these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input. The resulting output is optimally sparse in space and time: each pixel is sampled individually and precisely timed, producing data only when new (previously unknown) information is available (event based). This letter uses the high temporal resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks, specifically when reaching the conventional range of machine vision acquisition frequencies (30-60 Hz). The use of information theory to characterize separability between classes for each temporal resolution shows that high temporal acquisition provides up to 70% more information than conventional spikes generated from frame-based acquisition as used in standard artificial vision, thus drastically increasing the separability between classes of objects. Experiments on real data show that the amount of information loss is correlated with temporal precision.
Our information-theoretic study highlights the potentials of neuromorphic asynchronous visual sensors for both practical applications and theoretical investigations. Moreover, it suggests that representing visual information as a precise sequence of spike times as reported in the retina offers considerable advantages for neuro-inspired visual computations.
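The frame-based gray-level-to-latency conversion that the letter critiques, and the loss of temporal precision at conventional frame rates, can be illustrated with a small sketch. The function names and the linear latency code are assumptions for illustration, not the letter's method:

```python
import numpy as np

def gray_to_latency(image, t_max=50.0):
    """Latency coding: brighter pixels spike earlier. This is the frame-based
    image-to-spike conversion the letter argues against; the linear mapping
    is an assumed convention."""
    img = np.asarray(image, dtype=float)
    norm = (img - img.min()) / max(img.max() - img.min(), 1e-9)
    return t_max * (1.0 - norm)  # spike time (ms) per pixel

def quantize_times(times, dt):
    """Degrade temporal precision by snapping spike times to a dt-ms clock,
    mimicking frame-based acquisition (dt ~ 16-33 ms at 60-30 Hz)."""
    return np.floor(np.asarray(times, dtype=float) / dt) * dt
```

Comparing recognition performance on `quantize_times` output at decreasing `dt` would reproduce, in miniature, the precision-versus-performance trade-off the letter measures.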
Ancestral gene reconstruction and synthesis of ancient rhodopsins in the laboratory.
Chang, Belinda S W
2003-08-01
Laboratory synthesis of ancestral proteins offers an intriguing opportunity to study the past directly. The development of Bayesian methods to infer ancestral sequences, combined with advances in models of molecular evolution, and synthetic gene technology make this an increasingly promising approach in evolutionary studies of molecular function. Visual pigments form the first step in the biochemical cascade of events in the retina in all animals known to possess visual capabilities. In vertebrates, the necessity of spanning a dynamic range of light intensities of many orders of magnitude has given rise to two different types of photoreceptors, rods specialized for dim-light conditions, and cones for daylight and color vision. These photoreceptors contain different types of visual pigment genes. Reviewed here are methods of inferring ancestral sequences, chemical synthesis of artificial ancestral genes in the laboratory, and applications to the evolution of vertebrate visual systems and the experimental recreation of an archosaur rod visual pigment. The ancestral archosaurs gave rise to several notable lineages of diapsid reptiles, including the birds and the dinosaurs, and would have existed over 200 MYA. What little is known of their physiology comes from fossil remains, and inference based on the biology of their living descendants. Despite its age, an ancestral archosaur pigment was successfully recreated in the lab, and showed interesting properties of its wavelength sensitivity that may have implications for the visual capabilities of the ancestral archosaurs in dim light.
Visual Salience in the Change Detection Paradigm: The Special Role of Object Onset
ERIC Educational Resources Information Center
Cole, Geoff G.; Kentridge, Robert W.; Heywood, Charles A.
2004-01-01
The relative efficacy with which appearance of a new object orients visual attention was investigated. At issue is whether the visual system treats onset as being of particular importance or only 1 of a number of stimulus events equally likely to summon attention. Using the 1-shot change detection paradigm, the authors compared detectability of…
ERIC Educational Resources Information Center
Zhao, Pei; Zhao, Jing; Weng, Xuchu; Li, Su
2018-01-01
Visual word N170 is an index of perceptual expertise for visual words across different writing systems. Recent developmental studies have shown the early emergence of the visual word N170 and its close association with individuals' reading ability. In the current study, we investigated whether fine-tuning of the N170 for Chinese characters could emerge after…
Responses to Targets in the Visual Periphery in Deaf and Normal-Hearing Adults
ERIC Educational Resources Information Center
Rothpletz, Ann M.; Ashmead, Daniel H.; Tharpe, Anne Marie
2003-01-01
The purpose of this study was to compare the response times of deaf and normal-hearing individuals to the onset of target events in the visual periphery in distracting and nondistracting conditions. Visual reaction times to peripheral targets placed at 3 eccentricities to the left and right of a center fixation point were measured in prelingually…
Innes-Brown, Hamish; Barutchu, Ayla; Crewther, David P.
2013-01-01
The effect of multi-modal vs uni-modal prior stimuli on the subsequent processing of a simple flash stimulus was studied in the context of the audio-visual ‘flash-beep’ illusion, in which the number of flashes a person sees is influenced by accompanying beep stimuli. EEG recordings were made while combinations of simple visual and audio-visual stimuli were presented. The experiments found that the electric field strength related to a flash stimulus was stronger when it was preceded by a multi-modal flash/beep stimulus, compared to when it was preceded by another uni-modal flash stimulus. This difference was found to be significant in two distinct timeframes – an early timeframe, from 130–160 ms, and a late timeframe, from 300–320 ms. Source localisation analysis found that the increased activity in the early interval was localised to an area centred on the inferior and superior parietal lobes, whereas the later increase was associated with stronger activity in an area centred on primary and secondary visual cortex, in the occipital lobe. The results suggest that processing of a visual stimulus can be affected by the presence of an immediately prior multisensory event. Relatively long-lasting interactions generated by the initial auditory and visual stimuli altered the processing of a subsequent visual stimulus. PMID:24391939
Stewart, C M; Newlands, S D; Perachio, A A
2004-12-01
Rapid and accurate discrimination of single units from extracellular recordings is a fundamental process for the analysis and interpretation of electrophysiological recordings. We present an algorithm that performs detection, characterization, discrimination, and analysis of action potentials from extracellular recording sessions. The program was entirely written in LabVIEW (National Instruments), and requires no external hardware devices or a priori information about action potential shapes. Waveform events are detected by scanning the digital record for voltages that exceed a user-adjustable trigger. Detected events are characterized to determine nine different time and voltage levels for each event. Various algebraic combinations of these waveform features are used as axis choices for 2-D Cartesian plots of events. The user selects axis choices that generate distinct clusters. Multiple clusters may be defined as action potentials by manually generating boundaries of arbitrary shape. Events defined as action potentials are validated by visual inspection of overlain waveforms. Stimulus-response relationships may be identified by selecting any recorded channel for comparison to continuous and average cycle histograms of binned unit data. The algorithm includes novel aspects of feature analysis and acquisition, including higher acquisition rates for electrophysiological data compared to other channels. The program confirms that electrophysiological data may be discriminated with high speed and efficiency using algebraic combinations of waveform features derived from high-speed digital records.
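The pipeline the abstract describes (threshold-triggered detection, per-event feature measurement, and 2-D feature plots for manual clustering) is a classic spike-sorting recipe. The following Python sketch illustrates the first two stages on synthetic data; the threshold, window length, and feature set here are illustrative assumptions, not the authors' LabVIEW implementation.

```python
import numpy as np

def detect_events(signal, threshold, window=32):
    """Scan the record for rising threshold crossings; skip `window`
    samples after each detection so one waveform yields one event."""
    events = []
    i = 0
    while i < len(signal) - window:
        if signal[i] >= threshold:
            events.append(i)
            i += window  # skip past the rest of this waveform
        else:
            i += 1
    return events

def waveform_features(signal, start, window=32):
    """A few simple time/voltage features of one detected waveform,
    of the kind that could serve as clustering axes."""
    w = signal[start:start + window]
    peak, trough = float(w.max()), float(w.min())
    return {
        "peak": peak,
        "trough": trough,
        "amplitude": peak - trough,      # peak-to-trough voltage
        "peak_time": int(np.argmax(w)),  # sample offset of the peak
    }

# Synthetic record: Gaussian noise plus two injected spike-like waveforms.
rng = np.random.default_rng(0)
record = rng.normal(0.0, 0.05, 1000)
spike = np.concatenate([np.linspace(0, 1, 5),
                        np.linspace(1, -0.5, 10),
                        np.linspace(-0.5, 0, 10)])
for pos in (200, 600):
    record[pos:pos + len(spike)] += spike

events = detect_events(record, threshold=0.5)
features = [waveform_features(record, e) for e in events]
```

Plotting, say, `amplitude` against `peak_time` for all detected events would give one of the 2-D Cartesian feature plots in which the user could draw cluster boundaries.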
Family genome browser: visualizing genomes with pedigree information.
Juan, Liran; Liu, Yongzhuang; Wang, Yongtian; Teng, Mingxiang; Zang, Tianyi; Wang, Yadong
2015-07-15
Families with inherited diseases are widely used in Mendelian/complex disease studies. Owing to the advances in high-throughput sequencing technologies, family genome sequencing is becoming more and more prevalent. Visualizing family genomes can greatly facilitate human genetics studies and personalized medicine. However, due to the complex genetic relationships and high similarities among genomes of consanguineous family members, family genomes are difficult to visualize in traditional genome visualization frameworks. How to visualize the family genome variants and their functions with integrated pedigree information remains a critical challenge. We developed the Family Genome Browser (FGB) to provide comprehensive analysis and visualization for family genomes. The FGB can visualize family genomes effectively at both the individual level and the variant level by integrating genome data with pedigree information. Family genome analysis, including determination of the parental origin of variants, detection of de novo mutations, identification of potential recombination events and identical-by-descent segments, etc., can be performed flexibly. Diverse annotations for the family genome variants, such as dbSNP memberships, linkage disequilibrium, genes, variant effects, potential phenotypes, etc., are illustrated as well. Moreover, the FGB can automatically search for de novo mutations and compound heterozygous variants for a selected individual, and guide investigators to find high-risk genes with flexible navigation options. These features enable users to investigate and understand family genomes intuitively and systematically. The FGB is available at http://mlg.hit.edu.cn/FGB/. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
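One of the analyses the FGB automates, the search for de novo mutations, reduces at its simplest to a trio genotype comparison: flag variants carried by the child but by neither parent. A minimal sketch, with a hypothetical genotype encoding that is not the FGB's actual data model:

```python
def de_novo_candidates(child, mother, father):
    """Flag sites where the child's genotype carries an alternate
    allele ('1') that appears in neither parent's genotype, i.e.
    candidate de novo mutations. Genotypes are simple 'a/b' strings
    keyed by a site label; missing parental sites default to '0/0'."""
    candidates = []
    for site, genotype in child.items():
        alt_in_child = "1" in genotype
        alt_in_parents = ("1" in mother.get(site, "0/0")
                          or "1" in father.get(site, "0/0"))
        if alt_in_child and not alt_in_parents:
            candidates.append(site)
    return candidates

# Toy trio: only chr1:100 is present in the child and absent in both parents.
child  = {"chr1:100": "0/1", "chr1:200": "0/1", "chr2:50": "0/0"}
mother = {"chr1:100": "0/0", "chr1:200": "0/1", "chr2:50": "0/0"}
father = {"chr1:100": "0/0", "chr1:200": "0/0", "chr2:50": "0/1"}

print(de_novo_candidates(child, mother, father))  # -> ['chr1:100']
```

A real pipeline would additionally filter by genotype quality and sequencing depth before labeling a site de novo, which is why the FGB presents such calls alongside its annotation tracks rather than as final answers.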
Gravity influences top-down signals in visual processing.
Cheron, Guy; Leroy, Axelle; Palmero-Soler, Ernesto; De Saedeleer, Caty; Bengoetxea, Ana; Cebolla, Ana-Maria; Vidal, Manuel; Dan, Bernard; Berthoz, Alain; McIntyre, Joseph
2014-01-01
Visual perception is not only based on incoming visual signals but also on information about a multimodal reference frame that incorporates vestibulo-proprioceptive input and motor signals. In addition, top-down modulation of visual processing has previously been demonstrated during cognitive operations including selective attention and working memory tasks. In the absence of a stable gravitational reference, the updating of salient stimuli becomes crucial for successful visuo-spatial behavior by humans in weightlessness. Here we found that visually-evoked potentials triggered by the image of a tunnel just prior to an impending 3D movement in a virtual navigation task were altered in weightlessness aboard the International Space Station, while those evoked by a classical 2D-checkerboard were not. Specifically, the analysis of event-related spectral perturbations and inter-trial phase coherency of these EEG signals recorded in the frontal and occipital areas showed that phase-locking of theta-alpha oscillations was suppressed in weightlessness, but only for the 3D tunnel image. Moreover, analysis of the phase of the coherency demonstrated the existence on Earth of a directional flux in the EEG signals from the frontal to the occipital areas mediating a top-down modulation during the presentation of the image of the 3D tunnel. In weightlessness, this fronto-occipital, top-down control was transformed into a diverging flux from the central areas toward the frontal and occipital areas. These results demonstrate that gravity-related sensory inputs modulate primary visual areas depending on the affordances of the visual scene.
Numerosity processing in early visual cortex.
Fornaciai, Michele; Brannon, Elizabeth M; Woldorff, Marty G; Park, Joonkoo
2017-08-15
While parietal cortex is thought to be critical for representing numerical magnitudes, we recently reported an event-related potential (ERP) study demonstrating selective neural sensitivity to numerosity over midline occipital sites very early in the time course, suggesting the involvement of early visual cortex in numerosity processing. However, which specific brain area underlies such early activation is not known. Here, we tested whether numerosity-sensitive neural signatures arise specifically from the initial stages of visual cortex, aiming to localize the generator of these signals by taking advantage of the distinctive folding pattern of early occipital cortices around the calcarine sulcus, which predicts an inversion of polarity of ERPs arising from these areas when stimuli are presented in the upper versus lower visual field. Dot arrays, including 8-32 dots constructed systematically across various numerical and non-numerical visual attributes, were presented randomly in either the upper or lower visual hemifields. Our results show that neural responses at about 90 ms post-stimulus were robustly sensitive to numerosity. Moreover, the peculiar pattern of polarity inversion of numerosity-sensitive activity at this stage suggested its generation primarily in V2 and V3. In contrast, numerosity-sensitive ERP activity at occipito-parietal channels later in the time course (210-230 ms) did not show polarity inversion, indicating a subsequent processing stage in the dorsal stream. Overall, these results demonstrate that numerosity processing begins in one of the earliest stages of the cortical visual stream. Copyright © 2017 Elsevier Inc. All rights reserved.
Visualizing global change: earth and biodiversity sciences for museum settings using HDTV
NASA Astrophysics Data System (ADS)
Duba, A.; Gardiner, N.; Kinzler, R.; Trakinski, V.
2006-12-01
Science Bulletins, a production group at the American Museum of Natural History (New York, USA), brings biological and Earth system science data and concepts to over 10 million visitors per year at 27 institutions around the U.S.A. Our target audience is diverse, from novice to expert. News stories and visualizations use the capabilities of satellite imagery to focus public attention on four general themes: human influences on species and ecosystems across all observable spatial extents; biotic feedbacks with the Earth's physical system; characterizing species and ecosystems; and recent events such as natural changes to ecosystems, major findings and publications, or recent syntheses. For Earth science, we use recent natural events to explain the broad scientific concepts of tectonic activity and the processes that underlie climate and weather events. Visualizations show the global, dynamic distribution of atmospheric constituents, ocean temperature and temperature anomaly, and sea ice. Long-term changes are set in contrast to seasonal and longer-term cycles so that viewers appreciate the variety of forces that affect Earth's physical system. We illustrate concepts at a level appropriate for a broad audience to learn more about the dynamic nature of Earth's biota and physical processes. Programming also includes feature stories that explain global change phenomena from the perspectives of eminent scientists and managers charged with implementing public policy based on the best available science. Over the past two and one-half years, biological science stories have highlighted applied research addressing lemur conservation in Madagascar, marine protected areas in the Bahamas, effects of urban sprawl on wood turtles in New England, and taxonomic surveys of marine jellies in Monterey Bay. 
Earth science stories have addressed the volcanic history of present-day Yellowstone National Park, tsunamis, the disappearance of tropical mountain glaciers, the North Atlantic Oscillation, and the oxygenation of the atmosphere. All of these visualizations and HD videos are accessible via the worldwide web with accompanying explanatory material. Periodic surveys of visitors indicate that these media are popular and are effective at communicating important biological and Earth system science concepts to the general public.
van Schie, Hein T; Wijers, Albertus A; Mars, Rogier B; Benjamins, Jeroen S; Stowe, Laurie A
2005-05-01
Event-related brain potentials were used to study the retrieval of visual semantic information to concrete words, and to investigate possible structural overlap between visual object working memory and concreteness effects in word processing. Subjects performed an object working memory task that involved 5 s retention of simple 4-angled polygons (load 1), complex 10-angled polygons (load 2), and a no-load baseline condition. During the polygon retention interval subjects were presented with a lexical decision task to auditorily presented concrete (imageable) and abstract (nonimageable) words, and pseudowords. ERP results are consistent with the use of object working memory for the visualisation of concrete words. Our data indicate a two-step processing model of visual semantics in which visual descriptive information of concrete words is first encoded in semantic memory (indicated by an anterior N400 and posterior occipital positivity), and is subsequently visualised via the network for object working memory (reflected by a left frontal positive slow wave and a bilateral occipital slow wave negativity). Results are discussed in the light of contemporary models of semantic memory.
Hanken, Taylor; Young, Sam; Smilowitz, Karen; Chiampas, George; Waskowski, David
2016-10-01
As one of the largest marathons worldwide, the Bank of America Chicago Marathon (BACCM; Chicago, Illinois USA) accumulates high volumes of data. Race organizers and engaged agencies need the ability to access specific data in real-time. This report details a data visualization system designed for the Chicago Marathon and establishes key principles for event management data visualization. The data visualization system allows for efficient data communication among the organizing agencies of Chicago endurance events. Agencies can observe the progress of the race throughout the day and obtain needed information, such as the number and location of runners on the course and current weather conditions. Implementation of the system can reduce time-consuming, face-to-face interactions between involved agencies by having key data streams in one location, streamlining communications with the purpose of improving race logistics, as well as medical preparedness and response. Hanken T, Young S, Smilowitz K, Chiampas G, Waskowski D. Developing a data visualization system for the Bank of America Chicago Marathon (Chicago, Illinois USA). Prehosp Disaster Med. 2016;31(5):572-577.
Kim, Kyung Hwan; Kim, Ja Hyun
2006-02-20
The aim of this study was to compare spatiotemporal cortical activation patterns during the visual perception of Korean, English, and Chinese words. The comparison of these three languages offers an opportunity to study the effect of written forms on cortical processing of visually presented words, because of partial similarity/difference among words of these languages, and the familiarity of native Koreans with these three languages at the word level. Single-character words and pictograms were excluded from the stimuli in order to activate neuronal circuitries that are involved only in word perception. Since a variety of cerebral processes are sequentially evoked during visual word perception, high temporal resolution is required; we therefore utilized event-related potential (ERP) data obtained from high-density electroencephalograms. The differences and similarities observed from statistical analyses of ERP amplitudes, the correlation between ERP amplitudes and response times, and the patterns of current source density, appear to be in line with demands of visual and semantic analysis resulting from the characteristics of each language, and the expected task difficulties for native Korean subjects.
Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2008-09-16
Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.
Osmolarity: a decisive parameter of bowel agents in intestinal magnetic resonance imaging.
Borthne, Arne S; Abdelnoor, Michael; Storaas, Trygve; Pierre-Jerome, Claude; Kløw, Nils-E
2006-06-01
The aim was to evaluate the importance of the osmolarity of different oral agents for bowel distension and the level of related adverse events. The longitudinal design involved exposure to different oral MR agents on two separate occasions. Four groups of volunteers randomly received 350 ml of gastrografin at one of three different concentrations, or water. On the second occasion they received mannitol, iohexol or iodixanol with equivalent osmolarities, but the control group (water) received mannitol. We recorded the outcomes as the degree of bowel distension, determined as the mean bowel section area, and the total level of discomfort recorded on a visual analogue scale (VAS). The statistical analysis included scatter plots with best-fitted lines from linear regression to study the association between osmolarity and section area and the association between osmolarity and adverse events. A dose-response association was found between increasing osmolarity levels and bowel area in square centimeters (P = 0.00001). A similar dose-response association existed between increasing levels of osmolarity and adverse events (P = 0.001). Osmolarity appears to be more important for bowel distension than the physico-chemical characteristics of the nonabsorbable oral agents. The optimum osmolarity level is determined by the patient's tolerance of the adverse events.
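The reported analysis, a scatter plot with a least-squares line, is straightforward to reproduce generically. The sketch below uses purely synthetic numbers (not the study's measurements) to show the fit and a simple measure of the strength of a dose-response association:

```python
import numpy as np

# Synthetic, illustrative values only (not the study's data):
# osmolarity of the oral agent vs. mean bowel section area.
osmolarity = np.array([300., 600., 900., 1200., 1500., 1800.])  # mOsm/l
area = np.array([4.1, 5.0, 5.8, 6.9, 7.7, 8.6])                 # cm^2

# Best-fitted line (least squares): area ~ slope * osmolarity + intercept
slope, intercept = np.polyfit(osmolarity, area, 1)

# Pearson r quantifies how tightly the points follow the fitted line
r = np.corrcoef(osmolarity, area)[0, 1]
```

A positive slope with a high `r` is the pattern the authors describe as a dose-response association; the study itself additionally reports significance levels (P values) for each regression.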
A computer model of context-dependent perception in a very simple world
NASA Astrophysics Data System (ADS)
Lara-Dammer, Francisco; Hofstadter, Douglas R.; Goldstone, Robert L.
2017-11-01
We propose the foundations of a computer model of scientific discovery that takes into account certain psychological aspects of human observation of the world. To this end, we simulate two main components of such a system. The first is a dynamic microworld in which physical events take place, and the second is an observer that visually perceives entities and events in the microworld. For reasons of space, this paper focuses only on the starting phase of discovery: the relatively simple visual input of objects and collisions.
Airway mechanics and methods used to visualize smooth muscle dynamics in vitro.
Cooper, P R; McParland, B E; Mitchell, H W; Noble, P B; Politi, A Z; Ressmeyer, A R; West, A R
2009-10-01
Contraction of airway smooth muscle (ASM) is regulated by the physiological, structural and mechanical environment in the lung. We review two in vitro techniques, lung slices and airway segment preparations, that enable in situ ASM contraction and airway narrowing to be visualized. Lung slices and airway segment approaches bridge a gap between cell culture and isolated ASM, and whole animal studies. Imaging techniques enable key upstream events involved in airway narrowing, such as ASM cell signalling and structural and mechanical events impinging on ASM, to be investigated.
StreamSqueeze: a dynamic stream visualization for monitoring of event data
NASA Astrophysics Data System (ADS)
Mansmann, Florian; Krstajic, Milos; Fischer, Fabian; Bertini, Enrico
2012-01-01
While automated analytical solutions for data streams are already in place for clear-cut situations, only a few visual approaches have been proposed in the literature for exploratory analysis tasks on dynamic information. However, due to the competitive or security-related advantages that real-time information gives in domains such as finance, business or networking, we are convinced that there is a need for exploratory visualization tools for data streams. Under the conditions that new events have higher relevance and that smooth transitions enable traceability of items, we propose a novel dynamic stream visualization called StreamSqueeze. In this technique the degree of interest of recent items is expressed through an increase in size, so recent events can be shown in more detail. The technique has two main benefits: First, the layout algorithm arranges items in several lists of various sizes and optimizes the positions within each list so that the transition of an item from one list to the other triggers the fewest visual changes. Second, the animation scheme ensures that for 50 percent of the time an item has a static screen position, where reading is most effective, and then continuously shrinks and moves to its next static position in the subsequent list. To demonstrate the capability of our technique, we apply it to large and high-frequency news and syslog streams and show how it maintains optimal stability of the layout under the conditions given above.
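The layout idea, several lists of decreasing size with items cascading from the newest, most detailed list toward smaller ones, can be illustrated with a toy model. This is a simplified sketch with assumed list capacities, not the StreamSqueeze layout optimizer or its animation scheme:

```python
from collections import deque

class StreamLayout:
    """Toy cascade of display lists: the first list holds the newest
    (largest-drawn) items; when a list overflows, its oldest item is
    demoted to the next, smaller list, and items falling off the last
    list leave the display entirely."""

    def __init__(self, capacities=(2, 4, 8)):
        self.lists = [deque(maxlen=c) for c in capacities]

    def push(self, item):
        for lst in self.lists:
            if len(lst) < lst.maxlen:
                lst.appendleft(item)  # room here: item settles in
                return
            demoted = lst.pop()       # oldest item of this full list
            lst.appendleft(item)
            item = demoted            # cascades into the next list
        # falling through the loop: `item` has aged out of the display

    def visible(self):
        return [list(lst) for lst in self.lists]

# Two lists (capacities 2 and 3): push six events and inspect the layout.
layout = StreamLayout(capacities=(2, 3))
for event in range(1, 7):
    layout.push(event)
```

After the six pushes, the newest events (6, 5) occupy the large list and events 4-2 the smaller one, while event 1 has aged out, mirroring how recent stream items get the most screen space.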
Light flash observations during Apollo-Soyuz
NASA Technical Reports Server (NTRS)
Budinger, T. F.; Tobias, C. A.; Huesman, R. H.; Upham, F. T.; Wieskamp, T. F.; Schott, J. U.; Schopper, E.
1976-01-01
A total of 82 visual events were reported by two dark-adapted astronauts during a 90-minute orbit at 225 km altitude. Silver chloride crystal events for that orbit totaled 69 stopping protons and alphas per sq cm and 304 heavy ions with stopping power of 150 MeV sq cm/g or greater. The frequency of visual observations near the geomagnetic poles corresponds to calculated abundances of ions with LET greater than 5 keV per micrometer in tissue. Nuclear collisions of fast protons on C, N, and O in the retina or the abundance of stopping protons can explain the low frequency of events in the SAA for this mission in comparison with the high frequency during Skylab IV at 443 km altitude.
Bathelt, Joe; Dale, Naomi; de Haan, Michelle
2017-10-01
Communication with visual signals, like facial expression, is important in early social development, but the question of whether these signals are necessary for typical social development remains to be addressed. The potential impact on social development of being born with no or very low levels of vision is therefore of high theoretical and clinical interest. The current study investigated event-related potential responses to basic social stimuli in a rare group of school-aged children with visual impairment (VI) caused by congenital disorders of the anterior visual system (globe of the eye, retina, anterior optic nerve). Early-latency event-related potential responses showed no difference between the VI and control groups, suggesting similar initial auditory processing. However, the mean amplitude over central and right frontal channels between 280 and 320 ms was reduced in response to own-name stimuli, but not control stimuli, in children with VI, suggesting differences in social processing. Children with VI also showed an increased rate of autistic-related behaviours, pragmatic language deficits, as well as peer relationship and emotional problems on standard parent questionnaires. These findings suggest that vision may be necessary for the typical development of social processing across modalities. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Component processes underlying future thinking.
D'Argembeau, Arnaud; Ortoleva, Claudia; Jumentier, Sabrina; Van der Linden, Martial
2010-09-01
This study sought to investigate the component processes underlying the ability to imagine future events, using an individual-differences approach. Participants completed several tasks assessing different aspects of future thinking (i.e., fluency, specificity, amount of episodic details, phenomenology) and were also assessed with tasks and questionnaires measuring various component processes that have been hypothesized to support future thinking (i.e., executive processes, visual-spatial processing, relational memory processing, self-consciousness, and time perspective). The main results showed that executive processes were correlated with various measures of future thinking, whereas visual-spatial processing abilities and time perspective were specifically related to the number of sensory descriptions reported when specific future events were imagined. Furthermore, individual differences in self-consciousness predicted the subjective feeling of experiencing the imagined future events. These results suggest that future thinking involves a collection of processes that are related to different facets of future-event representation.
Förster resonance energy transfer as a tool to study photoreceptor biology
Hovan, Stephanie C.; Howell, Scott; Park, Paul S.-H.
2010-01-01
Vision is initiated in photoreceptor cells of the retina by a set of biochemical events called phototransduction. These events occur via coordinated dynamic processes that include changes in secondary messenger concentrations, conformational changes and post-translational modifications of signaling proteins, and protein-protein interactions between signaling partners. A complete description of the orchestration of these dynamic processes is still unavailable. Described in this work is the first step in the development of tools combining fluorescent protein technology, Förster resonance energy transfer (FRET), and transgenic animals that have the potential to reveal important molecular insights about the dynamic processes occurring in photoreceptor cells. We characterize the fluorescent proteins SCFP3A and SYFP2 for use as a donor-acceptor pair in FRET assays, which will facilitate the visualization of dynamic processes in living cells. We also demonstrate the targeted expression of these fluorescent proteins to the rod photoreceptor cells of Xenopus laevis, and describe a general method for detecting FRET in these cells. The general approaches described here can address numerous types of questions related to phototransduction and photoreceptor biology by providing a platform to visualize dynamic processes in molecular detail within a native context. PMID:21198205
Vision-based Detection of Acoustic Timed Events: a Case Study on Clarinet Note Onsets
NASA Astrophysics Data System (ADS)
Bazzica, A.; van Gemert, J. C.; Liem, C. C. S.; Hanjalic, A.
2017-05-01
Acoustic events often have a visual counterpart. Knowledge of visual information can aid the understanding of complex auditory scenes, even when only a stereo mixdown is available in the audio domain, e.g., identifying which musicians are playing in large musical ensembles. In this paper, we consider a vision-based approach to note onset detection. As a case study we focus on challenging, real-world clarinetist videos and carry out preliminary experiments on a 3D convolutional neural network based on multiple streams and purposely avoiding temporal pooling. We release an audiovisual dataset with 4.5 hours of clarinetist videos together with cleaned annotations which include about 36,000 onsets and the coordinates for a number of salient points and regions of interest. By performing several training trials on our dataset, we learned that the problem is challenging. We found that the CNN model is highly sensitive to the optimization algorithm and hyper-parameters, and that treating the problem as binary classification may prevent the joint optimization of precision and recall. To encourage further research, we publicly share our dataset, annotations and all models and detail which issues we came across during our preliminary experiments.
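Onset-detection results such as these are commonly scored by matching predicted onset times to annotated ones within a small tolerance window and reporting precision, recall, and F1, which makes the paper's point about jointly optimizing precision and recall concrete. A minimal sketch (the tolerance value here is an assumption, not taken from the paper):

```python
def onset_metrics(predicted, reference, tolerance=0.05):
    """Greedily match predicted onset times (seconds) to reference
    onsets within +/- tolerance; each reference onset may be matched
    at most once. Returns (precision, recall, F1)."""
    matched, used = 0, set()
    for p in sorted(predicted):
        for i, ref in enumerate(reference):
            if i not in used and abs(p - ref) <= tolerance:
                used.add(i)
                matched += 1
                break
    precision = matched / len(predicted) if predicted else 0.0
    recall = matched / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Three predictions against three annotated onsets; the last prediction
# is 100 ms late and therefore counts as both a miss and a false alarm.
p, r, f1 = onset_metrics([0.10, 0.52, 1.00], [0.08, 0.50, 0.90])
```

Because F1 trades precision against recall, tuning a binary classifier's threshold on either metric alone can hurt the other, which is the joint-optimization difficulty the authors report.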
Karpecki, Paul; Depaolis, Michael; Hunter, Judy A; White, Eric M; Rigel, Lee; Brunner, Lynne S; Usner, Dale W; Paterno, Michael R; Comstock, Timothy L
2009-03-01
Besifloxacin ophthalmic suspension 0.6% is a new topical fluoroquinolone for the treatment of bacterial conjunctivitis. Besifloxacin has potent in vitro activity against a broad spectrum of ocular pathogens, including drug-resistant strains. The primary objective of this study was to compare the clinical and microbiologic efficacy of besifloxacin ophthalmic suspension 0.6% with that of vehicle (the formulation without besifloxacin) in the treatment of bacterial conjunctivitis. This was a multicenter, prospective, randomized, double-masked, vehicle-controlled, parallel-group study in patients with acute bacterial conjunctivitis. Patients received either topical besifloxacin ophthalmic suspension or vehicle administered 3 times daily for 5 days. At study entry and on days 4 and 8 (visits 2 and 3), a clinical assessment of ocular signs and symptoms was performed in both eyes, as well as pinhole visual acuity testing, biomicroscopy, and culture of the infected eye(s). An ophthalmoscopic examination was performed at study entry and on day 8. The primary efficacy outcome measures were clinical resolution and eradication of the baseline bacterial infection on day 8 in culture-confirmed patients. The safety evaluation included adverse events, changes in visual acuity, and biomicroscopy and ophthalmoscopy findings in all patients who received at least 1 dose of active treatment or vehicle. The safety population consisted of 269 patients (mean [SD] age, 34.2 [22.3] years; 60.2% female; 82.5% white) with acute bacterial conjunctivitis. The culture-confirmed intent-to-treat population consisted of 118 patients (60 besifloxacin ophthalmic suspension, 58 vehicle). Significantly more patients receiving besifloxacin ophthalmic suspension than vehicle had clinical resolution of the baseline infection at visit 3 (44/60 [73.3%] vs 25/58 [43.1%], respectively; P < 0.001). 
Rates of bacterial eradication also were significantly greater with besifloxacin ophthalmic suspension compared with vehicle at visit 3 (53/60 [88.3%] vs 35/58 [60.3%]; P < 0.001). The cumulative frequency of adverse events did not differ significantly between the 2 groups (69/137 [50.4%] and 70/132 [53.0%]). The most common ocular adverse events were eye pain (20/190 treated eyes [10.5%] and 13/188 [6.9%]), blurred vision (20/190 [10.5%] and 22/188 [11.7%]), and eye irritation (14/190 [7.4%] and 23/188 [12.2%]); these events were of mild or moderate severity. Changes in visual acuity and treatment-emergent events observed on biomicroscopy and direct ophthalmoscopy also were comparable between treatment groups. Besifloxacin ophthalmic suspension 0.6% given 3 times daily for 5 days was both efficacious and well tolerated compared with vehicle in the treatment of these patients with bacterial conjunctivitis. ClinicalTrials.gov Identifier: NCT00622908.
Comparison of animated jet stream visualizations
NASA Astrophysics Data System (ADS)
Nocke, Thomas; Hoffmann, Peter
2016-04-01
The visualization of 3D atmospheric phenomena in space and time is still a challenging problem. In particular, multiple solutions of animated jet stream visualizations have been produced in recent years, which were designed to visually analyze and communicate the jet stream and its impacts on weather circulation patterns and extreme weather events. This PICO presentation integrates popular and new jet animation solutions and inter-compares them. The applied techniques (e.g. stream lines or line integral convolution) and parametrizations (color mapping, line lengths) are discussed with respect to visualization quality criteria and their suitability for certain visualization tasks (e.g. jet pattern and jet anomaly analysis, communicating the jet's relevance for climate change).
ERIC Educational Resources Information Center
Takaya, Kentei
2016-01-01
Visual literacy is an important skill for students to have in order to interpret embedded messages on signs and in advertisements successfully. As advertisements today tend to feature iconic people or events that shaped the modern world, it is crucial to develop students' visual literacy skills so they can comprehend the intended messages. This…
Interactive Visualization and Analysis of Geospatial Data Sets - TrikeND-iGlobe
NASA Astrophysics Data System (ADS)
Rosebrock, Uwe; Hogan, Patrick; Chandola, Varun
2013-04-01
The visualization of scientific datasets is becoming an ever-increasing challenge as advances in computing technologies have enabled scientists to build high resolution climate models that have produced petabytes of climate data. To interrogate and analyze these large datasets in real-time is a task that pushes the boundaries of computing hardware and software. But integration of climate datasets with geospatial data requires a considerable amount of effort and close familiarity with various data formats and projection systems, which has prevented widespread utilization outside of the climate community. TrikeND-iGlobe is a sophisticated software tool that bridges this gap, allows easy integration of climate datasets with geospatial datasets, and provides sophisticated visualization and analysis capabilities. The objective for TrikeND-iGlobe is the continued building of an open source 4D virtual globe application using NASA World Wind technology that integrates analysis of climate model outputs with remote sensing observations as well as demographic and environmental data sets. This will facilitate a better understanding of global and regional phenomena, and the impact analysis of climate extreme events. The critical aim is real-time interactive interrogation. At the data centric level the primary aim is to enable the user to interact with the data in real-time for the purpose of analysis - locally or remotely. TrikeND-iGlobe provides the basis for the incorporation of modular tools that provide extended interactions with the data, including sub-setting, aggregation, re-shaping, time series analysis methods and animation to produce publication-quality imagery. TrikeND-iGlobe may be run locally or can be accessed via a web interface supported by high-performance visualization compute nodes placed close to the data.
It supports visualizing heterogeneous data formats: traditional geospatial datasets along with scientific data sets with geographic coordinates (NetCDF, HDF, etc.). It also supports multiple data access mechanisms, including HTTP, FTP, WMS, WCS, and Thredds Data Server (for NetCDF data). For scientific data, TrikeND-iGlobe supports various visualization capabilities, including animations, vector field visualization, etc. TrikeND-iGlobe is a collaborative open-source project; contributors include NASA (ARC-PX), ORNL (Oak Ridge National Laboratory), Unidata, Kansas University, CSIRO CMAR Australia, and Geoscience Australia.
Mann, David L; Abernethy, Bruce; Farrow, Damian; Davis, Mark; Spratford, Wayne
2010-05-01
This article describes a new automated method for the controlled occlusion of vision during natural tasks. The method permits the time course of the presence or absence of visual information to be linked to identifiable events within the task of interest. An example application is presented in which the method is used to examine the ability of cricket batsmen to pick up useful information from the prerelease movement patterns of the opposing bowler. Two key events, separated by a consistent within-action time lag, were identified in the cricket bowling action sequence, namely the penultimate foot strike prior to ball release (Event 1) and the subsequent moment of ball release (Event 2). Force-plate registration of Event 1 was then used as a trigger to facilitate automated occlusion of vision using liquid crystal occlusion goggles at time points relative to Event 2. Validation demonstrated that, compared with existing approaches that are based on manual triggering, this method of occlusion permitted considerable gains in temporal precision and a reduction in the number of unusable trials. A more efficient and accurate protocol to examine anticipation is produced, while preserving the important natural coupling between perception and action.
Reflex epilepsy and reflex seizures of the visual system: a clinical review.
Zifkin, B G; Kasteleijn-Nolst Trenité, D
2000-09-01
Reflex epilepsy of the visual system is characterised by seizures precipitated by visual stimuli. EEG responses to intermittent photic stimulation depend on the age and sex of the subject and on how stimulation is performed: abnormalities are commonest in children and adolescents, especially girls. Only generalised paroxysmal epileptiform discharges are clearly linked to epilepsy. Abnormal responses may occur in asymptomatic subjects, especially children. Photosensitivity has an important genetic component. Some patients are sensitive to patterns, suggesting an occipital trigger for these events. Myoclonus and generalised convulsive and nonconvulsive seizures may be triggered by visual stimuli. Partial seizures occur less often and can be confused with migraine. Although usually idiopathic, photosensitive epilepsy may occur in degenerative diseases and some patients with photosensitive partial seizures have brain lesions. Sunlight and video screens, including television, video games, and computer displays, are the commonest environmental triggers of photosensitive seizures. Outbreaks of triggered seizures have occurred when certain flashing or patterned images have been broadcast. There are regulations to prevent this in some countries only. Pure photosensitive epilepsy has a good prognosis. There is a role for treatment with and without antiepileptic drugs, but photosensitivity usually does not disappear spontaneously; when it does, it is typically in the third decade.
Udagawa, Sachiko; Iwase, Aiko; Susuki, Yuto; Kunimatsu-Sanuki, Shiho; Fukuchi, Takeo; Matsumoto, Chota; Ohno, Yuko; Ono, Hiroshi; Sugiyama, Kazuhisa; Araie, Makoto
2018-01-01
Purpose Traffic accidents are associated with the visual function of drivers, as well as many other factors. Driving simulator systems have the advantage of controlling for traffic- and automobile-related conditions, and using pinhole glasses can control the degree of concentric constriction of the visual field. We evaluated the effect of concentric constriction of the visual field on automobile driving, using driving simulator tests. Methods Subjects meeting criteria for normal eyesight were included in the study. Pinhole glasses with variable aperture sizes were adjusted to mimic the conditions of concentric visual field constrictions of 10° and 15°, using a CLOCK CHART®. The test contained 8 scenarios (2 oncoming right-turning cars and 6 jump-out events from the side). Results Eighty-eight subjects were included in the study; 37 (mean age = 52.9±15.8 years) subjects were assigned to the 15° group, and 51 (mean = 48.6±15.5 years) were assigned to the 10° group. For all 8 scenarios, the number of accidents was significantly higher among pinhole-wearing subjects. The average number of all types of accidents per person was significantly higher in the pinhole 10° group (4.59±1.81) than the pinhole 15° group (3.68±1.49) (P = 0.032). The number of accidents associated with jump-out scenarios, in which a vehicle approaches from the side on a straight road with a good view, was significantly higher in the pinhole 10° group than in the pinhole 15° group. Conclusions Concentric constriction of the visual field was associated with an increased number of traffic accidents. The simulation findings indicated that a visual field of 10° to 15° may be important for avoiding collisions in places where there is a straight road with a good view. PMID:29538425
[Postoperative visual loss due to conversion disorder after spine surgery: a case report].
Bezerra, Dailson Mamede; Bezerra, Eglantine Mamede; Silva Junior, Antonio Jorge; Amorim, Marco Aurélio Soares; Miranda, Denismar Borges de
Patients undergoing spinal surgeries may develop postoperative visual loss. We present a case of total bilateral visual loss in a patient who, despite having clinical and surgical risk factors for organic lesion, evolved with visual disturbance due to conversion disorder. A male patient, 39 years old, 71 kg, 1.72 m, ASA I, admitted to undergo fusion and discectomy at L4-L5 and L5-S1. Venoclysis, cardioscopy, oximetry, NIBP; induction with remifentanil, propofol and rocuronium; intubation with ETT (8.0 mm) followed by capnography and urinary catheterization for diuresis. Maintenance with full target-controlled intravenous anesthesia. During fixation and laminectomy, the patient developed severe bleeding and hypovolemic shock. After 30 minutes, hemostasis and hemodynamic stability were achieved with infusion of norepinephrine, volume expansion, and blood products. In the ICU, the patient developed mental confusion, weakness in the limbs, and bilateral visual loss. It was not possible to identify clinical, laboratory or image findings of organic lesion. He evolved with episodes of anxiety, emotional lability, and language impairment; the hypothesis of conversion syndrome with visual component was raised after psychiatric evaluation. The patient had complete resolution of symptoms after visual education and introduction of low doses of antipsychotic, antidepressant, and benzodiazepine. Other symptoms also regressed, and the patient was discharged 12 days after surgery. After 60 days, the patient had no more symptoms. Conversion disorders may have different signs and symptoms of non-organic origin, including a visual component. It is noteworthy that the occurrence of this type of visual dysfunction in the postoperative period of spinal surgery is a rare event and should be remembered as a differential diagnosis. Copyright © 2015 Sociedade Brasileira de Anestesiologia. Publicado por Elsevier Editora Ltda. All rights reserved.
When the “I” Looks at the “Me”: Autobiographical Memory, Visual Perspective, and the Self
Sutin, Angelina R.; Robins, Richard W.
2009-01-01
This article presents a theoretical model of the self processes involved in autobiographical memories and proposes competing hypotheses for the role of visual perspective in autobiographical memory retrieval. Autobiographical memories can be retrieved from either the 1st person perspective, in which individuals see the event through their own eyes, or from the 3rd person perspective, in which individuals see themselves and the event from the perspective of an external observer. A growing body of research suggests that the visual perspective from which a memory is retrieved has important implications for a person's thoughts, feelings, and goals, and is integrally related to a host of self-evaluative processes. We review the relevant research literature, present our theoretical model, and outline directions for future research. PMID:18848783
A multicenter study on the health-related quality of life of cataract patients: baseline data.
Yamada, Masakazu; Mizuno, Yoshinobu; Miyake, Yozo
2009-09-01
This study examines the impact of cataracts on health-related quality of life (HR-QOL) and health events in the older population. The study population consisted of 439 unoperated cataract patients aged 60 years or older who visited any of the facilities affiliated with the Cataract Survey Group of the National Hospital Organization of Japan, which has been conducting a prospective multicenter cohort study on cataract patients. HR-QOL of the patients was assessed using the Japanese version of the Visual Function Questionnaire-25 (VFQ-25) and the 8-Item Short-Form Health Survey (SF-8). The health condition and health events of the patients were also investigated. The average age of the 439 patients enrolled (126 men and 313 women) was 73.0 ± 7.1 years. There were 323 patients with comorbidities (73.6%), 81 of whom (23.7%) felt it was hard to visit the hospital owing to their visual impairment. In the previous year, 74 patients (16.9%) had experienced a fall and 14 (3.2%) had been in a traffic accident. Of those, 43.2% and 8.3% respectively answered that the falls and the accident could have been triggered by their visual impairment. When the patients were classified according to visual acuity, most of the VFQ-25 subscale scores declined significantly with decreasing visual acuity, whereas the SF-8 scores showed no significant change. The participants of this study were patients with unoperated cataract, and thus the decline of HR-QOL was modest. The survey of health events, however, revealed that the visual constraint has a certain impact on the daily lives of the older population.
Liu, Ying; Hu, Huijing; Jones, Jeffery A; Guo, Zhiqiang; Li, Weifeng; Chen, Xi; Liu, Peng; Liu, Hanjun
2015-08-01
Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event-related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in auditory feedback regarding ongoing vocalisations. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying the auditory-vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
IBES: a tool for creating instructions based on event segmentation
Mura, Katharina; Petersen, Nils; Huff, Markus; Ghose, Tandra
2013-01-01
Receiving informative, well-structured, and well-designed instructions supports performance and memory in assembly tasks. We describe IBES, a tool with which users can quickly and easily create multimedia, step-by-step instructions by segmenting a video of a task into segments. In a validation study we demonstrate that the step-by-step structure of the visual instructions created by the tool corresponds to the natural event boundaries, which are assessed by event segmentation and are known to play an important role in memory processes. In one part of the study, 20 participants created instructions based on videos of two different scenarios by using the proposed tool. In the other part of the study, 10 and 12 participants respectively segmented videos of the same scenarios yielding event boundaries for coarse and fine events. We found that the visual steps chosen by the participants for creating the instruction manual had corresponding events in the event segmentation. The number of instructional steps was a compromise between the number of fine and coarse events. Our interpretation of results is that the tool picks up on natural human event perception processes of segmenting an ongoing activity into events and enables the convenient transfer into meaningful multimedia instructions for assembly tasks. We discuss the practical application of IBES, for example, creating manuals for differing expertise levels, and give suggestions for research on user-oriented instructional design based on this tool. PMID:24454296
Baselining PMU Data to Find Patterns and Anomalies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amidan, Brett G.; Follum, James D.; Freeman, Kimberly A.
This paper looks at the application of situational awareness methodologies with respect to power grid data. These methodologies establish baselines that look for typical patterns and atypical behavior in the data. The objectives of the baselining analyses are to provide: real-time analytics, the capability to look at historical trends and events, and reliable predictions of the near future state of the grid. Multivariate algorithms were created to establish normal baseline behavior and then score each moment in time according to its variance from the baseline. Detailed multivariate analytical techniques are described in this paper that produced ways to identify typical patterns and atypical behavior. In this case, atypical behavior is behavior that is unenvisioned. Visualizations were also produced to help explain the behavior that was identified mathematically. Examples are shown to help describe how to read and interpret the analyses and visualizations. Preliminary work has been performed on PMU data sets from BPA (Bonneville Power Administration) and EI (Eastern Interconnect). Actual results are not fully shown here because of confidentiality issues. Comparisons between atypical events found mathematically and actual events showed that many of the actual events are also atypical events; however there are many atypical events that do not correlate to any actual events. Additional work needs to be done to help classify the atypical events into actual events, so that the importance of the events can be better understood.
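As a concrete illustration of the baseline-then-score idea described in this abstract, the sketch below scores each moment in time against a historical baseline using a Mahalanobis distance. This is a hypothetical reconstruction, not the paper's actual multivariate algorithm; all names and data are made up.

```python
import numpy as np

def baseline_scores(history, current):
    """Score each row of `current` by its Mahalanobis distance from the
    mean/covariance of the `history` window (larger = more atypical)."""
    mu = history.mean(axis=0)
    cov = np.cov(history, rowvar=False)
    cov_inv = np.linalg.pinv(cov)        # pseudo-inverse guards against a singular covariance
    diff = current - mu
    # row-wise sqrt(d^T * C^-1 * d)
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

rng = np.random.default_rng(0)
hist = rng.normal(0.0, 1.0, size=(500, 3))                  # typical PMU-like measurements
cur = np.vstack([np.zeros((1, 3)), np.full((1, 3), 8.0)])   # one typical moment, one far outlier
scores = baseline_scores(hist, cur)
assert scores[1] > scores[0]    # the atypical moment scores higher
```

A real deployment would additionally need a rolling baseline window and a threshold for flagging, but the score itself is the part the abstract describes.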
NASA Astrophysics Data System (ADS)
Ding, R.; He, T.
2017-12-01
With the increased popularity of mobile applications and services, there has been a growing demand for more advanced mobile technologies that utilize real-time Location Based Services (LBS) data to support natural hazard response efforts. Compared to traditional sources like the census bureau, which often can only provide historical and static data, an LBS service can provide more current data to drive a real-time natural hazard response system to more accurately process and assess issues such as population density in areas impacted by a hazard. However, manually preparing or preprocessing the data to suit the needs of the particular application would be time-consuming. This research aims to implement a population heatmap visual analytics system based on real-time data for natural disaster emergency management. The system comprises a three-layered architecture, including data collection, data processing, and visual analysis layers. Real-time, location-based data meeting certain aggregation conditions are collected from multiple sources across the Internet, then processed and stored in a cloud-based data store. Parallel computing is utilized to provide fast and accurate access to the pre-processed population data based on criteria such as the disaster event and to generate a location-based population heatmap as well as other types of visual digital outputs using auxiliary analysis tools. At present, a prototype system has been developed which geographically covers the entire region of China and combines the population heatmap with data from the Earthquake Catalogs database. Preliminary results indicate that the generation of dynamic population density heatmaps based on the prototype system has effectively supported rapid earthquake emergency rescue and evacuation efforts as well as helping responders and decision makers to evaluate and assess earthquake damage.
Correlation analyses that were conducted revealed that the aggregation and movement of people depended on various factors, including earthquake occurrence time and location of epicenter. This research hopes to continue to build upon the success of the prototype system in order to improve and extend the system to support the analysis of earthquakes and other types of natural hazard events.
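One way the processing layer of such a system could turn raw location points into a heatmap grid is sketched below. This is an assumed illustration with made-up coordinates, not the prototype's actual implementation.

```python
import numpy as np

def heatmap_counts(lons, lats, lon_edges, lat_edges):
    """Count location points per grid cell; rows index latitude bins."""
    counts, _, _ = np.histogram2d(lats, lons, bins=[lat_edges, lon_edges])
    return counts

# Made-up points clustered near (lon=103.8, lat=31), e.g. an epicentral region.
lons = np.array([103.7, 103.8, 103.9, 103.8, 110.0])
lats = np.array([30.9, 31.0, 31.1, 31.0, 35.0])
grid = heatmap_counts(lons, lats,
                      lon_edges=np.arange(100.0, 112.1, 4.0),   # 3 longitude bins
                      lat_edges=np.arange(28.0, 36.1, 4.0))     # 2 latitude bins
# grid.max() == 4: the four clustered points fall in a single cell,
# which a front end would render as the hottest heatmap tile.
```

A production system would bin millions of points per time slice and hand the grid to a map renderer, but the aggregation step is the same.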
Electrophysiological evidence for phenomenal consciousness.
Revonsuo, Antti; Koivisto, Mika
2010-09-01
Recent evidence from event-related brain potentials (ERPs) lends support to two central theses in Lamme's theory. The earliest ERP correlate of visual consciousness appears over posterior visual cortex around 100-200 ms after stimulus onset. Its scalp topography and time window are consistent with recurrent processing in the visual cortex. This electrophysiological correlate of visual consciousness is mostly independent of later ERPs reflecting selective attention and working memory functions. Overall, the ERP evidence supports the view that phenomenal consciousness of a visual stimulus emerges earlier than access consciousness, and that attention and awareness are served by distinct neural processes.
NASA Astrophysics Data System (ADS)
Nagai, Hiroto; Watanabe, Manabu; Tomii, Naoya
2016-04-01
A major earthquake, measuring 7.8 Mw, occurred on April 25, 2015, in Lamjung district, central Nepal, causing more than 9,000 deaths and 23,000 injuries. During the event, termed the 2015 Gorkha earthquake, the most catastrophic collapse of the mountainside was reported in the Langtang Valley, located 60 km north of Kathmandu. In this collapse, a huge boulder-rich avalanche and a sudden air pressure wave traveled from a steep south-facing slope to the bottom of a U-shaped valley, resulting in more than 170 deaths. Accurate in-situ surveys are necessary to investigate such events, and to find out ways to avoid similar catastrophic events in the future. Geospatial information obtained from multiple satellite observations is invaluable for such surveys in remote mountain regions. In this study, we (1) identify the collapsed sediment using synthetic aperture radar, (2) conduct detailed mapping using high-resolution optical imagery, and (3) estimate sediment volumes from digital surface models in order to quantify the immediate situation of the avalanched sediment. (1) Visual interpretation and coherence calculations using Phased Array type L-band Synthetic Aperture Radar-2 (PALSAR-2) images give a consistent area of sediment cover. Emergency observation was carried out the day after the earthquake, using the PALSAR-2 onboard the Advanced Land Observing Satellite-2 (ALOS-2, "DAICHI-2"). Visual interpretation of orthorectified backscatter amplitude images revealed completely altered surface features, over which the identifiable sediment cover extended for 0.73 km^2 (28°13'N, 85°30'E). Additionally, measuring the decrease in normalized coherence quantifies the similarity between the pre- and post-event surface features, after the removal of numerous noise patches by focal statistics. Calculations within the study area revealed high-value areas corresponding to the visually identified sediment area. Visual interpretation of the amplitude images and the coherence calculations thus produce similar extractions of collapse sediment. (2) Visual interpretation of high-resolution satellite imagery suggests multiple layers of sediment with different physical properties. A DigitalGlobe satellite, WorldView-3, observed the Langtang Valley on May 8, 2015, using a panchromatic sensor with a spatial resolution of 0.3 m. Identification and mapping of avalanche-induced surface features were performed manually. The surface features were classified into 15 segments on the basis of sediment features, including darkness, the dominance of scattering or flowing features, and the recognition of boulders. Together, these characteristics suggest various combinations of physical properties, such as viscosity, density, and ice and snow content. (3) Altitude differences between the pre- and post-quake digital surface models (DSM) suggest the deposition of 5.2×10^5 m^3 of sediment, mainly along the river bed. A 5 m-grid pre-event DSM was generated from PRISM stereo-pair images acquired on October 12, 2008. A 2 m-grid post-event DSM was generated from WorldView-3 images acquired on May 8, 2015. Comparing the two DSMs, a vertical difference of up to 22±13 m is observed, mainly along the river bed. Estimates of the total avalanched volume reach 5.2×10^5 m^3, with a possible range of 3.7×10^5 to 10.7×10^5 m^3.
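The DSM-differencing volume estimate in step (3) of this abstract can be sketched as follows. This is an illustrative reconstruction with made-up grid values, not the authors' processing code; only the 2 m cell size comes from the abstract.

```python
import numpy as np

cell_size = 2.0                      # post-event DSM grid spacing, metres (from the abstract)
pre = np.zeros((4, 4))               # made-up pre-event elevations (m)
post = np.zeros((4, 4))
post[1:3, 1:3] = 10.0                # pretend 4 cells gained 10 m of deposit

dh = post - pre                      # elevation change per cell
deposit = np.where(dh > 0, dh, 0.0)  # keep deposition only, ignore erosion
volume = deposit.sum() * cell_size**2   # m^3 = sum of dh times cell area
print(volume)                        # → 160.0
```

On real data the two DSMs would first be co-registered and resampled to a common grid, and the ±13 m vertical uncertainty would propagate into the volume range the authors report.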
Video Voiding Device for Diagnosing Lower Urinary Tract Dysfunction in Men.
Shokoueinejad, Mehdi; Alkashgari, Rayan; Mosli, Hisham A; Alothmany, Nazeeh; Levin, Jacob M; Webster, John G
2017-01-01
We introduce a novel diagnostic Visual Voiding Device (VVD), which has the ability to visually document urinary voiding events and calculate key voiding parameters such as instantaneous flow rate. The observation of the urinary voiding process along with the instantaneous flow rate can be used to diagnose symptoms of Lower Urinary Tract Dysfunction (LUTD) and improve evaluation of LUTD treatments by providing subsequent follow-up documentation of voiding events after treatment. The VVD enables a patient to have a urinary voiding event in privacy while a urologist monitors, processes, and documents the event from a distance. The VVD consists of two orthogonal cameras which are used to visualize urine leakage from the urethral meatus, the urine stream trajectory, and its break-up into droplets. A third, lower back camera monitors a funnel-topped cylinder where urine accumulates, containing a floater for accurate readings regardless of the urine color. Software then processes the change in level of accumulating urine in the cylinder and the visual flow properties to calculate urological parameters. Video playback allows for reexamination of the voiding process. The proposed device was tested by integrating a mass flowmeter into the setup and simultaneously measuring the instantaneous flow rate of a predetermined voided volume in order to verify the accuracy of the VVD compared to the mass flowmeter. The VVD and mass flowmeter were found to have an accuracy of ±2% and ±3% relative to full scale, respectively. A VVD clinical trial was conducted on 16 healthy male volunteers ages 23-65.
Social and Political Event Data to Support Army Requirements: Volume 1
2017-11-01
available information. Geographic data at the city level does not provide enough spatial fidelity for tactical-level analyses. Violent Events Socio-Cultural ... analyze and/or visualize the data to produce mission-relevant information. Hand-coded datasets can be more precise, but they require added time and labor ... Figure 4. Process to transform event data into mission-relevant information. Table 1. Sources of event data.
Infants’ Looking to Surprising Events: When Eye-Tracking Reveals More than Looking Time
Yeung, H. Henny; Denison, Stephanie; Johnson, Scott P.
2016-01-01
Research on infants’ reasoning abilities often relies on looking times, which are longer to surprising and unexpected visual scenes compared to unsurprising and expected ones. Few researchers have examined more precise visual scanning patterns in these scenes, and so, here, we recorded 8- to 11-month-olds’ gaze with an eye tracker as we presented a sampling event whose outcome was either surprising, neutral, or unsurprising: A red (or yellow) ball was drawn from one of three visible containers populated 0%, 50%, or 100% with identically colored balls. When measuring looking time to the whole scene, infants were insensitive to the likelihood of the sampling event, replicating failures in similar paradigms. Nevertheless, a new analysis of visual scanning showed that infants did spend more time fixating specific areas of interest as a function of the event likelihood. The drawn ball and its associated container attracted more looking than the other containers in the 0% condition, but this pattern was weaker in the 50% condition, and even less strong in the 100% condition. Results suggest that measuring where infants look may be more sensitive than simply how much looking there is to the whole scene. The advantages of eye tracking measures over traditional looking measures are discussed. PMID:27926920
Perceptual Visual Grouping under Inattention: Electrophysiological Functional Imaging
ERIC Educational Resources Information Center
Razpurker-Apfeld, Irene; Pratt, Hillel
2008-01-01
Two types of perceptual visual grouping, differing in complexity of shape formation, were examined under inattention. Fourteen participants performed a similarity judgment task concerning two successive briefly presented central targets surrounded by task-irrelevant simple and complex grouping patterns. Event-related potentials (ERPs) were…
Curriculum: Managed Visual Reality.
ERIC Educational Resources Information Center
Gueulette, David G.
This paper explores the association between the symbolized and the actualized, beginning with the prehistoric notion of a "reality double," in which no practical difference exists between pictorial representations, visual symbols, and real-life events and situations. Alchemists of the Middle Ages, with their paradoxical vision of the…
Manfredi, Mirella; Cohn, Neil; Kutas, Marta
2017-06-01
Researchers have long questioned whether information presented through different sensory modalities involves distinct or shared semantic systems. We investigated uni-sensory cross-modal processing by recording event-related brain potentials to words replacing the climactic event in a visual narrative sequence (comics). We compared Onomatopoeic words, which phonetically imitate action sounds (Pow!), with Descriptive words, which describe an action (Punch!), that were (in)congruent within their sequence contexts. Across two experiments, larger N400s appeared to Anomalous Onomatopoeic or Descriptive critical panels than to their congruent counterparts, reflecting a difficulty in semantic access/retrieval. Also, Descriptive words evinced a greater late frontal positivity compared to Onomatopoeic words, suggesting that, though plausible, they may be less predictable/expected in visual narratives. Our results indicate that uni-sensory cross-modal integration of word/letter-symbol strings within visual narratives elicits ERP patterns typically observed for written sentence processing, thereby suggesting the engagement of similar domain-independent integration/interpretation mechanisms. Copyright © 2017 Elsevier Inc. All rights reserved.
Ince, Robin A. A.; Jaworska, Katarzyna; Gross, Joachim; Panzeri, Stefano; van Rijsbergen, Nicola J.; Rousselet, Guillaume A.; Schyns, Philippe G.
2016-01-01
A key to understanding visual cognition is to determine “where”, “when”, and “how” brain responses reflect the processing of the specific visual features that modulate categorization behavior—the “what”. The N170 is the earliest Event-Related Potential (ERP) that preferentially responds to faces. Here, we demonstrate that a paradigmatic shift is necessary to interpret the N170 as the product of an information processing network that dynamically codes and transfers face features across hemispheres, rather than as a local stimulus-driven event. Reverse-correlation methods coupled with information-theoretic analyses revealed that visibility of the eyes influences face detection behavior. The N170 initially reflects coding of the behaviorally relevant eye contralateral to the sensor, followed by a causal communication of the other eye from the other hemisphere. These findings demonstrate that the deceptively simple N170 ERP hides a complex network information processing mechanism involving initial coding and subsequent cross-hemispheric transfer of visual features. PMID:27550865
Age, Intelligence, and Event-Related Brain Potentials during Late Childhood: A Longitudinal Study.
ERIC Educational Resources Information Center
Stauder, Johannes E. A.; van der Molen, Maurits W.; Molenaar, Peter C. M.
2003-01-01
Studied the relationship between event-related brain activity, age, and intelligence using a visual oddball task presented to girls at 9, 10, and 11 years of age. Findings for 26 girls suggest a qualitative shift in the relation between event-related brain activity and intelligence between 9 and 10 years of age. (SLD)
NASA Astrophysics Data System (ADS)
Garcia, S.; Karplus, M. S.; Farrell, J.; Lin, F. C.; Smith, R. B.
2017-12-01
A large seismic nodal array incorporating 133 three-component, 5-Hz geophones deployed for two weeks in early November 2015 in the Upper Geyser Basin recorded earthquake and hydrothermal activity. The University of Utah, the University of Texas at El Paso, and Yellowstone National Park collaborated to deploy Fairfield Nodal ZLand 3-C geophones concentrically centered around the Old Faithful Geyser with an average station spacing of 50 m and an aperture of 1 km. The array provided a unique dataset to investigate wave propagation through various fractures and active geysers in a hydrothermal field located over the Yellowstone hotspot. The complicated sub-surface features associated with the hydrothermal field appear to impact earthquake wave propagation in the Upper Geyser Basin and to generate seismic signals. Previous work using ambient noise cross-correlation has found an intricately fractured sub-surface that provides pathways for water beneath parts of the Upper Geyser Basin that likely feed Old Faithful and other nearby geysers and hot springs. For this study, we used the data to create visualizations of local earthquake, teleseismic earthquake, and hydrothermal events as they propagate through the array. These ground motion visualizations allow observation of wave propagation through the geyser field, which may indicate the presence of anomalous structure impacting seismic velocities and attenuation. Three teleseismic events were observed in the data: two Mw 6.9 earthquakes that occurred off the coast of Coquimbo, Chile, 9,000 km from the array, and one Mw 6.5 near the Aleutian Islands, 4,500 km from the array. All three teleseismic events exhibited strong direct P-wave arrivals and several additional phases. One local earthquake (ML 2.5) 100 km from the Upper Geyser Basin was also well recorded by the array. Spectrograms show the dominant frequencies present in the recordings of these events.
The two Mw 6.9 earthquakes in Chile occurred one hour apart and offered interesting signals, including a geyser tremor between the two events.
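The spectrogram analysis mentioned above amounts to a short-time Fourier transform of each recording. A minimal sketch follows, with the window length, overlap, and sampling rate as illustrative assumptions rather than the study's actual processing parameters:

```python
import numpy as np

# Minimal short-time-FFT spectrogram like those used to find the dominant
# frequencies of recorded events. Window length, hop, and sampling rate are
# illustrative choices, not the study's processing parameters.

def spectrogram(signal, fs, nwin=256, hop=128):
    """Magnitude spectrogram: rows are time windows, columns are frequencies."""
    win = np.hanning(nwin)
    frames = [signal[i:i + nwin] * win
              for i in range(0, len(signal) - nwin + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)), np.fft.rfftfreq(nwin, 1 / fs)

fs = 200.0                                   # Hz, a plausible geophone rate
t = np.arange(0, 10, 1 / fs)
sig = np.sin(2 * np.pi * 5 * t)              # a synthetic 5 Hz "tremor"
S, freqs = spectrogram(sig, fs)
peak = freqs[S.mean(axis=0).argmax()]        # dominant frequency estimate
print(peak)
```

The recovered peak lands on the FFT bin nearest 5 Hz, limited by the frequency resolution fs/nwin of the chosen window.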
Discovering and visualizing indirect associations between biomedical concepts
Tsuruoka, Yoshimasa; Miwa, Makoto; Hamamoto, Kaisei; Tsujii, Jun'ichi; Ananiadou, Sophia
2011-01-01
Motivation: Discovering useful associations between biomedical concepts has been one of the main goals in biomedical text-mining, and understanding their biomedical contexts is crucial in the discovery process. Hence, we need a text-mining system that helps users explore various types of (possibly hidden) associations in an easy and comprehensible manner. Results: This article describes FACTA+, a real-time text-mining system for finding and visualizing indirect associations between biomedical concepts from MEDLINE abstracts. The system can be used as a text search engine like PubMed with additional features to help users discover and visualize indirect associations between important biomedical concepts such as genes, diseases and chemical compounds. FACTA+ inherits all functionality from its predecessor, FACTA, and extends it by incorporating three new features: (i) detecting biomolecular events in text using a machine learning model, (ii) discovering hidden associations using co-occurrence statistics between concepts, and (iii) visualizing associations to improve the interpretability of the output. To the best of our knowledge, FACTA+ is the first real-time web application that offers the functionality of finding concepts involving biomolecular events and visualizing indirect associations of concepts with both their categories and importance. Availability: FACTA+ is available as a web application at http://refine1-nactem.mc.man.ac.uk/facta/, and its visualizer is available at http://refine1-nactem.mc.man.ac.uk/facta-visualizer/. Contact: tsuruoka@jaist.ac.jp PMID:21685059
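The abstract does not specify which co-occurrence statistic FACTA+ uses for hidden associations; one common choice for scoring concept associations in text mining is pointwise mutual information, sketched below under that assumption (concept names and documents are hypothetical):

```python
import math

# Hedged sketch in the spirit of co-occurrence-based association discovery
# (the actual FACTA+ statistic may differ): pointwise mutual information
# between two concepts over a corpus of abstracts, each reduced to a set
# of concept identifiers. All names here are hypothetical.

def pmi(docs, a, b):
    """docs: list of concept sets. Higher PMI = stronger association."""
    n = len(docs)
    pa = sum(a in d for d in docs) / n
    pb = sum(b in d for d in docs) / n
    pab = sum(a in d and b in d for d in docs) / n
    if pab == 0:
        return float("-inf")        # never co-occur: no direct association
    return math.log(pab / (pa * pb))

docs = [{"geneX", "diseaseY"}, {"geneX", "diseaseY"}, {"geneX"}, {"drugZ"}]
print(round(pmi(docs, "geneX", "diseaseY"), 3))
```

Indirect associations can then be surfaced by ranking concepts that score highly against two seed concepts that never co-occur directly.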
Thomas, Cyril; Didierjean, André; Kuhn, Gustav
2018-04-17
When faced with a difficult question, people sometimes work out an answer to a related, easier question without realizing that a substitution has taken place (e.g., Kahneman, 2011, Thinking, fast and slow. New York: Farrar, Straus and Giroux). In two experiments, we investigated whether this attribute substitution effect can also affect the interpretation of a simple visual event sequence. We used a magic trick called the 'Flushtration Count Illusion', which involves a technique used by magicians to give the illusion of having seen multiple cards with identical backs, when in fact only the back of one card (the bottom card) is repeatedly shown. In Experiment 1, we demonstrated that most participants are susceptible to the illusion, even if they have the visual and analytical reasoning capacity to correctly process the sequence. In Experiment 2, we demonstrated that participants construct a biased and simplified representation of the Flushtration Count by substituting some attributes of the event sequence. We discuss the psychological processes underlying this attribute substitution effect. © 2018 The British Psychological Society.
Wang, Qingcui; Guo, Lu; Bao, Ming; Chen, Lihan
2015-01-01
Auditory and visual events often happen concurrently, and how they group together can have a strong effect on what is perceived. We investigated whether/how intra- or cross-modal temporal grouping influenced the perceptual decision of otherwise ambiguous visual apparent motion. To achieve this, we juxtaposed auditory gap transfer illusion with visual Ternus display. The Ternus display involves a multi-element stimulus that can induce either of two different percepts of apparent motion: ‘element motion’ (EM) or ‘group motion’ (GM). In “EM,” the endmost disk is seen as moving back and forth while the middle disk at the central position remains stationary; while in “GM,” both disks appear to move laterally as a whole. The gap transfer illusion refers to the illusory subjective transfer of a short gap (around 100 ms) from the long glide to the short continuous glide when the two glides intercede at the temporal middle point. In our experiments, observers were required to make a perceptual discrimination of Ternus motion in the presence of concurrent auditory glides (with or without a gap inside). Results showed that a gap within a short glide imposed a remarkable effect on separating visual events, and led to a dominant perception of GM as well. The auditory configuration with gap transfer illusion triggered the same auditory capture effect. Further investigations showed that visual interval which coincided with the gap interval (50–230 ms) in the long glide was perceived to be shorter than that within both the short glide and the ‘gap-transfer’ auditory configurations in the same physical intervals (gaps). The results indicated that auditory temporal perceptual grouping takes priority over the cross-modal interaction in determining the final readout of the visual perception, and the mechanism of selective attention on auditory events also plays a role. PMID:26042055
NASA Astrophysics Data System (ADS)
Hyde, Jerald R.
2004-05-01
It is clear to those who "listen" to concert halls and evaluate their degree of acoustical success that it is quite difficult to separate the acoustical response at a given seat from the multi-modal perception of the whole event. Objective concert hall data have been collected for the purpose of finding a link with their related subjective evaluation and ultimately with the architectural correlates which produce the sound field. This exercise, while important, tends to miss the point that a concert or opera event utilizes all the senses, of which the sound field and visual stimuli are both major contributors to the experience. Objective acoustical factors point to visual input as being significant in the perception of "acoustical intimacy" and in the perception of loudness versus distance in large halls. This paper will review the evidence of visual input as a factor in what we "hear" and introduce concepts of perceptual constancy, distance perception, static and dynamic visual stimuli, and the general process of the psychology of the integrated experience. A survey of acousticians on their opinions about the auditory-visual aspects of the concert hall experience will be presented. [Work supported in part by the Veneklasen Research Foundation and Veneklasen Associates.]
The taste-visual cross-modal Stroop effect: An event-related brain potential study.
Xiao, X; Dupuis-Roy, N; Yang, X L; Qiu, J F; Zhang, Q L
2014-03-28
Event-related potentials (ERPs) were recorded to explore, for the first time, the electrophysiological correlates of the taste-visual cross-modal Stroop effect. Eighteen healthy participants were presented with a taste stimulus and a food image, and asked to categorize the image as "sweet" or "sour" by pressing the relevant button as quickly as possible. Accurate categorization of the image was faster when it was presented with a congruent taste stimulus (e.g., sour taste/image of lemon) than with an incongruent one (e.g., sour taste/image of ice cream). ERP analyses revealed a negative difference component (ND430-620) between 430 and 620 ms in the taste-visual cross-modal Stroop interference. Dipole source analysis of the difference wave (incongruent minus congruent) indicated that two generators localized in the prefrontal cortex and the parahippocampal gyrus contributed to this taste-visual cross-modal Stroop effect. This result suggests that the prefrontal cortex is associated with the process of conflict control in the taste-visual cross-modal Stroop effect. Also, we speculate that the parahippocampal gyrus is associated with the process of discordant information in the taste-visual cross-modal Stroop effect. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
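The difference-wave computation behind a component such as ND430-620 is standard ERP arithmetic: subtract the congruent waveform from the incongruent one and take the mean amplitude in a latency window. A sketch, with sampling rate, epoch layout, and amplitudes as illustrative assumptions rather than the study's data:

```python
# Sketch of a difference-wave mean-amplitude measure like ND430-620.
# The 500 Hz rate, 1 s epochs, and -2 microvolt effect are made up.

def mean_window(wave, fs, t0, t1):
    """Mean of samples between t0 and t1 (seconds) at sampling rate fs."""
    i0, i1 = round(t0 * fs), round(t1 * fs)
    seg = wave[i0:i1]
    return sum(seg) / len(seg)

fs = 500                                   # Hz (assumed)
incongruent = [0.0] * 500                  # 1 s epochs, microvolts
congruent = [0.0] * 500
for i in range(215, 310):                  # roughly 430-620 ms: more negative
    incongruent[i] = -2.0
diff = [i - c for i, c in zip(incongruent, congruent)]
nd = mean_window(diff, fs, 0.43, 0.62)     # mean amplitude of the component
print(nd)                                  # prints -2.0
```

Dipole source analysis is then applied to this difference wave rather than to either raw ERP, isolating activity specific to the incongruent condition.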
NASA Technical Reports Server (NTRS)
Ocuna, M. H.; Ogilvie, K. W.; Baker, D. N.; Curtis, S. A.; Fairfield, D. H.; Mish, W. H.
2000-01-01
The Global Geospace Science Program (GGS) is designed to improve greatly the understanding of the flow of energy, mass and momentum in the solar-terrestrial environment with particular emphasis on "Geospace". The Global Geospace Science Program is the US contribution to the International Solar-Terrestrial Physics (ISTP) Science Initiative. This CD-ROM issue describes the WIND and POLAR spacecraft, the scientific experiments carried onboard, the Theoretical and Ground Based investigations which constitute the US Global Geospace Science Program and the ISTP Data Systems which support the data acquisition and analysis effort. The International Solar-Terrestrial Physics Program (ISTP) Key Parameter Visualization Tool (KPVT), provided on the CD-ROM, was developed at the ISTP Science Planning and Operations Facility (SPOF). The KPVT is a generic software package for visualizing the key parameter data produced from all ISTP missions, interactively and simultaneously. The tool is designed to facilitate correlative displays of ISTP data from multiple spacecraft and instruments, and thus the selection of candidate events and data quality control. The software, written in IDL, includes a graphical/widget user interface, and runs on many platforms, including various UNIX workstations, Alpha/Open VMS, Macintosh (680x0 and PowerPC), and PC/Windows NT, Windows 3.1, and Windows 95.
NASA Technical Reports Server (NTRS)
Ocuna, M. H.; Ogilvie, K. W.; Baker, D. N.; Curtis, S. A.; Fairfield, D. H.; Mish, W. H.
2001-01-01
The Global Geospace Science Program (GGS) is designed to improve greatly the understanding of the flow of energy, mass and momentum in the solar-terrestrial environment with particular emphasis on "Geospace". The Global Geospace Science Program is the US contribution to the International Solar-Terrestrial Physics (ISTP) Science Initiative. This CD-ROM issue describes the WIND and POLAR spacecraft, the scientific experiments carried onboard, the Theoretical and Ground Based investigations which constitute the US Global Geospace Science Program and the ISTP Data Systems which support the data acquisition and analysis effort. The International Solar-Terrestrial Physics Program (ISTP) Key Parameter Visualization Tool (KPVT), provided on the CD-ROM, was developed at the ISTP Science Planning and Operations Facility (SPOF). The KPVT is a generic software package for visualizing the key parameter data produced from all ISTP missions, interactively and simultaneously. The tool is designed to facilitate correlative displays of ISTP data from multiple spacecraft and instruments, and thus the selection of candidate events and data quality control. The software, written in IDL, includes a graphical/widget user interface, and runs on many platforms, including various UNIX workstations, Alpha/Open VMS, Macintosh (680x0 and PowerPC), and PC/Windows NT, Windows 3.1, and Windows 95.
NASA Technical Reports Server (NTRS)
Ocuna, M. H.; Ogilvie, K. W.; Baker, D. N.; Curtis, S. A.; Fairfield, D. H.; Mish, W. H.
1999-01-01
The Global Geospace Science Program (GGS) is designed to improve greatly the understanding of the flow of energy, mass and momentum in the solar-terrestrial environment with particular emphasis on "Geospace". The Global Geospace Science Program is the US contribution to the International Solar-Terrestrial Physics (ISTP) Science Initiative. This CD-ROM issue describes the WIND and POLAR spacecraft, the scientific experiments carried onboard, the Theoretical and Ground Based investigations which constitute the US Global Geospace Science Program and the ISTP Data Systems which support the data acquisition and analysis effort. The International Solar-Terrestrial Physics Program (ISTP) Key Parameter Visualization Tool (KPVT), provided on the CD-ROM, was developed at the ISTP Science Planning and Operations Facility (SPOF). The KPVT is a generic software package for visualizing the key parameter data produced from all ISTP missions, interactively and simultaneously. The tool is designed to facilitate correlative displays of ISTP data from multiple spacecraft and instruments, and thus the selection of candidate events and data quality control. The software, written in IDL, includes a graphical/widget user interface, and runs on many platforms, including various UNIX workstations, Alpha/Open VMS, Macintosh (680x0 and PowerPC), and PC/Windows NT, Windows 3.1, and Windows 95.
Three-Dimensional Online Visualization and Engagement Tools for the Geosciences
NASA Astrophysics Data System (ADS)
Cockett, R.; Moran, T.; Pidlisecky, A.
2013-12-01
Educational tools often sacrifice interactivity in favour of scalability so they can reach more users. This compromise leads to tools that may be viewed as second tier when compared to more engaging activities performed in a laboratory; however, the resources required to deliver scalable laboratory exercises are often prohibitive. Geoscience education is well situated to benefit from interactive online learning tools that allow users to work in a 3D environment. Visible Geology (http://3ptscience.com/visiblegeology) is an innovative web-based application designed to enable visualization of geologic structures and processes through the use of interactive 3D models. The platform allows users to conceptualize difficult, yet important geologic principles in a scientifically accurate manner by developing unique geologic models. The environment allows students to interactively practice their visualization and interpretation skills by creating and interacting with their own models and terrains. Visible Geology has been designed from a user-centric perspective, resulting in a simple and intuitive interface. The platform directs students to build their own geologic models by adding beds and creating geologic events such as tilting, folding, or faulting. The level of ownership and interactivity encourages engagement, leading learners to discover geologic relationships on their own, in the context of guided assignments. In January 2013, an interactive geologic history assignment was developed for a 700-student introductory geology class at The University of British Columbia. The assignment required students to distinguish the relative age of geologic events to construct a geologic history. Traditionally this type of exercise has been taught through the use of simple geologic cross-sections showing crosscutting relationships; from these cross-sections students infer the relative age of geologic events.
In contrast, the Visible Geology assignment offers students a unique experience where they first create their own geologic events allowing them to directly see how the timing of a geologic event manifests in the model and resulting cross-sections. By creating each geologic event in the model themselves, the students gain a deeper understanding of the processes and relative order of events. The resulting models can be shared amongst students, and provide instructors with a basis for guiding inquiry to address misconceptions. The ease of use of the assignment, including automatic assessment, made this tool practical for deployment in this 700 person class. The outcome of this type of large scale deployment is that students, who would normally not experience a lab exercise, gain exposure to interactive 3D thinking. Engaging tools and software that puts the user in control of their learning experiences is critical for moving to scalable, yet engaging, online learning environments.
Spatial Visualization in Introductory Geology Courses
NASA Astrophysics Data System (ADS)
Reynolds, S. J.
2004-12-01
Visualization is critical to solving most geologic problems, which involve events and processes across a broad range of space and time. Accordingly, spatial visualization is an essential part of undergraduate geology courses. In such courses, students learn to visualize three-dimensional topography from two-dimensional contour maps, to observe landscapes and extract clues about how that landscape formed, and to imagine the three-dimensional geometries of geologic structures and how these are expressed on the Earth's surface or on geologic maps. From such data, students reconstruct the geologic history of areas, trying to visualize the sequence of ancient events that formed a landscape. To understand the role of visualization in student learning, we developed numerous interactive QuickTime Virtual Reality animations to teach students the most important visualization skills and approaches. For topography, students can spin and tilt contour-draped, shaded-relief terrains, flood virtual landscapes with water, and slice into terrains to understand profiles. To explore 3D geometries of geologic structures, they interact with virtual blocks that can be spun, sliced into, faulted, and made partially transparent to reveal internal structures. They can tilt planes to see how they interact with topography, and spin and tilt geologic maps draped over digital topography. The GeoWall system allows students to see some of these materials in true stereo. We used various assessments to research the effectiveness of these materials and to document visualization strategies students use. Our research indicates that, compared to control groups, students using such materials improve more in their geologic visualization abilities and in their general visualization abilities as measured by a standard spatial visualization test. Also, females achieve greater gains, improving their general visualization abilities to the same level as males. 
Misconceptions that students carry obstruct learning, but are largely undocumented. Many students, for example, cannot visualize that the landscape in which rock layers were deposited was different than the landscape in which the rocks are exposed today, even in the Grand Canyon.
People can understand descriptions of motion without activating visual motion brain regions
Dravida, Swethasri; Saxe, Rebecca; Bedny, Marina
2013-01-01
What is the relationship between our perceptual and linguistic neural representations of the same event? We approached this question by asking whether visual perception of motion and understanding linguistic depictions of motion rely on the same neural architecture. The same group of participants took part in two language tasks and one visual task. In task 1, participants made semantic similarity judgments with high motion (e.g., “to bounce”) and low motion (e.g., “to look”) words. In task 2, participants made plausibility judgments for passages describing movement (“A centaur hurled a spear … ”) or cognitive events (“A gentleman loved cheese …”). Task 3 was a visual motion localizer in which participants viewed animations of point-light walkers, randomly moving dots, and stationary dots changing in luminance. Based on the visual motion localizer we identified classic visual motion areas of the temporal (MT/MST and STS) and parietal cortex (inferior and superior parietal lobules). We find that these visual cortical areas are largely distinct from neural responses to linguistic depictions of motion. Motion words did not activate any part of the visual motion system. Motion passages produced a small response in the right superior parietal lobule, but none of the temporal motion regions. These results suggest that (1) as compared to words, rich language stimuli such as passages are more likely to evoke mental imagery and more likely to affect perceptual circuits and (2) effects of language on the visual system are more likely in secondary perceptual areas as compared to early sensory areas. We conclude that language and visual perception constitute distinct but interacting systems. PMID:24009592
The Convolutional Visual Network for Identification and Reconstruction of NOvA Events
DOE Office of Scientific and Technical Information (OSTI.GOV)
Psihas, Fernanda
In 2016 the NOvA experiment released results for the observation of oscillations in the νμ and νe channels as well as νe cross section measurements using neutrinos from Fermilab's NuMI beam. These and other measurements in progress rely on the accurate identification and reconstruction of the neutrino flavor and energy recorded by our detectors. This presentation describes the first application of convolutional neural network technology for event identification and reconstruction in particle detectors like NOvA. The Convolutional Visual Network (CVN) algorithm was developed for identification, categorization, and reconstruction of NOvA events. It increased the selection efficiency of the νe appearance signal by 40%, and studies show potential impact to the νμ disappearance analysis.
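This is not the actual CVN, but the convolution operation at the core of such networks can be illustrated on a toy detector hit map; the kernel and hit pattern below are arbitrary:

```python
import numpy as np

# Toy illustration of the convolution a network like CVN repeatedly applies
# to 2D detector hit maps. Not the CVN architecture; kernel values arbitrary.

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core op of a convolutional layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

hits = np.zeros((6, 6))
hits[1:5, 2] = 1.0                               # a vertical "track" of hits
vertical_filter = np.array([[1.0], [1.0], [1.0]])  # 3x1 line detector
resp = conv2d(hits, vertical_filter)
print(resp.max())                                # strongest response on track
```

Stacking many such learned filters, with nonlinearities and pooling between layers, is what lets a network classify whole events by the track and shower shapes they leave in the detector.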
Singh, Tarkeshwar; Perry, Christopher M; Herter, Troy M
2016-01-26
Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. 
Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal plane with variable depth.
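The velocity-threshold classification step described above (separating fixations, saccades, and smooth pursuits from ocular kinematics) can be sketched roughly as follows. This is a simplified planar illustration: the threshold values and the flat-plane velocity computation are assumptions for the example, not the geometrical transverse-plane correction the authors derive.

```python
import numpy as np

def classify_gaze_events(t, x, y, sacc_thresh=100.0, fix_thresh=20.0):
    """Toy velocity-threshold classifier for 2-D gaze samples.

    t, x, y : 1-D arrays of timestamps (s) and gaze coordinates (deg).
    sacc_thresh / fix_thresh : deg/s cut-offs (illustrative values,
    not the thresholds computed in the paper).
    Returns per-interval speeds and labels: 'saccade' above the high
    threshold, 'fixation' below the low one, 'pursuit' in between.
    """
    dt = np.diff(t)
    vel = np.hypot(np.diff(x), np.diff(y)) / dt  # angular speed, deg/s
    labels = np.where(vel >= sacc_thresh, "saccade",
                      np.where(vel <= fix_thresh, "fixation", "pursuit"))
    return vel, labels

# Synthetic trace: gaze holds still, jumps quickly, then drifts slowly.
t = np.arange(0, 0.05, 0.01)
x = np.array([0.0, 0.0, 3.0, 3.5, 4.0])
y = np.zeros(5)
vel, labels = classify_gaze_events(t, x, y)
```

A real pipeline would first convert screen coordinates to visual angle (the geometrical step the paper contributes) and smooth the velocity trace before thresholding.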
Toosi, Tahereh; K Tousi, Ehsan; Esteky, Hossein
2017-08-01
Time is an inseparable component of every physical event that we perceive, yet it is not clear how the brain processes time or how the neuronal representation of time affects our perception of events. Here we asked subjects to perform a visual discrimination task while we changed the temporal context in which the stimuli were presented. We collected electroencephalography (EEG) signals in two temporal contexts. In predictable blocks stimuli were presented after a constant delay relative to a visual cue, and in unpredictable blocks stimuli were presented after variable delays relative to the visual cue. Four subsecond delays of 83, 150, 400, and 800 ms were used in the predictable and unpredictable blocks. We observed that predictability modulated the power of prestimulus alpha oscillations in the parieto-occipital sites: alpha power increased in the 300-ms window before stimulus onset in the predictable blocks compared with the unpredictable blocks. This modulation only occurred in the longest delay period, 800 ms, in which predictability also improved the behavioral performance of the subjects. Moreover, learning the temporal context shaped the prestimulus alpha power: modulation of prestimulus alpha power grew during the predictable block and correlated with performance enhancement. These results suggest that the brain is able to learn the subsecond temporal context of stimuli and use this to enhance sensory processing. Furthermore, the neural correlate of this temporal prediction is reflected in the alpha oscillations. NEW & NOTEWORTHY It is not well understood how the uncertainty in the timing of an external event affects its processing, particularly at subsecond scales. Here we demonstrate how a predictable timing scheme improves visual processing. We found that learning the predictable scheme gradually shaped the prestimulus alpha power. 
These findings indicate that the human brain is able to extract implicit subsecond patterns in the temporal context of events. Copyright © 2017 the American Physiological Society.
Helbig, Carolin; Bilke, Lars; Bauer, Hans-Stefan; Böttinger, Michael; Kolditz, Olaf
2015-01-01
To achieve more realistic simulations, meteorologists develop and use models with increasing spatial and temporal resolution. The analyzing, comparing, and visualizing of resulting simulations becomes more and more challenging due to the growing amounts and multifaceted character of the data. Various data sources, numerous variables and multiple simulations lead to a complex database. Although a variety of software exists suited for the visualization of meteorological data, none of them fulfills all of the typical domain-specific requirements: support for quasi-standard data formats and different grid types, standard visualization techniques for scalar and vector data, visualization of the context (e.g., topography) and other static data, support for multiple presentation devices used in modern sciences (e.g., virtual reality), a user-friendly interface, and suitability for cooperative work. Instead of attempting to develop yet another new visualization system to fulfill all possible needs in this application domain, our approach is to provide a flexible workflow that combines different existing state-of-the-art visualization software components in order to hide the complexity of 3D data visualization tools from the end user. To complete the workflow and to enable the domain scientists to interactively visualize their data without advanced skills in 3D visualization systems, we developed a lightweight custom visualization application (MEVA - multifaceted environmental data visualization application) that supports the most relevant visualization and interaction techniques and can be easily deployed. Specifically, our workflow combines a variety of different data abstraction methods provided by a state-of-the-art 3D visualization application with the interaction and presentation features of a computer-games engine. 
Our customized application includes solutions for the analysis of multirun data, specifically with respect to data uncertainty and differences between simulation runs. In an iterative development process, our easy-to-use application was developed in close cooperation with meteorologists and visualization experts. The usability of the application has been validated with user tests. We report on how this application supports the users to prove and disprove existing hypotheses and discover new insights. In addition, the application has been used at public events to communicate research results.
From genes to brain oscillations: is the visual pathway the epigenetic clue to schizophrenia?
González-Hernández, J A; Pita-Alcorta, C; Cedeño, I R
2006-01-01
Molecular and gene expression data, and more recently mitochondrial genes and possible epigenetic regulation by non-coding genes, are revolutionizing our views on schizophrenia. Genes and epigenetic mechanisms are triggered by cell-cell interaction and by external stimuli. A number of recent clinical and molecular observations indicate that epigenetic factors may be operational in the origin of the illness. Based on these molecular insights, gene expression profiles, and the epigenetic regulation of genes, we went back to neurophysiology (brain oscillations) and found a putative role for visual experience (i.e. visual stimuli) as an epigenetic factor. The functional evidence provided here establishes a direct link between the striate and extrastriate unimodal visual cortex and the neurobiology of schizophrenia. This result supports the hypothesis that 'visual experience' has a potential role as an epigenetic factor and contributes to triggering and/or maintaining the progression of schizophrenia. In this case, candidate genes susceptible to the visual 'insult' may be located within the visual cortex, including associative areas, while the integrity of the visual pathway before it reaches the primary visual cortex is preserved. The same effect can be expected if target genes are localized within the visual pathway, which is in fact more sensitive to 'insult' during early life than the cortex per se. If this process affects gene expression at these sites, a stable sensory-specific 'insult', i.e. distorted visual information, enters the visual system and is propagated to fronto-temporo-parietal multimodal areas even from early maturation periods.
The difference in the timing of postnatal neuroanatomical events between such areas and the primary visual cortex in humans (with the former reaching the same developmental landmarks later in life than the latter) is 'optimal' for establishing an abnormal 'cell communication' mediated by the visual system that may further interfere with local physiology. In this context, the strategy for identifying target genes needs to be rearranged and redirected toward visual-related genes. In addition, psychophysical studies combining functional neuroimaging and electrophysiology are strongly recommended in the search for epigenetic clues that would enable gene association studies in schizophrenia.
New pinhole sulcus implant for the correction of irregular corneal astigmatism.
Trindade, Claudio C; Trindade, Bruno C; Trindade, Fernando C; Werner, Liliana; Osher, Robert; Santhiago, Marcony R
2017-10-01
To evaluate the effect on visual acuity of the implantation of a new intraocular pinhole device (Xtrafocus) in cases of irregular corneal astigmatism with significant visual impairment. University of São Paulo, São Paulo, Brazil. Prospective case series. Pseudophakic eyes of patients with irregular corneal astigmatism were treated with the pinhole device. The causes of irregular corneal astigmatism were keratoconus, post radial keratotomy (RK), post-penetrating keratoplasty (PKP), and traumatic corneal laceration. The device was implanted in the ciliary sulcus in a piggyback configuration to minimize the effect of corneal aberrations. Preoperative and postoperative visual parameters were compared. The main outcome variables were manifest refraction, uncorrected and corrected distance and near visual acuities, subjective patient satisfaction, and intraoperative and postoperative adverse events and complications. Twenty-one patients (ages 35 to 85 years) were included. There was statistically significant improvement in uncorrected and corrected (CDVA) distance visual acuities. The median CDVA improved from 20/200 (range 20/800 to 20/60) preoperatively to 20/50 (range 20/200 to 20/20) in the first month postoperatively and remained stable over the following months. Manifest refraction remained unchanged, while a subjective visual performance questionnaire revealed perception of improvement in all the tested working distances. No major complication was observed. One case presented with decentration of the device, which required an additional surgical intervention. The intraocular pinhole device performed well in patients with irregular astigmatism caused by keratoconus, RK, PKP, and traumatic corneal laceration. There was marked improvement in visual function, with high patient satisfaction. Copyright © 2017 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Live cell imaging of the HIV-1 life cycle
Campbell, Edward M.; Hope, Thomas J.
2010-01-01
Technology developed in the past 10 years has dramatically increased the ability of researchers to directly visualize and measure various stages of the HIV type 1 (HIV-1) life cycle. In many cases, imaging-based approaches have filled critical gaps in our understanding of how certain aspects of viral replication occur in cells. Specifically, live cell imaging has allowed a better understanding of dynamic, transient events that occur during HIV-1 replication, including the steps involved in viral fusion, trafficking of the viral nucleoprotein complex in the cytoplasm and even the nucleus during infection and the formation of new virions from an infected cell. In this review, we discuss how researchers have exploited fluorescent microscopy methodologies to observe and quantify these events occurring during the replication of HIV-1 in living cells. PMID:18977142
Renault, Victor; Tost, Jörg; Pichon, Fabien; Wang-Renault, Shu-Fang; Letouzé, Eric; Imbeaud, Sandrine; Zucman-Rossi, Jessica; Deleuze, Jean-François; How-Kit, Alexandre
2017-01-01
Copy number variations (CNV) include net gains or losses of part or whole chromosomal regions. They differ from copy neutral loss of heterozygosity (cn-LOH) events, which do not induce any net change in the copy number and are often associated with uniparental disomy. These phenomena have long been reported to be associated with disease, particularly cancer. Losses/gains of genomic regions are often correlated with lower/higher gene expression. On the other hand, loss of heterozygosity (LOH) and cn-LOH are common events in cancer and may be associated with the loss of a functional tumor suppressor gene. Therefore, identifying recurrent CNV and cn-LOH events can be important as they may highlight common biological components and give insights into the development or mechanisms of a disease. However, no currently available tools allow a comprehensive whole-genome visualization of recurrent CNVs and cn-LOH in groups of samples with absolute quantification of the aberrations, leading to the loss of potentially important information. To overcome these limitations, we developed aCNViewer (Absolute CNV Viewer), a visualization tool for absolute CNVs and cn-LOH across a group of samples. aCNViewer proposes three graphical representations: dendrograms, bi-dimensional heatmaps showing chromosomal regions sharing similar abnormality patterns, and quantitative stacked histograms facilitating the identification of recurrent absolute CNVs and cn-LOH. We illustrated aCNViewer using publicly available Affymetrix SNP Array data from hepatocellular carcinomas (HCCs) (Fig 1A). Regions 1q and 8q present a similar percentage of total gains but significantly different copy number gain categories (p-value of 0.0103 with a Fisher exact test), validated by another cohort of HCCs (p-value of 5.6e-7) (Fig 2B).
aCNViewer is implemented in python and R and is available with a GNU GPLv3 license on GitHub https://github.com/FJD-CEPH/aCNViewer and Docker https://hub.docker.com/r/fjdceph/acnviewer/. aCNViewer@cephb.fr.
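The region-level comparison described above (similar total gain percentages for 1q and 8q, but different distributions across copy number gain categories) rests on a Fisher exact test. A minimal SciPy sketch follows; the 2×2 contingency table here is hypothetical, not the paper's cohort counts:

```python
from scipy.stats import fisher_exact

# Hypothetical counts of samples per copy-number-gain category
# (rows: region 1q vs 8q; columns: low-level vs high-level gain).
# These numbers are illustrative, not data from the paper.
table = [[8, 2],
         [1, 5]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
```

With more than two gain categories per region, the comparison generalizes to an r×c table, for which SciPy or R implementations of the exact test (or a chi-squared approximation) can be used.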
Lott, Gus K; Johnson, Bruce R; Bonow, Robert H; Land, Bruce R; Hoy, Ronald R
2009-01-01
We present g-PRIME, a software-based tool for physiology data acquisition, analysis, and stimulus generation in education and research. This software was developed in an undergraduate neurophysiology course and strongly influenced by instructor and student feedback. g-PRIME is a free, stand-alone Windows application coded and "compiled" in Matlab (does not require a Matlab license). g-PRIME supports many data acquisition interfaces, from the PC sound card to expensive high-throughput calibrated equipment. The program is designed as a software oscilloscope with standard trigger modes, multi-channel visualization controls, and data logging features. Extensive analysis options allow real-time and offline filtering of signals, multi-parameter threshold-and-window based event detection, and two-dimensional display of a variety of parameters including event time, energy density, maximum FFT frequency component, max/min amplitudes, and inter-event rate and intervals. The software also correlates detected events with another simultaneously acquired source (event-triggered average) in real time or offline. g-PRIME supports parameter histogram production and a variety of publication-quality graphics outputs. A major goal of this software is to merge powerful engineering acquisition and analysis tools with a biological approach to studies of nervous system function.
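Two of the analysis features described above, threshold-based event detection and the event-triggered average, can be sketched in a few lines. This is a generic NumPy illustration, not g-PRIME's Matlab implementation; the threshold and window parameters are arbitrary:

```python
import numpy as np

def detect_events(signal, threshold):
    """Indices where the signal crosses the threshold upward."""
    above = signal >= threshold
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

def event_triggered_average(channel, events, pre, post):
    """Mean of windows cut from a second channel around each event."""
    windows = [channel[i - pre:i + post] for i in events
               if i - pre >= 0 and i + post <= len(channel)]
    return np.mean(windows, axis=0)

# Synthetic data: a spike train on channel A, a ramp on channel B.
a = np.zeros(100)
a[[20, 50, 80]] = 1.0
b = np.arange(100.0)
events = detect_events(a, 0.5)                      # -> [20, 50, 80]
eta = event_triggered_average(b, events, pre=2, post=3)
```

g-PRIME's window-based detector additionally rejects events whose amplitude falls outside an upper bound; that refinement would add a second comparison per sample.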
Sports Stars: Analyzing the Performance of Astronomers at Visualization-based Discovery
NASA Astrophysics Data System (ADS)
Fluke, C. J.; Parrington, L.; Hegarty, S.; MacMahon, C.; Morgan, S.; Hassan, A. H.; Kilborn, V. A.
2017-05-01
In this data-rich era of astronomy, there is a growing reliance on automated techniques to discover new knowledge. The role of the astronomer may change from being a discoverer to being a confirmer. But what do astronomers actually look at when they distinguish between “sources” and “noise?” What are the differences between novice and expert astronomers when it comes to visual-based discovery? Can we identify elite talent or coach astronomers to maximize their potential for discovery? By looking to the field of sports performance analysis, we consider an established, domain-wide approach, where the expertise of the viewer (i.e., a member of the coaching team) plays a crucial role in identifying and determining the subtle features of gameplay that provide a winning advantage. As an initial case study, we investigate whether the SportsCode performance analysis software can be used to understand and document how an experienced H I astronomer makes discoveries in spectral data cubes. We find that the process of timeline-based coding can be applied to spectral cube data by mapping spectral channels to frames within a movie. SportsCode provides a range of easy-to-use methods for annotation, including feature-based codes and labels, text annotations associated with codes, and image-based drawing. The outputs, including instance movies that are uniquely associated with coded events, provide the basis for a training program or team-based analysis that could be used in unison with discipline-specific analysis software. In this coordinated approach to visualization and analysis, SportsCode can act as a visual notebook, recording the insight and decisions in partnership with established analysis methods. Alternatively, in situ annotation and coding of features would be a valuable addition to existing and future visualization and analysis packages.
Douglas, Graeme; Pavey, Sue; Corcoran, Christine; Eperjesi, Frank
2010-11-01
Network 1000 is a UK-based panel survey of a representative sample of adults with registered visual impairment, with the aim of gathering information about people's opinions and circumstances. Participants were interviewed (Survey 1, n = 1007: 2005; Survey 2, n = 922: 2006/07) on a range of topics including the nature of their eye condition, details of other health issues, use of low vision aids (LVAs) and their experiences in eye clinics. Eleven percent of individuals did not know the name of their eye condition. Seventy percent of participants reported having long-term health problems or disabilities in addition to visual impairment and 43% reported having hearing difficulties. Seventy one percent reported using LVAs for reading tasks. Participants who had become registered as visually impaired in the previous 8 years (n = 395) were asked questions about non-medical information received in the eye clinic around that time. Reported information received included advice about 'registration' (48%), low vision aids (45%) and social care routes (43%); 17% reported receiving no information. While 70% of people were satisfied with the information received, this was lower for those of working age (56%) compared with retirement age (72%). Those who recalled receiving additional non-medical information and advice at the time of registration also recalled their experiences more positively. Whilst caution should be applied to the accuracy of recall of past events, the data provide a valuable insight into the types of information and support that visually impaired people feel they would benefit from in the eye clinic. © 2010 The Authors. Ophthalmic and Physiological Optics © 2010 The College of Optometrists.
A novel examination of exposure patterns and posttraumatic stress after a university mass murder.
Liu, Sabrina R; Kia-Keating, Maryam
2018-03-05
Occurring at an alarming rate in the United States, mass violence has been linked to posttraumatic stress symptoms (PTSS) in both direct victims and community members who are indirectly exposed. Identifying what distinct exposure patterns exist and their relation to later PTSS has important clinical implications. The present study determined classes of exposure to an event of mass violence, and if PTSS differed across classes. First- and second-year college students (N = 1,189) participated in a confidential online survey following a mass murder at their university, which assessed event exposure and PTSS 3 months later. Latent class analysis (LCA) was used to empirically determine distinct classes of exposure patterns and links between class membership and PTSS. The final model yielded 4 classes: minimal exposure (55.5% of sample), auditory exposure (29.4% of sample), visual exposure (10% of sample), and interpersonal exposure (5% of sample). More severe direct exposure (i.e., the visual exposure class) was associated with significantly higher levels of PTSS than the auditory exposure or minimal exposure classes, as was the interpersonal exposure class. There were no significant differences in PTSS between the auditory exposure and minimal exposure classes or the visual exposure and interpersonal exposure classes. Results point to the differential impact of exposure categories, and provide empirical evidence for distinguishing among auditory, visual, and interpersonal exposures to events of mass violence on college campuses. Clinical implications suggest that visual and interpersonal exposure may warrant targeted efforts following mass violence. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Shen, Ji; Linn, Marcia C.
2011-08-01
What trajectories do students follow as they connect their observations of electrostatic phenomena to atomic-level visualizations? We designed an electrostatics unit, using the knowledge integration framework to help students link observations and scientific ideas. We analyze how learners integrate ideas about charges, charged particles, energy, and observable events. We compare learning enactments in a typical school and a magnet school in the USA. We use pre-tests, post-tests, embedded notes, and delayed post-tests to capture the trajectories of students' knowledge integration. We analyze how visualizations help students grapple with abstract electrostatics concepts such as induction. We find that overall students gain more sophisticated ideas. They can interpret dynamic, interactive visualizations, and connect charge- and particle-based explanations to interpret observable events. Students continue to have difficulty in applying the energy-based explanation.
Visual traffic jam analysis based on trajectory data.
Wang, Zuchao; Lu, Min; Yuan, Xiaoru; Zhang, Junping; van de Wetering, Huub
2013-12-01
In this work, we present an interactive system for visual analysis of urban traffic congestion based on GPS trajectories. For these trajectories we develop strategies to extract and derive traffic jam information. After cleaning the trajectories, they are matched to a road network. Subsequently, traffic speed on each road segment is computed and traffic jam events are automatically detected. Spatially and temporally related events are concatenated into so-called traffic jam propagation graphs. These graphs form a high-level description of a traffic jam and its propagation in time and space. Our system provides multiple views for visually exploring and analyzing the traffic condition of a large city as a whole, on the level of propagation graphs, and on road segment level. Case studies with 24 days of taxi GPS trajectories collected in Beijing demonstrate the effectiveness of our system.
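The propagation-graph step can be illustrated with a toy sketch: jam events are linked when they occur on adjacent road segments and overlap in time. The event tuples, the adjacency relation, and the simple overlap rule below are simplifying assumptions for the example, not the system's actual concatenation criteria:

```python
from collections import defaultdict

def build_propagation_graph(events, adjacency):
    """Directed links between jam events on neighbouring segments.

    events    : list of (event_id, segment, t_start, t_end)
    adjacency : dict mapping a segment to its neighbouring segments
    An edge a -> b means jam b starts on a neighbouring segment
    while jam a is still ongoing (a simplified propagation rule).
    """
    graph = defaultdict(list)
    for a_id, a_seg, a_start, a_end in events:
        for b_id, b_seg, b_start, _ in events:
            if (a_id != b_id
                    and b_seg in adjacency.get(a_seg, ())
                    and a_start <= b_start <= a_end):
                graph[a_id].append(b_id)
    return dict(graph)

# Three jams: the jam on segment A spreads to adjacent segment B;
# the later jam on C is unrelated in time.
jams = [(1, "A", 0, 10), (2, "B", 5, 15), (3, "C", 20, 30)]
roads = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
graph = build_propagation_graph(jams, roads)
```

The connected components of such a graph then describe one jam's extent and spread through the network over time.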
NASA Astrophysics Data System (ADS)
Kim, Kyung Chun; Lee, Sang Joon
2011-06-01
The 14th International Symposium on Flow Visualization (ISFV14) was held in Daegu, Korea, on 21-24 June 2010. There were 304 participants from 17 countries. The state of the art in many aspects of flow visualization was presented and discussed, and a total of 243 papers from 19 countries were presented. Two special lectures and four invited lectures, 48 paper sessions and one poster session were held in five session rooms and in a lobby over four days. Among the paper sessions, those on 'biological flows', 'micro/nano fluidics', 'PIV/PTV' and 'compressible and sonic flows' received great attention from the participants of ISFV14. Special events included presentations of 'The Asanuma Award' and 'The Leonardo Da Vinci Award' to prominent contributors. Awards for photos and movies were given to three scientists for their excellence in flow visualizations. Sixteen papers were selected by the Scientific Committee of ISFV14. After the standard peer review process of this journal, six papers were finally accepted for publication. We wish to thank the editors of MST for making it possible to publish this special feature from ISFV14. We also thank the authors for their careful and insightful work and cooperation in the preparation of revised papers. It will be our pleasure if readers appreciate the hot topics in flow visualization research as a result of this special feature. We also hope that the progress in flow visualization will create new research fields. The 15th International Symposium on Flow Visualization will be held in Minsk, Belarus in 2012. We would like to express sincere thanks to the staff at IOP Publishing for their kind support.
Stekelenburg, Jeroen J; Vroomen, Jean
2012-01-01
In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV - V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that this N1 suppression was greater for the spatially congruent stimuli. A very early audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
The Reappearance of Venus Observed 8 October 2015
NASA Astrophysics Data System (ADS)
Dunham, David W.; Dunham, Joan B.
2018-01-01
The reappearance of Venus on October 8, 2015 offered a unique opportunity to attempt observation of the ashen light of Venus as the unlit side of Venus emerged from behind the dark side of the Moon. The dark side of Venus would be offered to observers without interference from the bright side of Venus or of the Moon. Observations were made from Alice Springs, Australia, visually with a 20-cm Schmidt-Cassegrain and with a low-light-level surveillance camera on a 25-cm reflector. No evidence of the dark side was noted by the visual observer, and the video shows little indication of Venus prior to the bright-side reappearance. The conclusion reached is that the ashen light, as it was classically defined, is not observable visually or with small telescopes in the visual regime. The presentation describes the prediction, observation technique, and various analyses by the authors and others to draw conclusions from the data. To date, the authors have been unable to locate any reports of others attempting to observe this unique event. That is a pity since, not only was it interesting as an attempt to verify past observations of the ashen light, it was also a visually stunning event.
Visual and Experiential Learning Opportunities through Geospatial Data
NASA Astrophysics Data System (ADS)
Gardiner, N.; Bulletins, S.
2007-12-01
Global observation data from satellites are essential for both research and education about Earth's climate because they help convey the temporal and spatial scales inherent to the subject, which are beyond most people's experience. Experts in the development of visualizations using spatial data distinguish the process of learning through data exploration from the process of learning by absorbing a story told from beginning to end. The former requires the viewer to absorb complex spatial and temporal dynamics inherent to visualized data and therefore is a process best undertaken by those familiar with the data and processes represented. The latter requires that the viewer understand the intended presentation of concepts, so storytelling can be employed to educate viewers with varying backgrounds and familiarity with a given subject. Three examples of climate science education, drawn from the current science program Science Bulletins (American Museum of Natural History, New York, USA), demonstrate the power of visualized global earth observations for climate science education. The first example seeks to explain the potential for sea level rise on a global basis. A short feature film includes the visualized, projected effects of sea level rise at local to global scales; this visualization complements laboratory and field observations of glacier retreat and paleoclimatic reconstructions based on fossilized coral reef analysis, each of which is also depicted in the film. The narrative structure keeps learners focused on discrete scientific concepts. The second example utilizes half-hourly cloud observations to demonstrate weather and climate patterns to audiences on a global basis. Here, the scientific messages are qualitatively simpler, but viewers must deduce their own complex visual understanding of the visualized data. 
Finally, we present plans for distributing climate science education products via mediated public events whereby participants learn from climate and geovisualization experts working collaboratively. This last example provides an opportunity for deep exploration of patterns and processes in a live setting and makes full use of complementary talents, including computer science, internet-enabled data sharing, remote sensing image processing, and meteorology. These innovative examples from informal educators serve as powerful pedagogical models to consider for the classroom of the future.
50 CFR 218.174 - Requirements for monitoring and reporting.
Code of Federal Regulations, 2011 CFR
2011-10-01
...-based surveys shall be designed to maximize detections of marine mammals near mission activity event. (2... Navy to implement, at a minimum, the monitoring activities summarized below: (1) Visual Surveys: (i) The Holder of this Authorization shall conduct a minimum of 2 special visual surveys per year to...
Behavioral and Physiological Findings of Gender Differences in Global-Local Visual Processing
ERIC Educational Resources Information Center
Roalf, David; Lowery, Natasha; Turetsky, Bruce I.
2006-01-01
Hemispheric asymmetries in global-local visual processing are well-established, as are gender differences in cognition. Although hemispheric asymmetry presumably underlies gender differences in cognition, the literature on gender differences in global-local processing is sparse. We employed event related brain potential (ERP) recordings during…
In situ visualization for large-scale combustion simulations.
Yu, Hongfeng; Wang, Chaoli; Grout, Ray W; Chen, Jacqueline H; Ma, Kwan-Liu
2010-01-01
As scientific supercomputing moves toward petascale and exascale levels, in situ visualization stands out as a scalable way for scientists to view the data their simulations generate. Such a full picture is particularly crucial for capturing and understanding highly intermittent transient phenomena, such as ignition and extinction events in turbulent combustion.
ERIC Educational Resources Information Center
Sullivan, Megan
2011-01-01
Visualization is the art of turning information--concepts, processes, events, structures, and trends--into images that support our understanding. Carter Emmart is a visualizer of science. He is the director of astrovisualization for the American Museum of Natural History (AMNH). In this interview, Emmart describes what it is like to be a…
The Earliest Electrophysiological Correlate of Visual Awareness?
ERIC Educational Resources Information Center
Koivisto, Mika; Lahteenmaki, Mikko; Sorensen, Thomas Alrik; Vangkilde, Signe; Overgaard, Morten; Revonsuo, Antti
2008-01-01
To examine the neural correlates and timing of human visual awareness, we recorded event-related potentials (ERPs) in two experiments while the observers were detecting a grey dot that was presented near subjective threshold. ERPs were averaged for conscious detections of the stimulus (hits) and nondetections (misses) separately. Our results…
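The conditional averaging the authors describe, separate ERPs for conscious detections (hits) and nondetections (misses), reduces to grouping single-trial epochs by response label and averaging. A minimal sketch with hypothetical numbers (the function name and data are illustrative, not the authors' pipeline):

```python
import numpy as np

def average_erps(epochs, labels):
    """Average single-trial epochs separately for each condition label.

    epochs: array of shape (n_trials, n_samples) -- one EEG epoch per trial
    labels: condition name per trial, e.g. "hit" or "miss"
    Returns a dict mapping each condition to its mean waveform (the ERP).
    """
    epochs = np.asarray(epochs, dtype=float)
    labels = np.asarray(labels)
    return {cond: epochs[labels == cond].mean(axis=0)
            for cond in np.unique(labels)}

# Hypothetical data: 4 trials x 5 time samples
trials = [[1, 2, 3, 2, 1],
          [3, 2, 1, 2, 3],
          [0, 0, 0, 0, 0],
          [2, 2, 2, 2, 2]]
outcomes = ["hit", "hit", "miss", "miss"]
erps = average_erps(trials, outcomes)
```

In practice the epochs would be baseline-corrected, artifact-rejected multichannel EEG, but the grouping logic is the same.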
ERIC Educational Resources Information Center
Brewin, Chris R.; Gregory, James D.; Lipton, Michelle; Burgess, Neil
2010-01-01
Involuntary images and visual memories are prominent in many types of psychopathology. Patients with posttraumatic stress disorder, other anxiety disorders, depression, eating disorders, and psychosis frequently report repeated visual intrusions corresponding to a small number of real or imaginary events, usually extremely vivid, detailed, and…
Why Visual Literacy: Consciousness and Convention
ERIC Educational Resources Information Center
Rezabek, Landra L.
2005-01-01
In this article, the author discusses the intentions of the October 2005 Association for Educational Communications & Technology (AECT) conference. She explains that the conference will be a shared event between the AECT members and the participants of the 37th annual meeting of the International Visual Literacy Association (IVLA), a stalwart…
[Pituitary apoplexy. Report of 25 patients].
Khaldi, M; Ben Hamouda, K; Jemel, H; Kallel, J; Zemmel, I
2006-09-01
A series of 25 patients with a clinical diagnosis of pituitary apoplexy (PA) is reviewed. It included 14 men and 11 women aged 20 to 79 years (mean age: 54 years). Twenty-two patients did not know that they had a pituitary tumor when the apoplexy occurred. A precipitating event was found in 3 cases. Symptoms and signs ranged from isolated ocular paresis to deep coma. Seventeen patients experienced a decrease in visual acuity. CT scan and MRI showed a pituitary adenoma in all cases; a hemorrhage was also present in 10 of the 24 CT scans and in all 8 MRIs performed. Twenty patients underwent surgery, 18 of them by a transsphenoidal approach. A complete recovery of visual acuity was observed in 75% of patients operated on within the week following the onset of symptoms, and in 56% of patients operated on later. There was no case of complete visual recovery among the blind patients. Pituitary apoplexy is a clinical concept that applies only to symptomatic cases. It is generally a complication of a pituitary adenoma that is in most cases previously undiagnosed. There are different degrees of severity; PA can even be life-threatening. The principal aim of surgery in the acute phase is the improvement of visual prognosis. In our series, patients who were blind or whose visual loss had lasted more than a week had a poorer prognosis.
Using multisensory cues to facilitate air traffic management.
Ngo, Mary K; Pierce, Russell S; Spence, Charles
2012-12-01
In the present study, we sought to investigate whether auditory and tactile cuing could be used to facilitate a complex, real-world air traffic management scenario. Auditory and tactile cuing provides an effective means of improving both the speed and accuracy of participants' performance in a variety of laboratory-based visual target detection and identification tasks. A low-fidelity air traffic simulation task was used in which participants monitored and controlled aircraft. The participants had to ensure that the aircraft landed or exited at the correct altitude, speed, and direction and that they maintained a safe separation from all other aircraft and boundaries. The performance measures recorded included en route time, handoff delay, and conflict resolution delay (the performance measure of interest). In a baseline condition, the aircraft in conflict was highlighted in red (visual cue), and in the experimental conditions, this standard visual cue was accompanied by a simultaneously presented auditory, vibrotactile, or audiotactile cue. Participants responded significantly more rapidly, but no less accurately, to conflicts when presented with an additional auditory or audiotactile cue than with either a vibrotactile or visual cue alone. Auditory and audiotactile cues have the potential for improving operator performance by reducing the time it takes to detect and respond to potential visual target events. These results have important implications for the design and use of multisensory cues in air traffic management.
Terminal weather information management
NASA Technical Reports Server (NTRS)
Lee, Alfred T.
1990-01-01
Since the mid-1960's, microburst/windshear events have caused at least 30 aircraft accidents and incidents and have killed more than 600 people in the United States alone. This study evaluated alternative means of alerting an airline crew to the presence of microburst/windshear events in the terminal area. Of particular interest was the relative effectiveness of conventional and data link ground-to-air transmissions of ground-based radar and low-level windshear sensing information on microburst/windshear avoidance. The Advanced Concepts Flight Simulator located at Ames Research Center was employed in a line-oriented simulation of a scheduled round-trip airline flight from Salt Lake City to Denver Stapleton Airport. Actual weather en route and in the terminal area was simulated using recorded data. The microburst/windshear incident of July 11, 1988 was re-created for the Denver area operations. Six experienced airline crews currently flying scheduled routes served as test subjects in each of three groups: (1) a baseline group, which received alerts via conventional air traffic control (ATC) tower transmissions; (2) an experimental group, which received alerts/events displayed visually and aurally in the cockpit six miles (approx. 2 min.) from the microburst event; and (3) an additional experimental group, which received displayed alerts/events 23 linear miles (approx. 7 min.) from the microburst event. Analyses of crew communications and decision times showed a marked improvement in both situation awareness and decision-making with visually displayed ground-based radar information. Substantial reductions in the variability of decision times among crews in the visual display groups were also found. These findings suggest that crew performance will be enhanced and individual differences among crews due to differences in training and prior experience are significantly reduced by providing real-time, graphic display of terminal weather hazards.
Soto, Axel J; Zerva, Chrysoula; Batista-Navarro, Riza; Ananiadou, Sophia
2018-04-15
Pathway models are valuable resources that help us understand the various mechanisms underpinning complex biological processes. Their curation is typically carried out through manual inspection of published scientific literature to find information relevant to a model, which is a laborious and knowledge-intensive task. Furthermore, models curated manually cannot be easily updated and maintained with new evidence extracted from the literature without automated support. We have developed LitPathExplorer, a visual text analytics tool that integrates advanced text mining, semi-supervised learning and interactive visualization, to facilitate the exploration and analysis of pathway models using statements (i.e. events) extracted automatically from the literature and organized according to levels of confidence. LitPathExplorer supports pathway modellers and curators alike by: (i) extracting events from the literature that corroborate existing models with evidence; (ii) discovering new events which can update models; and (iii) providing a confidence value for each event that is automatically computed based on linguistic features and article metadata. Our evaluation of event extraction showed a precision of 89% and a recall of 71%. Evaluation of our confidence measure, when used for ranking sampled events, showed an average precision ranging between 61 and 73%, which can be improved to 95% when the user is involved in the semi-supervised learning process. Qualitative evaluation using pair analytics based on the feedback of three domain experts confirmed the utility of our tool within the context of pathway model exploration. LitPathExplorer is available at http://nactem.ac.uk/LitPathExplorer_BI/. sophia.ananiadou@manchester.ac.uk. Supplementary data are available at Bioinformatics online.
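The ranking evaluation reported above, average precision over confidence-ranked sampled events, can be sketched in a few lines; the 0/1 relevance flags below are hypothetical curator judgments, not the LitPathExplorer evaluation data:

```python
def average_precision(ranked_relevance):
    """Average precision of a ranked list.

    ranked_relevance: 0/1 flags ordered by descending confidence,
    where 1 marks an event judged correct by a curator.
    """
    hits, precisions = 0, []
    for k, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)  # precision at each relevant rank
    return sum(precisions) / len(precisions) if precisions else 0.0

# Hypothetical ranking: correct events at ranks 1, 2, and 4
ap = average_precision([1, 1, 0, 1, 0])
```

A confidence measure that pushes correct events toward the top of the ranking raises this score, which is what the semi-supervised feedback loop in the tool aims to do.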
Chen, Andrew C H; Tang, Yongqiang; Rangaswamy, Madhavi; Wang, Jen C; Almasy, Laura; Foroud, Tatiana; Edenberg, Howard J; Hesselbrock, Victor; Nurnberger, John; Kuperman, Samuel; O'Connor, Sean J; Schuckit, Marc A; Bauer, Lance O; Tischfield, Jay; Rice, John P; Bierut, Laura; Goate, Alison; Porjesz, Bernice
2009-04-05
Evidence suggests the P3 amplitude of the event-related potential and its underlying superimposed event-related oscillations (EROs), primarily in the theta (4-5 Hz) and delta (1-3 Hz) frequencies, as endophenotypes for the risk of alcoholism and other disinhibitory disorders. Major neurochemical substrates contributing to theta and delta rhythms and P3 involve strong GABAergic, cholinergic and glutamatergic system interactions. The aim of this study was to test the potential associations between single nucleotide polymorphisms (SNPs) in glutamate receptor genes and ERO quantitative traits. GRM8 was selected because it maps at chromosome 7q31.3-q32.1 under the peak region where we previously identified significant linkage (peak LOD = 3.5) using a genome-wide linkage scan of the same phenotype (event-related theta band for the target visual stimuli). Neural activities recorded from scalp electrodes during a visual oddball task in which rare target elicited P3s were analyzed in a subset of the Collaborative Study on the Genetics of Alcoholism (COGA) sample comprising 1,049 Caucasian subjects from 209 families (with 472 DSM-IV alcohol dependent individuals). The family-based association test (FBAT) detected significant association (P < 0.05) with multiple SNPs in the GRM8 gene and event-related theta power to target visual stimuli, and also with alcohol dependence, even after correction for multiple comparisons by false discovery rate (FDR). Our results suggest that variation in GRM8 may be involved in modulating event-related theta oscillations during information processing and also in vulnerability to alcoholism. These findings underscore the utility of electrophysiology and the endophenotype approach in the genetic study of psychiatric disorders. (c) 2008 Wiley-Liss, Inc.
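The multiple-comparison correction mentioned above is FDR control; assuming the standard Benjamini-Hochberg procedure, a minimal sketch with hypothetical p-values (not the actual COGA association results):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of p-values significant under Benjamini-Hochberg FDR control."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = len(p)
    # Step-up thresholds: k/m * alpha for the k-th smallest p-value
    thresholds = alpha * (np.arange(1, m + 1) / m)
    below = p[order] <= thresholds
    significant = np.zeros(m, dtype=bool)
    if below.any():
        cutoff = np.max(np.nonzero(below)[0])  # largest k with p_(k) <= k/m * alpha
        significant[order[:cutoff + 1]] = True  # reject all hypotheses up to rank k
    return significant

# Hypothetical per-SNP association p-values
mask = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20], alpha=0.05)
```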
Brain Activity During the Encoding, Retention, and Retrieval of Stimulus Representations
de Zubicaray, Greig I.; McMahon, Katie; Wilson, Stephen J.; Muthiah, Santhi
2001-01-01
Studies of delayed nonmatching-to-sample (DNMS) performance following lesions of the monkey cortex have revealed a critical circuit of brain regions involved in forming memories and retaining and retrieving stimulus representations. Using event-related functional magnetic resonance imaging (fMRI), we measured brain activity in 10 healthy human participants during performance of a trial-unique visual DNMS task using novel barcode stimuli. The event-related design enabled the identification of activity during the different phases of the task (encoding, retention, and retrieval). Several brain regions identified by monkey studies as being important for successful DNMS performance showed selective activity during the different phases, including the mediodorsal thalamic nucleus (encoding), ventrolateral prefrontal cortex (retention), and perirhinal cortex (retrieval). Regions showing sustained activity within trials included the ventromedial and dorsal prefrontal cortices and occipital cortex. The present study shows the utility of investigating performance on tasks derived from animal models to assist in the identification of brain regions involved in human recognition memory. PMID:11584070
Towards a high sensitivity small animal PET system based on CZT detectors (Conference Presentation)
NASA Astrophysics Data System (ADS)
Abbaszadeh, Shiva; Levin, Craig
2017-03-01
Small animal positron emission tomography (PET) is a biological imaging technology that allows non-invasive interrogation of internal molecular and cellular processes and mechanisms of disease. New PET molecular probes with high specificity are under development to target, detect, visualize, and quantify subtle molecular and cellular processes associated with cancer, heart disease, and neurological disorders. However, the limited uptake of these targeted probes leads to significant reduction in signal. There is a need to advance the performance of small animal PET system technology to reach its full potential for molecular imaging. Our goal is to assemble a small animal PET system based on CZT detectors and to explore methods to enhance its photon sensitivity. In this work, we reconstruct an image from a phantom using a two-panel subsystem consisting of six CZT crystals in each panel. For image reconstruction, coincidence events with energy between 450 and 570 keV were included. We are developing an algorithm to improve sensitivity of the system by including multiple interaction events.
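The stated 450-570 keV acceptance criterion amounts to a per-coincidence energy filter; a minimal sketch (the event energies and the requirement that both interactions fall in the window are illustrative assumptions):

```python
def in_energy_window(event_pair, low=450.0, high=570.0):
    """Keep a coincidence only if both interaction energies (keV) fall in the window."""
    return all(low <= e <= high for e in event_pair)

# Hypothetical coincidence events: (energy_detector_1, energy_detector_2) in keV
coincidences = [(511.0, 505.2), (511.0, 340.0), (460.5, 569.0), (600.0, 511.0)]
accepted = [pair for pair in coincidences if in_energy_window(pair)]
```

Widening this window (or summing multiple-interaction events before the cut, as the authors propose) recovers more counts at the price of accepting more scattered photons.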
Zlotnick, Cheryl; Lawental, Maayan; Pud, Dorit
2017-03-01
This study examined the profiles of symptoms and health-related quality of life (QOL) of women in substance abuse treatment, comparing those with higher versus lower histories of adverse childhood events (ACE), and those with versus without current pain. Adult women in outpatient substance abuse treatment (n = 30) completed questionnaires (cross-sectional study) on topics including drug use, adverse childhood events (ACE), QOL, functional ability, current pain, and depression. Women with pain indicated significant differences in emotional (p < 0.05) and functional ability (p < 0.01), but no significant differences were found between women with high versus low levels of ACE. Yet, radar plots of women with both current pain and high levels of ACE, versus those without, portrayed a distinctive profile indicating high levels of anxiety and depression. Rather than a checklist, such visual composites of symptoms illustrate areas of concern in the overall status of women in substance abuse treatment.

King, Michael J.; Sanchez, Roberto J.; Moss, William C.
2013-03-19
A passive blast pressure sensor for detecting blast overpressures of at least a predetermined minimum threshold pressure. The blast pressure sensor includes a piston-cylinder arrangement with one end of the piston having a detection surface exposed to a blast event monitored medium through one end of the cylinder and the other end of the piston having a striker surface positioned to impact a contact stress sensitive film that is positioned against a strike surface of a rigid body, such as a backing plate. The contact stress sensitive film is of a type which changes color in response to at least a predetermined minimum contact stress which is defined as a product of the predetermined minimum threshold pressure and an amplification factor of the piston. In this manner, a color change in the film arising from impact of the piston accelerated by a blast event provides visual indication that a blast overpressure encountered from the blast event was not less than the predetermined minimum threshold pressure.
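The trigger arithmetic described, where the film's minimum contact stress is the product of the threshold pressure and the piston's amplification factor, can be sketched directly; the numerical values below are hypothetical, not the sensor's specification:

```python
def min_contact_stress(threshold_pressure, amplification_factor):
    """Contact stress at which the film must change color, per the described design:
    the product of the minimum threshold pressure and the piston amplification."""
    return threshold_pressure * amplification_factor

def film_triggered(blast_pressure, threshold_pressure):
    """Color change indicates the blast overpressure reached the threshold."""
    return blast_pressure >= threshold_pressure

# Hypothetical values: 15 psi threshold, 40x piston area amplification
stress = min_contact_stress(threshold_pressure=15.0, amplification_factor=40.0)
```

The amplification factor lets a modest overpressure produce a contact stress high enough to mark a stress-sensitive film, which is what makes the sensor fully passive.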
StreamExplorer: A Multi-Stage System for Visually Exploring Events in Social Streams.
Wu, Yingcai; Chen, Zhutian; Sun, Guodao; Xie, Xiao; Cao, Nan; Liu, Shixia; Cui, Weiwei
2017-10-18
Analyzing social streams is important for many applications, such as crisis management. However, the considerable diversity, increasing volume, and high dynamics of social streams of large events continue to be significant challenges that must be overcome to ensure effective exploration. We propose a novel framework by which to handle complex social streams on a budget PC. This framework features two components: 1) an online method to detect important time periods (i.e., subevents), and 2) a tailored GPU-assisted Self-Organizing Map (SOM) method, which clusters the tweets of subevents stably and efficiently. Based on the framework, we present StreamExplorer to facilitate the visual analysis, tracking, and comparison of a social stream at three levels. At a macroscopic level, StreamExplorer uses a new glyph-based timeline visualization, which presents a quick multi-faceted overview of the ebb and flow of a social stream. At a mesoscopic level, a map visualization is employed to visually summarize the social stream from either a topical or geographical aspect. At a microscopic level, users can employ interactive lenses to visually examine and explore the social stream from different perspectives. Two case studies and a task-based evaluation are used to demonstrate the effectiveness and usefulness of StreamExplorer.
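The SOM clustering at the heart of the framework can be illustrated with a plain CPU sketch on hypothetical 2-D "tweet feature" vectors; the grid size, learning rate, and neighborhood schedule are illustrative choices, not StreamExplorer's GPU implementation:

```python
import numpy as np

def train_som(data, grid_w=3, grid_h=3, epochs=20, lr=0.5, seed=0):
    """Train a tiny Self-Organizing Map; returns weights of shape (grid_h, grid_w, n_features)."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    weights = rng.random((grid_h, grid_w, data.shape[1]))
    # Grid coordinates, used for neighborhood distances on the map
    coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                  indexing="ij"), axis=-1)
    for epoch in range(epochs):
        sigma = max(1.0 * (1 - epoch / epochs), 0.3)  # shrinking neighborhood
        for x in data:
            # Best-matching unit: the grid cell whose weight vector is closest to x
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Pull every cell toward x, scaled by its grid distance to the BMU
            grid_d = np.linalg.norm(coords - np.array(bmu), axis=-1)
            h = np.exp(-(grid_d ** 2) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
    return weights

def assign_cluster(weights, x):
    """Map a vector to its best-matching grid cell."""
    dists = np.linalg.norm(weights - np.asarray(x, dtype=float), axis=-1)
    return np.unravel_index(np.argmin(dists), dists.shape)

# Two hypothetical, well-separated groups of 2-D "tweet features"
group_a = [[0.0, 0.1], [0.1, 0.0], [0.05, 0.05]]
group_b = [[1.0, 0.9], [0.9, 1.0], [0.95, 0.95]]
weights = train_som(group_a + group_b)
```

A GPU version vectorizes the same BMU search and neighborhood update across many tweets at once, which is what makes clustering each subevent on a budget PC feasible.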
Model system for plant cell biology: GFP imaging in living onion epidermal cells
NASA Technical Reports Server (NTRS)
Scott, A.; Wyatt, S.; Tsou, P. L.; Robertson, D.; Allen, N. S.
1999-01-01
The ability to visualize organelle localization and dynamics is very useful in studying cellular physiological events. Until recently, this has been accomplished using a variety of staining methods. However, staining can give inaccurate information due to nonspecific staining, diffusion of the stain or through toxic effects. The ability to target green fluorescent protein (GFP) to various organelles allows for specific labeling of organelles in vivo. The disadvantages of GFP thus far have been the time and money involved in developing stable transformants or maintaining cell cultures for transient expression. In this paper, we present a rapid transient expression system using onion epidermal peels. We have localized GFP to various cellular compartments (including the cell wall) to illustrate the utility of this method and to visualize dynamics of these compartments. The onion epidermis has large, living, transparent cells in a monolayer, making them ideal for visualizing GFP. This method is easy and inexpensive, and it allows for testing of new GFP fusion proteins in a living tissue to determine deleterious effects and the ability to express before stable transformants are attempted.
Recapitulation of Emotional Source Context during Memory Retrieval
Bowen, Holly J.; Kensinger, Elizabeth A.
2016-01-01
Recapitulation involves the reactivation of cognitive and neural encoding processes at retrieval. In the current study, we investigated the effects of emotional valence on recapitulation processes. Participants encoded neutral words presented on a background face or scene that was negative, positive or neutral. During retrieval, studied and novel neutral words were presented alone (i.e., without the scene or face) and participants were asked to make a remember, know or new judgment. Both the encoding and retrieval tasks were completed in the fMRI scanner. Conjunction analyses were used to reveal the overlap between encoding and retrieval processing. These results revealed that, compared to positive or neutral contexts, words that were recollected and previously encoded in a negative context showed greater encoding-to-retrieval overlap, including in the ventral visual stream and amygdala. Interestingly, the visual stream recapitulation was not enhanced within regions that specifically process faces or scenes but rather extended broadly throughout visual cortices. These findings elucidate how memories for negative events can feel more vivid or detailed than positive or neutral memories. PMID:27923474
NASA Astrophysics Data System (ADS)
Morrison, S. M.; Downs, R. T.; Golden, J. J.; Pires, A.; Fox, P. A.; Ma, X.; Zednik, S.; Eleish, A.; Prabhu, A.; Hummer, D. R.; Liu, C.; Meyer, M.; Ralph, J.; Hystad, G.; Hazen, R. M.
2016-12-01
We have developed a comprehensive database of copper (Cu) mineral characteristics. These data include crystallographic, paragenetic, chemical, locality, age, structural complexity, and physical property information for the 689 Cu mineral species approved by the International Mineralogical Association (rruff.info/ima). Synthesis of this large, varied dataset allows for in-depth exploration of statistical trends and visualization techniques. With social network analysis (SNA) and cluster analysis of minerals, we create sociograms and chord diagrams. SNA visualizations illustrate the relationships and connectivity between mineral species, which often form cliques associated with rock type and/or geochemistry. Using mineral ecology statistics, we analyze mineral-locality frequency distribution and predict the number of missing mineral species, visualized with accumulation curves. By assembling 2-dimensional KLEE diagrams of co-existing elements in minerals, we illustrate geochemical trends within a mineral system. To explore mineral age and chemical oxidation state, we create skyline diagrams and compare trends with varying chemistry. These trends illustrate mineral redox changes through geologic time and correlate with significant geologic occurrences, such as the Great Oxidation Event (GOE) or Wilson Cycles.
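The connectivity that the sociograms and chord diagrams visualize, minerals linked by shared localities, reduces to co-occurrence counting; a minimal sketch with hypothetical mineral-locality records (the real database draws on far richer attributes):

```python
from collections import Counter
from itertools import combinations

# Hypothetical records: mineral name -> set of localities where it occurs
records = {
    "chalcopyrite": {"loc1", "loc2", "loc3"},
    "malachite":    {"loc2", "loc3"},
    "azurite":      {"loc3"},
    "cuprite":      {"loc4"},
}

# Edge weight = number of localities two Cu minerals share; these weighted
# edges are what a sociogram or chord diagram would render.
edges = Counter()
for m1, m2 in combinations(sorted(records), 2):
    shared = len(records[m1] & records[m2])
    if shared:
        edges[(m1, m2)] = shared
```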
EDS V26 Containment Vessel Explosive Qualification Test Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crocker, Robert W.; Haroldsen, Brent L.; Stofleth, Jerome H.
2013-11-01
The objective of the test was to qualify the vessel for its intended use by subjecting it to a 1.25 times overtest. The criteria for success are that the measured strains do not exceed the calculated strains from the vessel analysis, there is no significant additional plastic strain on subsequent tests at the rated design load (shakedown), and there is no significant damage to the vessel and attached hardware that affects form, fit, or function. Testing of the V25 vessel in 2011 established a precedent for testing V26 [2]. As with V25, two tests were performed to satisfy this objective. The first test used 9 pounds of Composition C-4 (11.25 lbs. TNT-equivalent), which is 125 percent of the design basis load. The second test used 7.2 pounds of Composition C-4 (9 lbs. TNT-equivalent), which is 100 percent of the design basis load. The first test provided the required overtest while the second test served to demonstrate shakedown and the absence of additional plastic deformation. Unlike the V25 vessel, which was mounted in a shipping cradle during testing, the V26 vessel was mounted on the EDS P2U3 trailer prior to testing. Visual inspections of the EDS vessel, surroundings, and diagnostics were completed before and after each test event. This visual inspection included analyzing the seals, fittings, and interior surfaces of the EDS vessel and documenting any abnormalities or damages. Photographs were used to visually document vessel conditions and findings before and after each test event.
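The charge arithmetic in the report, a Composition C-4 mass times a 1.25 TNT-equivalence factor, can be checked in a few lines; the factor is inferred from the figures quoted above (9 lbs C-4 = 11.25 lbs TNT-equivalent):

```python
def tnt_equivalent(c4_lbs, factor=1.25):
    """TNT-equivalent yield of a C-4 charge; 1.25 is the equivalence
    factor implied by the quoted figures (9 lbs C-4 -> 11.25 lbs TNT)."""
    return c4_lbs * factor

overtest_yield = tnt_equivalent(9.0)   # first test: 11.25 lbs TNT-equivalent
design_yield = tnt_equivalent(7.2)     # second test: 9 lbs TNT-equivalent
overtest_ratio = overtest_yield / design_yield  # 125% of the design basis load
```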
In a Time of Change: Integrating the Arts and Humanities with Climate Change Science in Alaska
NASA Astrophysics Data System (ADS)
Leigh, M.; Golux, S.; Franzen, K.
2011-12-01
The arts and humanities have a powerful capacity to create lines of communication between the public, policy and scientific spheres. A growing network of visual and performing artists, writers and scientists has been actively working together since 2007 to integrate scientific and artistic perspectives on climate change in interior Alaska. These efforts have involved field workshops and collaborative creative processes culminating in public performances and a visual art exhibit. The most recent multimedia event was entitled In a Time of Change: Envisioning the Future, and challenged artists and scientists to consider future scenarios of climate change. This event included a public performance featuring original theatre, modern dance, Alaska Native Dance, poetry and music that was presented concurrently with an art exhibit featuring original works by 24 Alaskan visual artists. A related effort targeted K-12 students through an early college course entitled Climate Change and Creative Expression, which was offered to high school students at a predominantly Alaska Native charter school and integrated climate change science, creative writing, theatre and dance. Our program at Bonanza Creek Long Term Ecological Research (LTER) site is just one of many successful efforts to integrate arts and humanities with science within and beyond the NSF LTER Program. The efforts of various LTER sites to engage the arts and humanities with science, the public and policymakers have successfully generated excitement, facilitated mutual understanding, and promoted meaningful dialogue on issues facing science and society. The future outlook for integration of arts and humanities with science appears promising, with increasing interest from artists, scientists and scientific funding agencies.
NASA Astrophysics Data System (ADS)
de Groot, R. M.; Benthien, M. L.
2006-12-01
The Southern California Earthquake Center (SCEC) has been developing groundbreaking computer modeling capabilities for studying earthquakes. These visualizations were initially shared within the scientific community but have recently gained visibility via television news coverage in Southern California. These types of visualizations are becoming pervasive in the teaching and learning of concepts related to earth science. Computers have opened up a whole new world for scientists working with large data sets, and students can benefit from the same opportunities (Libarkin & Brick, 2002). Earthquakes are ideal candidates for visualization products: they cannot be predicted, are completed in a matter of seconds, occur deep in the earth, and the time between events can be on a geologic time scale. For example, the southern part of the San Andreas fault has not seen a major earthquake since about 1690, setting the stage for an earthquake as large as magnitude 7.7 -- the "big one." Since no one has experienced such an earthquake, visualizations can help people understand the scale of such an event. Accordingly, SCEC has developed a revolutionary simulation of this earthquake, with breathtaking visualizations that are now being distributed. According to Gordin and Pea (1995), visualization should theoretically make science accessible, provide means for authentic inquiry, and lay the groundwork to understand and critique scientific issues. This presentation will discuss how the new SCEC visualizations and other earthquake imagery achieve these results, how they fit within the context of major themes and study areas in science communication, and how the efficacy of these tools can be improved.
The cradle of causal reasoning: newborns' preference for physical causality.
Mascalzoni, Elena; Regolin, Lucia; Vallortigara, Giorgio; Simion, Francesca
2013-05-01
Perception of mechanical (i.e. physical) causality, in terms of a cause-effect relationship between two motion events, appears to be a powerful mechanism in our daily experience. In spite of a growing interest in the earliest causal representations, the role of experience in the origin of this sensitivity is still a matter of dispute. Here, we asked the question about the innate origin of causal perception, never tested before at birth. Three experiments were carried out to investigate sensitivity at birth to some visual spatiotemporal cues present in a launching event. Newborn babies, only a few hours old, showed that they significantly preferred a physical causality event (i.e. Michotte's Launching effect) when matched to a delay event (i.e. a delayed launching; Experiment 1) or to a non-causal event completely identical to the causal one except for the order of the displacements of the two objects involved which was swapped temporally (Experiment 3). This preference for the launching event, moreover, also depended on the continuity of the trajectory between the objects involved in the event (Experiment 2). These results support the hypothesis that the human system possesses an early available, possibly innate basic mechanism to compute causality, such a mechanism being sensitive to the additive effect of certain well-defined spatiotemporal cues present in the causal event independently of any prior visual experience. © 2013 Blackwell Publishing Ltd.
Fong, Allan; Harriott, Nicole; Walters, Donna M; Foley, Hanan; Morrissey, Richard; Ratwani, Raj R
2017-08-01
Many healthcare providers have implemented patient safety event reporting systems to better understand and improve patient safety. Reviewing and analyzing these reports is often time consuming and resource intensive because of both the quantity of reports and length of free-text descriptions in the reports. Natural language processing (NLP) experts collaborated with clinical experts on a patient safety committee to assist in the identification and analysis of medication related patient safety events. Different NLP algorithmic approaches were developed to identify four types of medication related patient safety events and the models were compared. Well performing NLP models were generated to categorize medication related events into pharmacy delivery delays, dispensing errors, Pyxis discrepancies, and prescriber errors with receiver operating characteristic areas under the curve of 0.96, 0.87, 0.96, and 0.81 respectively. We also found that modeling the brief without the resolution text generally improved model performance. These models were integrated into a dashboard visualization to support the patient safety committee review process. We demonstrate the capabilities of various NLP models and the use of two text inclusion strategies at categorizing medication related patient safety events. The NLP models and visualization could be used to improve the efficiency of patient safety event data review and analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
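To give the flavor of categorizing free-text event reports, here is a toy bag-of-words category scorer; it is not the authors' NLP models, and the training snippets are hypothetical (only the category idea, e.g. dispensing errors, comes from the abstract):

```python
from collections import Counter

# Hypothetical training snippets for two of the report categories
TRAINING = {
    "delivery_delay":   ["pharmacy delivery arrived late",
                         "medication delivery delayed from pharmacy"],
    "dispensing_error": ["wrong dose dispensed to patient",
                         "dispensed incorrect medication strength"],
}

def _bag(text):
    """Lowercased bag-of-words for a snippet."""
    return Counter(text.lower().split())

# One aggregate word profile per category
PROFILES = {cat: sum((_bag(t) for t in texts), Counter())
            for cat, texts in TRAINING.items()}

def classify(report_text):
    """Pick the category whose training vocabulary overlaps the report most."""
    words = _bag(report_text)
    def overlap(cat):
        return sum(min(words[w], PROFILES[cat][w]) for w in words)
    return max(PROFILES, key=overlap)

label = classify("patient received wrong dose dispensed in error")
```

Real systems replace the overlap score with learned weights over far richer features, which is where the reported 0.81-0.96 areas under the curve come from.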
Earth History databases and visualization - the TimeScale Creator system
NASA Astrophysics Data System (ADS)
Ogg, James; Lugowski, Adam; Gradstein, Felix
2010-05-01
The "TimeScale Creator" team (www.tscreator.org) and the Subcommission on Stratigraphic Information (stratigraphy.science.purdue.edu) of the International Commission on Stratigraphy (www.stratigraphy.org) have worked with numerous geoscientists and geological surveys to prepare reference datasets for global and regional stratigraphy. All events are currently calibrated to Geologic Time Scale 2004 (Gradstein et al., 2004, Cambridge Univ. Press) and Concise Geologic Time Scale (Ogg et al., 2008, Cambridge Univ. Press); but the array of intercalibrations enables dynamic adjustment to future numerical age scales and interpolation methods. The main "global" database contains over 25,000 events/zones from paleontology, geomagnetics, sea-level and sequence stratigraphy, igneous provinces, bolide impacts, plus several stable isotope curves and image sets. Several regional datasets are provided in conjunction with geological surveys, with numerical ages interpolated using a similar flexible inter-calibration procedure. For example, a joint program with Geoscience Australia has compiled an extensive Australian regional biostratigraphy and a full array of basin lithologic columns with each formation linked to public lexicons of all Proterozoic through Phanerozoic basins - nearly 500 columns of over 9,000 data lines plus hot-cursor links to oil-gas reference wells. Other datapacks include New Zealand biostratigraphy and basin transects (ca. 200 columns), Russian biostratigraphy, British Isles regional stratigraphy, Gulf of Mexico biostratigraphy and lithostratigraphy, high-resolution Neogene stable isotope curves and ice-core data, human cultural episodes, and Circum-Arctic stratigraphy sets. The growing library of datasets is designed for viewing and chart-making in the free "TimeScale Creator" JAVA package. This visualization system produces a screen display of the user-selected time-span and the selected columns of geologic time scale information. 
The user can change the vertical scale, column widths, fonts, colors, titles, ordering, range chart options and many other features. Mouse-activated pop-ups provide additional information on columns and events, including links to external Internet sites. The graphics can be saved as SVG (scalable vector graphics) or PDF files for direct import into Adobe Illustrator or other common drafting software. Users can load additional regional datapacks, and create and upload their own datasets. The "Pro" version has additional dataset-creation tools, output options and the ability to edit and re-save merged datasets. The databases and visualization package are envisioned as a convenient reference tool, chart-production assistant, and a window into the geologic history of our planet.
Near Real Time Analytics of Human Sensor Networks in the Realm of Big Data
NASA Astrophysics Data System (ADS)
Aulov, O.; Halem, M.
2012-12-01
With the prolific development of social media, emergency responders have an increasing interest in harvesting social media from outlets such as Flickr, Twitter, and Facebook, in order to assess the scale and specifics of extreme events including wild fires, earthquakes, terrorist attacks, oil spills, etc. A number of experimental platforms have successfully been implemented to demonstrate the utilization of social media data in extreme events, including Twitter Earthquake Detector, which relied on tweets for earthquake monitoring; AirTwitter, which used tweets for air quality reporting; and our previous work, using Flickr data as boundary value forcings to improve the forecast of oil beaching in the aftermath of the Deepwater Horizon oil spill. The majority of these platforms addressed a narrow, specific type of emergency and harvested data from a particular outlet. We demonstrate an interactive framework for monitoring, mining and analyzing a plethora of heterogeneous social media sources for a diverse range of extreme events. Our framework consists of three major parts: a real-time social media aggregator, a data processing and analysis engine, and a web-based visualization and reporting tool. The aggregator gathers tweets, Facebook comments from fan pages, Google+ posts, forum discussions, blog posts (such as LiveJournal and Blogger.com), images from photo-sharing platforms (such as Flickr, Picasa), videos from video-sharing platforms (YouTube, Vimeo), and so forth. The data processing and analysis engine pre-processes the aggregated information and annotates it with geolocation and sentiment information. In many cases, the metadata of the social media posts does not contain geolocation information; however, a human reader can easily infer from the body of the text which location is discussed. We automate this task using Named Entity Recognition (NER) algorithms and a gazetteer service. 
The visualization and reporting tool offers a web-based, user-friendly interface with time-series analysis and plotting tools, geospatial visualization tools with interactive maps, and cause-effect inference tools. We demonstrate how we address the big-data challenges of monitoring, aggregating and analyzing vast amounts of social media data in near real time. As a result, our framework not only allows emergency responders to augment their situational awareness with social media information, but also allows them to extract geophysical data and incorporate it into their analysis models.
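The geolocation step can be sketched (greatly simplified, and not the authors' pipeline) as matching text against a gazetteer; a real system would first run NER to extract candidate place names and then query a gazetteer service. The place names and coordinates below are hypothetical:

```python
# Hypothetical in-memory gazetteer mapping place names to (lat, lon);
# a production system would query a real gazetteer service instead.
GAZETTEER = {
    "new orleans": (29.95, -90.07),
    "gulf of mexico": (25.0, -90.0),
    "san francisco": (37.77, -122.42),
}

def geolocate(text):
    """Return (place, coords) pairs for every gazetteer entry found in the text."""
    lowered = text.lower()
    return [(place, coords) for place, coords in GAZETTEER.items()
            if place in lowered]

post = "Oil sheen spotted near New Orleans, drifting into the Gulf of Mexico"
print(geolocate(post))
```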
[Driving ability and fitness to drive in people with diabetes mellitus].
Seeger, Rolf; Lehmann, Roger
2011-05-01
Chronic sequelae of diabetes that could potentially affect driving include the following: retinopathy with associated impaired visual acuity, loss of peripheral vision and poor dark adaptation; neuropathy that may affect lower limb functions needed for safe driving; and acute events, including transient cognitive dysfunction and loss of consciousness related to hypo- or hyperglycemia. Hyperglycemia does not suddenly incapacitate drivers; however, its occurrence often leads to tiredness, blurred vision, decreased visual acuity and adjustment of treatment, which may precipitate hypoglycemia. The side effects of acute hypoglycemia are of particular concern, as they include slowing of both cognitive and motor functions. Hypoglycemia while driving is the most important complication in persons treated with insulin, sulfonylureas or glinides. It can be prevented, however, by measuring blood glucose before and every 60 to 90 minutes during driving, by keeping sugary snacks (carbohydrates) in the vehicle, and by taking carbohydrates when glucose levels fall below 5 mmol/l. For patients treated with insulin and sulfonylureas/glinides, it is of utmost importance for the treating physician to regularly discuss successful strategies for preventing hypoglycemia, and thus accidents, while driving. People with diabetes treated with insulin, sulfonylureas or glinides are not allowed to drive a bus, taxi, or truck (commercial driving). Under special circumstances (evaluation and treatment by a diabetologist/endocrinologist, avoidance of hypoglycemia for three months, and frequent glucose measurements), an exception to this rule can be granted for truck and cab drivers (after a thorough licensing examination).
Brain representations for acquiring and recalling visual-motor adaptations
Bédard, Patrick; Sanes, Jerome N.
2014-01-01
Humans readily learn and remember new motor skills, a process that likely underlies adaptation to changing environments. During adaptation, the brain develops new sensory-motor relationships, and if consolidation occurs, a memory of the adaptation can be retained for extended periods. Considerable evidence exists that multiple brain circuits participate in acquiring new sensory-motor memories, though the networks engaged in recalling these and whether the same brain circuits participate in their formation and recall has less clarity. To address these issues, we assessed brain activation with functional MRI while young healthy adults learned and recalled new sensory-motor skills by adapting to world-view rotations of visual feedback that guided hand movements. We found cerebellar activation related to adaptation rate, likely reflecting changes related to overall adjustments to the visual rotation. A set of parietal and frontal regions, including inferior and superior parietal lobules, premotor area, supplementary motor area and primary somatosensory cortex, exhibited non-linear learning-related activation that peaked in the middle of the adaptation phase. Activation in some of these areas, including the inferior parietal lobule, intra-parietal sulcus and somatosensory cortex, likely reflected actual learning, since the activation correlated with learning after-effects. Lastly, we identified several structures having recall-related activation, including the anterior cingulate and the posterior putamen, since the activation correlated with recall efficacy. These findings demonstrate dynamic aspects of brain activation patterns related to formation and recall of a sensory-motor skill, such that non-overlapping brain regions participate in distinctive behavioral events. PMID:25019676
New data products available at the IRIS DMC
NASA Astrophysics Data System (ADS)
Trabant, C. M.; Bahavar, M.; Hutko, A.; Karstens, R.
2010-12-01
The research supported by the raw data from the observatories of NSF's EarthScope project is having a tremendous impact on our understanding of the structure and geologic history of North America, how and why earthquakes occur, and many other areas of modern geophysics. The IRIS Data Management Center (DMC) is the primary access point for EarthScope/USArray data and has embarked on a new effort to produce higher-level data products beyond raw time series in order to assist the community in extracting the highest value possible from these data. These new products will serve many purposes: stepping-stones for future research projects, data visualizations, research result comparisons and compilation of unique data sets as well as outreach material. To ensure community involvement in the development of new products, the requirements and priorities are reviewed and approved by the IRIS Data Products Working Group (DPWG). Many new products are now available at the IRIS DMC. These include two event-based products generated in near real time. 1) USArray Ground Motion Visualizations, routinely generated animations showing both the vertical and horizontal seismic wavefields sweeping across the USArray Transportable Array from earthquakes around the world. 2) Event Plots, a suite of figures automatically generated following all M6.0+ events which include phase aligned record sections, global body wave envelope stacks, regional network vespagrams and source-time functions. 3) Earth Model Collaboration, a new web repository for community-supplied regional and global tomography models with the ability to preview, request and compare models. 4) EARS, the EarthScope Automated Receiver Survey, developed at the University of South Carolina, aims to calculate crustal thickness and bulk crustal properties beneath USArray stations as well as many other broadband stations whose data are archived at the IRIS DMC. 
5) Archiving and distribution of Princeton 3D SEM and 1D synthetic seismograms generated for all Global CMT events. 6) Archiving and distribution of GPS displacement time series produced by the Plate Boundary Observatory. Other data products are under consideration and will be moved to the development pipeline once approved by the IRIS DPWG. Feedback on existing products and ideas for new products are welcome at any time.
Yang, Lixia; Xia, Chunmei; Mu, Yuming; Guan, Lina; Wang, Chunmei; Tang, Qi; Verocai, Flavia Gomes; Fonseca, Lea Mirian Barbosa da; Shih, Ming Chi
2016-03-01
Real time myocardial contrast echocardiography (RTMCE) is a cost-effective and simple method to quantify coronary flow reserve (CFR). We aimed to determine the value of RTMCE to predict cardiac events after percutaneous coronary intervention (PCI). We studied myocardial blood volume (A), velocity (β), flow indexes (MBF, A × β), and vasodilator reserve (stress-to-rest ratios) in 36 patients with acute coronary syndrome (ACS) who underwent PCI. CFR (MBF at stress/MBF at rest) was calculated for each patient. Perfusion scores were used for visual interpretation by MCE and correlation with TIMI flow grade. In qualitative RTMCE assessment, post-PCI visual perfusion scores were higher than pre-PCI (Z = -7.26, P < 0.01). Among 271 arteries with TIMI flow grade 3 post-PCI, 72 (36%) did not reach visual perfusion score 1. The β- and A × β-reserve of the abnormal segments supplied by obstructed arteries increased after PCI compared with pre-PCI values (P < 0.01). Patients with adverse cardiac events had significantly lower β- and lower A × β-reserve than patients without adverse cardiac events. In the former group, the CFR was ≥ 1.5 both pre- and post-PCI. CFR estimation by RTMCE can quantify myocardial perfusion in patients with ACS who underwent PCI. The parameters β-reserve and CFR combined might predict cardiac events at follow-up. © 2015, Wiley Periodicals, Inc.
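The quantities in this record combine as MBF = A × β and CFR = MBF at stress divided by MBF at rest; a minimal sketch with invented perfusion values (arbitrary units, not data from the study):

```python
def mbf(a, beta):
    """Myocardial blood flow index: blood volume (A) times velocity (beta)."""
    return a * beta

def cfr(a_rest, beta_rest, a_stress, beta_stress):
    """Coronary flow reserve: MBF at stress divided by MBF at rest."""
    return mbf(a_stress, beta_stress) / mbf(a_rest, beta_rest)

# Hypothetical perfusion values for one patient (arbitrary units)
print(round(cfr(a_rest=8.0, beta_rest=0.5, a_stress=10.0, beta_stress=1.0), 2))  # → 2.5
```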
The sense of agency is action-effect causality perception based on cross-modal grouping.
Kawabe, Takahiro; Roseboom, Warrick; Nishida, Shin'ya
2013-07-22
Sense of agency, the experience of controlling external events through one's actions, stems from contiguity between action- and effect-related signals. Here we show that human observers link their action- and effect-related signals using a computational principle common to cross-modal sensory grouping. We first report that the detection of a delay between tactile and visual stimuli is enhanced when both stimuli are synchronized with separate auditory stimuli (experiment 1). This occurs because the synchronized auditory stimuli hinder the potential grouping between tactile and visual stimuli. We subsequently demonstrate an analogous effect on observers' key press as an action and a sensory event. This change is associated with a modulation in sense of agency; namely, sense of agency, as evaluated by apparent compressions of action-effect intervals (intentional binding) or subjective causality ratings, is impaired when both participant's action and its putative visual effect events are synchronized with auditory tones (experiments 2 and 3). Moreover, a similar role of action-effect grouping in determining sense of agency is demonstrated when the additional signal is presented in the modality identical to an effect event (experiment 4). These results are consistent with the view that sense of agency is the result of general processes of causal perception and that cross-modal grouping plays a central role in these processes.
Disaster Response Modeling Through Discrete-Event Simulation
NASA Technical Reports Server (NTRS)
Wang, Jeffrey; Gilmer, Graham
2012-01-01
Organizations today are required to plan against a rapidly changing, high-cost environment. This is especially true for first responders to disasters and other incidents, where critical decisions must be made in a timely manner to save lives and resources. Discrete-event simulations enable organizations to make better decisions by visualizing complex processes and the impact of proposed changes before they are implemented. A discrete-event simulation using Simio software has been developed to effectively analyze and quantify the imagery capabilities of domestic aviation resources conducting relief missions. This approach has helped synthesize large amounts of data to better visualize process flows, manage resources, and pinpoint capability gaps and shortfalls in disaster response scenarios. Simulation outputs and results have supported decision makers in the understanding of high risk locations, key resource placement, and the effectiveness of proposed improvements.
Cycowicz, Yael M; Friedman, David
2007-01-01
The orienting response, the brain's reaction to novel and/or out of context familiar events, is reflected by the novelty P3 of the ERP. Contextually novel events also engender high rates of recognition memory. We examined, under incidental and intentional conditions, the effects of visual symbol familiarity on the novelty P3 recorded during an oddball task and on the parietal episodic memory (EM) effect, an index of recollection. Repetition of familiar, but not unfamiliar, symbols elicited a reduction in the novelty P3. Better recognition performance for the familiar symbols was associated with a robust parietal EM effect, which was absent for the unfamiliar symbols in the incidental task. These data demonstrate that processing of novel events depends on expectation and whether stimuli have preexisting representations in long-term semantic memory.
Using a Cyclical Diagram to Visualize the Events of the Ovulatory Menstrual Cycle
ERIC Educational Resources Information Center
Ho, Ivan Shun; Parmar, Navneet K.
2014-01-01
Over the past 10 years, college textbooks in human anatomy and physiology have typically presented the events of the ovulatory menstrual cycle in a linear format, with time in days shown on the x-axis, and hormone levels, follicular development, and uterine lining on the y-axis. In addition, the various events are often shown over a 28-day cycle,…
Nagata, Takashi; Kimura, Yoshinari; Ishii, Masami
2012-04-01
The Great East Japan Earthquake occurred on March 11, 2011. In the first 10 days after the event, information about radiation risks from the Fukushima Daiichi nuclear plant was unavailable, and the disaster response, including deployment of disaster teams, was delayed. Beginning on March 17, 2011, the Japan Medical Association used a geographic information system (GIS) to visualize the risk of radiation exposure in Fukushima. This information facilitated the decision to deploy disaster medical response teams on March 18, 2011.
Trace-Driven Debugging of Message Passing Programs
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Hood, Robert; Lopez, Louis; Bailey, David (Technical Monitor)
1998-01-01
In this paper we report on features added to a parallel debugger to simplify the debugging of parallel message passing programs. These features include replay, setting consistent breakpoints based on interprocess event causality, a parallel undo operation, and communication supervision. These features all use trace information collected during the execution of the program being debugged. We used a number of different instrumentation techniques to collect traces. We also implemented trace displays using two different trace visualization systems. The implementation was tested on an SGI Power Challenge cluster and a network of SGI workstations.
Next generation data harmonization
NASA Astrophysics Data System (ADS)
Armstrong, Chandler; Brown, Ryan M.; Chaves, Jillian; Czerniejewski, Adam; Del Vecchio, Justin; Perkins, Timothy K.; Rudnicki, Ron; Tauer, Greg
2015-05-01
Analysts are presented with a never ending stream of data sources. Often, subsets of data sources to solve problems are easily identified but the process to align data sets is time consuming. However, many semantic technologies do allow for fast harmonization of data to overcome these problems. These include ontologies that serve as alignment targets, visual tools and natural language processing that generate semantic graphs in terms of the ontologies, and analytics that leverage these graphs. This research reviews a developed prototype that employs all these approaches to perform analysis across disparate data sources documenting violent, extremist events.
Aging and Visual Function of Military Pilots: A Review
1982-08-01
[OCR fragments from a scanned report; the abstract is only partially recoverable. Legible passages cite work by Ginsburg, Cannon, Sekuler, Evans, and Owsley, note support under an Office of Naval Research contract issued through the Institute of Medicine, and describe a loss with age in the temporal resolving power of the visual system, affecting temporally contiguous visual events that would otherwise be seen as separate.]
Visual Sensing for Urban Flood Monitoring
Lo, Shi-Wei; Wu, Jyh-Horng; Lin, Fang-Pang; Hsu, Ching-Han
2015-01-01
With the increasing climatic extremes, the frequency and severity of urban flood events have intensified worldwide. In this study, image-based automated monitoring of flood formation and analyses of water level fluctuation were proposed as value-added intelligent sensing applications to turn a passive monitoring camera into a visual sensor. Combined with the proposed visual sensing method, traditional hydrological monitoring cameras have the ability to sense and analyze the local situation of flood events. This can solve the current problem that image-based flood monitoring heavily relies on continuous manned monitoring. Conventional sensing networks can only offer one-dimensional physical parameters measured by gauge sensors, whereas visual sensors can acquire dynamic image information of monitored sites and provide disaster prevention agencies with actual field information for decision-making to relieve flood hazards. The visual sensing method established in this study provides spatiotemporal information that can be used for automated remote analysis for monitoring urban floods. This paper focuses on the determination of flood formation based on image-processing techniques. The experimental results suggest that the visual sensing approach may be a reliable way for determining the water fluctuation and measuring its elevation and flood intrusion with respect to real-world coordinates. The performance of the proposed method has been confirmed; it has the capability to monitor and analyze the flood status, and therefore, it can serve as an active flood warning system. PMID:26287201
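A much-simplified sketch of the waterline-detection idea (not the authors' algorithm): scan a vertical strip of grayscale pixel values from top to bottom and take the first row darker than a brightness threshold as the water surface. The strip values and threshold below are invented:

```python
def water_level_row(column, threshold=100):
    """Return the index of the first row whose mean brightness falls below
    the threshold, taken here as the water surface; None if no row qualifies."""
    for row_index, row in enumerate(column):
        if sum(row) / len(row) < threshold:
            return row_index
    return None

# Hypothetical 6-row grayscale strip: bright embankment above dark water
strip = [
    [200, 210, 205],
    [198, 202, 200],
    [190, 195, 192],
    [60, 55, 58],   # water begins here
    [50, 52, 49],
    [48, 47, 50],
]
print(water_level_row(strip))  # → 3
```

Given a known camera geometry, the detected row index could then be mapped to a real-world water elevation.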
A Tool for the Analysis of Motion Picture Film or Video Tape.
ERIC Educational Resources Information Center
Ekman, Paul; Friesen, Wallace V.
1969-01-01
A visual information display and retrieval system (VID-R) is described for application to visual records. VID-R searches and retrieves events by time address (location) or by previously stored observations or measurements. Fields are labeled by writing discriminable binary addresses on the horizontal lines outside the normal viewing area. The…
ERIC Educational Resources Information Center
Library of Congress, Washington, DC. National Library Service for the Blind and Physically Handicapped.
This resource guide lists and describes print materials, nonprint materials, and organizations dealing with sports, outdoor recreation, and games for visually and physically impaired individuals. Section I focuses on national organizations that sponsor athletic events on various levels and provide related services for children, youth, and adults…
Using "Chromosomal Socks" to Demonstrate Ploidy in Mitosis and Meiosis
ERIC Educational Resources Information Center
Chinnici, Joseph P.; Neth, Somalin Zaroh; Sherman, Leah R.
2006-01-01
Today, many biology instructors use visual models to help students understand abstract concepts like cell division. For all biology instructors, dealing with student misconceptions of cell division may seem hopeless at times--even after using visual models. Although student errors in cell division are built around the three key events of cell…
Context and Occasion Setting in "Drosophila" Visual Learning
ERIC Educational Resources Information Center
Brembs, Bjorn; Wiener, Jan
2006-01-01
In a permanently changing environment, it is by no means an easy task to distinguish potentially important events from negligible ones. Yet, to survive, every animal has to continuously face that challenge. How does the brain accomplish this feat? Building on previous work in "Drosophila melanogaster" visual learning, we have developed an…
Eye Movements Reveal the Dynamic Simulation of Speed in Language
ERIC Educational Resources Information Center
Speed, Laura J.; Vigliocco, Gabriella
2014-01-01
This study investigates how speed of motion is processed in language. In three eye-tracking experiments, participants were presented with visual scenes and spoken sentences describing fast or slow events (e.g., "The lion ambled/dashed to the balloon"). Results showed that looking time to relevant objects in the visual scene was affected…
On the Electrophysiological Evidence for the Capture of Visual Attention
ERIC Educational Resources Information Center
McDonald, John J.; Green, Jessica J.; Jannati, Ali; Di Lollo, Vincent
2013-01-01
The presence of a salient distractor interferes with visual search. According to the salience-driven selection hypothesis, this interference is because of an initial deployment of attention to the distractor. Three event-related potential (ERP) findings have been regarded as evidence for this hypothesis: (a) salient distractors were found to…
Visualization of Sedentary Behavior Using an Event-Based Approach
ERIC Educational Resources Information Center
Loudon, David; Granat, Malcolm H.
2015-01-01
Visualization is commonly used in the interpretation of physical behavior (PB) data, either in conjunction with or as precursor to formal analysis. Effective representations of the data can enable the identification of patterns of behavior, and how they relate to the temporal context in a single day, or across multiple days. An understanding of…
Sundvall, Erik; Nyström, Mikael; Forss, Mattias; Chen, Rong; Petersson, Håkan; Ahlfeldt, Hans
2007-01-01
This paper describes selected earlier approaches to graphically relating events to each other and to time; some new combinations are also suggested. These are then combined into a unified prototyping environment for visualization and navigation of electronic health records. Google Earth (GE) is used for handling display and interaction of clinical information stored using openEHR data structures and 'archetypes'. The strength of the approach comes from GE's sophisticated handling of detail levels, from coarse overviews to fine-grained details, combined with linear, polar and region-based views of clinical events related to time. The system should be easy to learn since all the visualization styles can use the same navigation. The structured and multifaceted approach to handling time that is possible with archetyped openEHR data lends itself well to visualization, and integration with openEHR components is provided in the environment.
Smid, H G; Jakob, A; Heinze, H J
1999-03-01
What cognitive processes underlie event-related brain potential (ERP) effects related to visual multidimensional selective attention and how are these processes organized? We recorded ERPs when participants attended to one conjunction of color, global shape and local shape and ignored other conjunctions of these attributes in three discriminability conditions. Attending to color and shape produced three ERP effects: frontal selection positivity (FSP), central negativity (N2b), and posterior selection negativity (SN). The results suggested that the processes underlying SN and N2b perform independent within-dimension selections, whereas the process underlying the FSP performs hierarchical between-dimension selections. At posterior electrodes, manipulation of discriminability changed the ERPs to the relevant but not to the irrelevant stimuli, suggesting that the SN does not concern the selection process itself but rather a cognitive process initiated after selection is finished. Other findings suggested that selection of multiple visual attributes occurs in parallel.
Ince, Robin A A; Jaworska, Katarzyna; Gross, Joachim; Panzeri, Stefano; van Rijsbergen, Nicola J; Rousselet, Guillaume A; Schyns, Philippe G
2016-08-22
A key to understanding visual cognition is to determine "where", "when", and "how" brain responses reflect the processing of the specific visual features that modulate categorization behavior-the "what". The N170 is the earliest Event-Related Potential (ERP) that preferentially responds to faces. Here, we demonstrate that a paradigmatic shift is necessary to interpret the N170 as the product of an information processing network that dynamically codes and transfers face features across hemispheres, rather than as a local stimulus-driven event. Reverse-correlation methods coupled with information-theoretic analyses revealed that visibility of the eyes influences face detection behavior. The N170 initially reflects coding of the behaviorally relevant eye contralateral to the sensor, followed by a causal communication of the other eye from the other hemisphere. These findings demonstrate that the deceptively simple N170 ERP hides a complex network information processing mechanism involving initial coding and subsequent cross-hemispheric transfer of visual features. © The Author 2016. Published by Oxford University Press.
ActiviTree: interactive visual exploration of sequences in event-based data using graph similarity.
Vrotsou, Katerina; Johansson, Jimmy; Cooper, Matthew
2009-01-01
The identification of significant sequences in large and complex event-based temporal data is a challenging problem with applications in many areas of today's information intensive society. Pure visual representations can be used for the analysis, but are constrained to small data sets. Algorithmic search mechanisms used for larger data sets become expensive as the data size increases and typically focus on frequency of occurrence to reduce the computational complexity, often overlooking important infrequent sequences and outliers. In this paper we introduce an interactive visual data mining approach based on an adaptation of techniques developed for web searching, combined with an intuitive visual interface, to facilitate user-centred exploration of the data and identification of sequences significant to that user. The search algorithm used in the exploration executes in negligible time, even for large data, and so no pre-processing of the selected data is required, making this a completely interactive experience for the user. Our particular application area is social science diary data but the technique is applicable across many other disciplines.
NASA Astrophysics Data System (ADS)
Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.
1989-03-01
The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields, including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and, using a neurally based computing substrate, it can complete all necessary visual tasks in real time.
Hillyard, S A; Vogel, E K; Luck, S J
1998-01-01
Both physiological and behavioral studies have suggested that stimulus-driven neural activity in the sensory pathways can be modulated in amplitude during selective attention. Recordings of event-related brain potentials indicate that such sensory gain control or amplification processes play an important role in visual-spatial attention. Combined event-related brain potential and neuroimaging experiments provide strong evidence that attentional gain control operates at an early stage of visual processing in extrastriate cortical areas. These data support early selection theories of attention and provide a basis for distinguishing between separate mechanisms of attentional suppression (of unattended inputs) and attentional facilitation (of attended inputs). PMID:9770220
Audiovisual Delay as a Novel Cue to Visual Distance.
Jaekl, Philip; Seidlitz, Jakob; Harris, Laurence R; Tadin, Duje
2015-01-01
For audiovisual sensory events, sound arrives with a delay relative to light that increases with event distance. It is unknown, however, whether humans can use these ubiquitous sound delays as an information source for distance computation. Here, we tested the hypothesis that audiovisual delays can both bias and improve human perceptual distance discrimination, such that visual stimuli paired with auditory delays are perceived as more distant and are thereby an ordinal distance cue. In two experiments, participants judged the relative distance of two repetitively displayed three-dimensional dot clusters, both presented with sounds of varying delays. In the first experiment, dot clusters presented with a sound delay were judged to be more distant than dot clusters paired with equivalent sound leads. In the second experiment, we confirmed that the presence of a sound delay was sufficient to cause stimuli to appear as more distant. Additionally, we found that ecologically congruent pairing of more distant events with a sound delay resulted in an increase in the precision of distance judgments. A control experiment determined that the sound delay duration influencing these distance judgments was not detectable, thereby eliminating decision-level influence. In sum, we present evidence that audiovisual delays can be an ordinal cue to visual distance.
NASA Astrophysics Data System (ADS)
Lloyd, S. A.; Acker, J. G.; Prados, A. I.; Leptoukh, G. G.
2008-12-01
One of the biggest obstacles for the average Earth science student today is locating and obtaining satellite-based remote sensing datasets in a format that is accessible and optimal for their data analysis needs. At the Goddard Earth Sciences Data and Information Services Center (GES-DISC) alone, on the order of hundreds of Terabytes of data are available for distribution to scientists, students and the general public. The single biggest and most time-consuming hurdle for most students when they begin their study of the various datasets is how to slog through this mountain of data to arrive at a properly subsetted and manageable dataset to answer their science question(s). The GES DISC provides a number of tools for data access and visualization, including the Google-like Mirador search engine and the powerful GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni) web interface. Giovanni provides a simple way to visualize, analyze and access vast amounts of satellite-based Earth science data. Giovanni's features and practical examples of its use will be demonstrated, with an emphasis on how satellite remote sensing can help students understand recent events in the atmosphere and biosphere. Giovanni is actually a series of sixteen similar web-based data interfaces, each of which covers a single satellite dataset (such as TRMM, TOMS, OMI, AIRS, MLS, HALOE, etc.) or a group of related datasets (such as MODIS and MISR for aerosols, SeaWIFS and MODIS for ocean color, and the suite of A-Train observations co-located along the CloudSat orbital path). Recently, ground-based datasets have been included in Giovanni, including the Northern Eurasian Earth Science Partnership Initiative (NEESPI), and EPA fine particulate matter (PM2.5) for air quality. Model data such as the Goddard GOCART model and MERRA meteorological reanalyses (in process) are being increasingly incorporated into Giovanni to facilitate model-data intercomparison.
A full suite of data analysis and visualization tools is also available within Giovanni. The GES DISC is currently developing a systematic series of training modules for Earth science satellite data, associated with our development of additional datasets and data visualization tools for Giovanni. Training sessions will include an overview of the Earth science datasets archived at Goddard, an overview of terms and techniques associated with satellite remote sensing, dataset-specific issues, an overview of Giovanni functionality, and a series of examples of how data can be readily accessed and visualized.
Visual Culture and Electronic Government: Exploring a New Generation of E-Government
NASA Astrophysics Data System (ADS)
Bekkers, Victor; Moody, Rebecca
E-government is becoming more picture-oriented. What meaning do stakeholders attach to visual events and visualization? Comparative case study research shows that the functional meaning primarily refers to registration, integration, transparency, and communication. The political meaning refers to new ways of framing in order to secure specific interests and claims. The institutional meaning is ambiguous: either it improves the position of citizens, or it reinforces the existing bias presented by governments. Hence, we expect that the emergence of a visualized public space, through the omnipresent penetration of (mobile) multimedia technologies, will influence government-citizen interactions.
Helioviewer.org: An Open-source Tool for Visualizing Solar Data
NASA Astrophysics Data System (ADS)
Hughitt, V. Keith; Ireland, J.; Schmiedel, P.; Dimitoglou, G.; Mueller, D.; Fleck, B.
2009-05-01
As the amount of solar data available to scientists continues to increase at faster and faster rates, it is important that there exist simple tools for navigating this data quickly with a minimal amount of effort. By combining heterogeneous solar physics datatypes such as full-disk images and coronagraphs, along with feature and event information, Helioviewer offers a simple and intuitive way to browse multiple datasets simultaneously. Images are stored in a repository using the JPEG 2000 format and tiled dynamically upon a client's request. By tiling images and serving only the portions of the image requested, it is possible for the client to work with very large images without having to fetch all of the data at once. Currently, Helioviewer enables users to browse the entire SOHO data archive, updated hourly, as well as feature/event data from eight different catalogs, including active region, flare, coronal mass ejection, and type II radio burst catalogs. In addition to a focus on intercommunication with other virtual observatories and browsers (VSO, HEK, etc.), Helioviewer will offer a number of externally-available application programming interfaces (APIs) to enable easy third-party use, adoption, and extension. Future functionality will include: support for additional data sources including TRACE, SDO and STEREO, dynamic movie generation, a navigable timeline of recorded solar events, social annotation, and basic client-side image processing.
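The tiling strategy described above (serve only the portions of a very large JPEG 2000 image that the viewport covers) reduces, on the client side, to simple tile-grid arithmetic. A minimal sketch with a hypothetical helper, not Helioviewer's actual API:

```python
# Sketch of the tiled-image access pattern: compute which fixed-size tiles
# intersect the current pixel viewport, so only those are requested from
# the server. Tile size and coordinates are illustrative assumptions.

def tiles_for_viewport(x0, y0, width, height, tile_size=512):
    """Return (col, row) indices of all tiles intersecting the viewport."""
    first_col = x0 // tile_size
    first_row = y0 // tile_size
    last_col = (x0 + width - 1) // tile_size
    last_row = (y0 + height - 1) // tile_size
    return [(c, r)
            for r in range(first_row, last_row + 1)
            for c in range(first_col, last_col + 1)]

# A 1024x1024 viewport at the image origin needs only four 512-pixel tiles,
# regardless of how large the full-disk image is.
needed = tiles_for_viewport(0, 0, 1024, 1024)
```

The server-side counterpart decodes just the requested region from the JPEG 2000 codestream, which supports exactly this kind of partial access.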
Association of vascular fluoride uptake with vascular calcification and coronary artery disease.
Li, Yuxin; Berenji, Gholam R; Shaba, Wisam F; Tafti, Bashir; Yevdayev, Ella; Dadparvar, Simin
2012-01-01
The feasibility of a fluoride positron emission tomography/computed tomography (PET/CT) scan for imaging atherosclerosis has not been well documented. The purpose of this study was to assess fluoride uptake of vascular calcification in various major arteries, including coronary arteries. We retrospectively reviewed the imaging data and cardiovascular history of 61 patients who received whole-body sodium [¹⁸F]fluoride PET/CT studies at our institution from 2009 to 2010. Fluoride uptake and calcification in major arteries, including coronary arteries, were analyzed by both visual assessment and standardized uptake value measurement. Fluoride uptake in vascular walls was demonstrated in 361 sites of 54 (96%) patients, whereas calcification was observed in 317 sites of 49 (88%) patients. Significant correlation between fluoride uptake and calcification was observed in most of the arterial walls, except in those of the abdominal aorta. Fluoride uptake in coronary arteries was demonstrated in 28 (46%) patients and coronary calcifications were observed in 34 (56%) patients. There was significant correlation between history of cardiovascular events and presence of fluoride uptake in coronary arteries. The coronary fluoride uptake value in patients with cardiovascular events was significantly higher than in patients without cardiovascular events. Sodium [¹⁸F]fluoride PET/CT might be useful in the evaluation of the atherosclerotic process in major arteries, including coronary arteries. An increased fluoride uptake in coronary arteries may be associated with an increased cardiovascular risk.
Data Discovery and Access via the Heliophysics Events Knowledgebase (HEK)
NASA Astrophysics Data System (ADS)
Somani, A.; Hurlburt, N. E.; Schrijver, C. J.; Cheung, M.; Freeland, S.; Slater, G. L.; Seguin, R.; Timmons, R.; Green, S.; Chang, L.; Kobashi, A.; Jaffey, A.
2011-12-01
The HEK is an integrated system that helps direct scientists to solar events and data from a variety of providers. The system is fully operational, and adoption of HEK has been growing since the launch of NASA's SDO mission. In this presentation we describe the different components that comprise HEK. The Heliophysics Events Registry (HER) and Heliophysics Coverage Registry (HCR) form the two major databases behind the system. The HCR allows the user to search on coverage event metadata, and the HER on annotated event metadata, for a variety of instruments. Both the HCR and HER are accessible via a web API which can return search results in machine-readable formats (e.g., XML and JSON). A variety of SolarSoft services are also provided to allow users to search the HEK as well as obtain and manipulate data. Other components include: the Event Detection System (EDS), which continually runs feature-finding algorithms on SDO data to populate the HER with relevant events; a web form for users to request SDO data cutouts for multiple AIA channels as well as HMI line-of-sight magnetograms; iSolSearch, which allows a user to browse events in the HER and search for specific events over a specific time interval, all within a graphical web page; Panorama, the software tool used for rapid visualization of large volumes of solar image data in multiple channels/wavelengths, from which the user can also easily create WYSIWYG movies and launch the Annotator tool to describe events and features; and EVACS, a JOGL-powered client for the HER and HCR that displays the searched-for events on a full-disk magnetogram of the Sun while showing more detailed information for each event.
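A machine-readable query against an event registry like the HER is essentially a parameterized HTTP GET returning JSON or XML. The endpoint and parameter names below are illustrative assumptions for the sketch, not the documented HEK interface:

```python
# Sketch: compose a search URL for events of one type over a time interval,
# as a client of a HER-like web API might. Only the URL is built here; the
# parameter names (cmd, event_type, ...) are assumptions for illustration.
from urllib.parse import urlencode

def build_event_query(base_url, event_type, t_start, t_end, fmt="json"):
    """Compose a search-events URL for one event type in a time window."""
    params = {
        "cmd": "search-events",
        "event_type": event_type,      # e.g. "FL" for flares
        "event_starttime": t_start,
        "event_endtime": t_end,
        "result_limit": 200,
        "return_type": fmt,            # machine-readable: json or xml
    }
    return base_url + "?" + urlencode(params)

url = build_event_query("https://example.org/her", "FL",
                        "2011-02-15T00:00:00", "2011-02-16T00:00:00")
```

A client would then fetch this URL and decode the JSON response into event records; returning JSON/XML rather than HTML is what lets tools like SolarSoft consume the registries programmatically.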
Art in Science Promoting Interest in Research and Exploration (ASPIRE)
NASA Astrophysics Data System (ADS)
Fillingim, M.; Zevin, D.; Thrall, L.; Croft, S.; Raftery, C.; Shackelford, R.
2015-11-01
Led by U.C. Berkeley's Center for Science Education at the Space Sciences Laboratory in partnership with U.C. Berkeley Astronomy, the Lawrence Hall of Science, and the YMCA of the Central Bay Area, Art in Science Promoting Interest in Research and Exploration (ASPIRE) is a NASA EPOESS-funded program mainly for high school students that explores NASA science through art and highlights the need for and uses of art and visualizations in science. ASPIRE's aim is to motivate more diverse young people (especially African Americans) to learn about Science, Technology, Engineering, and Mathematics (STEM) topics and careers, via 1) intensive summer workshops; 2) drop-in after-school workshops; 3) astronomy visualization-focused outreach programming at public venues, including a series of free star parties where the students help run the events; and 4) a website and a number of social networking strategies that highlight our youth's artwork.
Stasulli, Nikolas M; Shank, Elizabeth A
2016-11-01
The ability of microbes to secrete bioactive chemical signals into their environment has been known for over a century. However, it is only in the last decade that imaging mass spectrometry has provided us with the ability to directly visualize the spatial distributions of these microbial metabolites. This technology involves collecting mass spectra from multiple discrete locations across a biological sample, yielding chemical ‘maps’ that simultaneously reveal the distributions of hundreds of metabolites in two dimensions. Advances in microbial imaging mass spectrometry summarized here have included the identification of novel strain- or coculture-specific compounds, the visualization of biotransformation events (where one metabolite is converted into another by a neighboring microbe), and the implementation of a method to reconstruct the 3D subsurface distributions of metabolites, among others. Here we review the recent literature and discuss how imaging mass spectrometry has spurred novel insights regarding the chemical consequences of microbial interactions.
Multiscale Simulation of Blood Flow in Brain Arteries with an Aneurysm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leopold Grinberg; Vitali Morozov; Dmitry A. Fedosov
2013-04-24
Multi-scale modeling of arterial blood flow can shed light on the interaction between events happening at micro- and meso-scales (i.e., adhesion of red blood cells to the arterial wall, clot formation) and at macro-scales (i.e., change in flow patterns due to the clot). Coupled numerical simulations of such multi-scale flow require state-of-the-art computers and algorithms, along with techniques for multi-scale visualizations. This animation presents results of studies used in the development of a multi-scale visualization methodology. First we use streamlines to show the path the flow is taking as it moves through the system, including the aneurysm. Next we investigate the process of thrombus (blood clot) formation, which may be responsible for the rupture of aneurysms, by concentrating on the platelet blood cells, observing as they aggregate on the wall of the aneurysm.
Extensive video-game experience alters cortical networks for complex visuomotor transformations.
Granek, Joshua A; Gorbet, Diana J; Sergio, Lauren E
2010-10-01
Using event-related functional magnetic resonance imaging (fMRI), we examined the effect of video-game experience on the neural control of increasingly complex visuomotor tasks. Previously, skilled individuals have demonstrated the use of a more efficient movement control brain network, including the prefrontal, premotor, primary sensorimotor and parietal cortices. Our results extend and generalize this finding by documenting additional prefrontal cortex activity in experienced video gamers planning for complex eye-hand coordination tasks that are distinct from actual video-game play. These changes in activation between non-gamers and extensive gamers are putatively related to the increased online control and spatial attention required for complex visually guided reaching. These data suggest that the basic cortical network for processing complex visually guided reaching is altered by extensive video-game play. Crown Copyright © 2009. Published by Elsevier Srl. All rights reserved.
Hu, Peter F; Xiao, Yan; Ho, Danny; Mackenzie, Colin F; Hu, Hao; Voigt, Roger; Martz, Douglas
2006-06-01
One of the major challenges for day-of-surgery operating room coordination is accurate and timely situation awareness. Distributed and secure real-time status information is key to addressing these challenges. This article reports on the design and implementation of a passive status monitoring system in a 19-room surgical suite of a major academic medical center. Key design requirements included integrated real-time operating room status display, access control, security, and network impact. The system used live operating room video images and patient vital signs obtained through monitors to automatically update events and operating room status. Images were presented on a "need-to-know" basis, and access was controlled by identification badge authorization. The system delivered reliable real-time operating room images and status with acceptable network impact. Operating room status was visualized at 4 separate locations and was used continuously by clinicians and operating room service providers to coordinate operating room activities.
Looking inward and back: Real-time monitoring of visual working memories.
Suchow, Jordan W; Fougnie, Daryl; Alvarez, George A
2017-04-01
Confidence in our memories is influenced by many factors, including beliefs about the perceptibility or memorability of certain kinds of objects and events, as well as knowledge about our skill sets, habits, and experiences. Notoriously, our knowledge and beliefs about memory can lead us astray, causing us to be overly confident in eyewitness testimony or to overestimate the frequency of recent experiences. Here, using visual working memory as a case study, we stripped away all these potentially misleading cues, requiring observers to make confidence judgments by directly assessing the quality of their memory representations. We show that individuals can monitor the status of information in working memory as it degrades over time. Our findings suggest that people have access to information reflecting the existence and quality of their working memories, and furthermore, that they can use this information to guide their behavior. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klein, Steven Karl; Day, Christy M.; Determan, John C.
LANL has developed a process to generate a progressive family of system models for a fissile solution system. This family includes a dynamic system simulation composed of coupled nonlinear differential equations describing the time evolution of the system. Neutron kinetics, radiolytic gas generation and transport, and core thermal hydraulics are included in the DSS. Extensions to explicit operation of cooling loops and radiolytic gas handling are embedded in these systems, as is a stability model. The DSS may then be converted to an implementation in Visual Studio to provide a design team the ability to rapidly estimate system performance impacts from a variety of design decisions. This provides a method to assist in optimization of the system design. Once the design has been generated in some detail, the C++ version of the system model may then be implemented in a LabVIEW user interface to evaluate operator controls and instrumentation and operator recognition and response to off-normal events. Taken as a set of system models, the DSS, Visual Studio, and LabVIEW progression provides a comprehensive set of design support tools.
Computer system evolution requirements for autonomous checkout of exploration vehicles
NASA Technical Reports Server (NTRS)
Davis, Tom; Sklar, Mike
1991-01-01
This study, now in its third year, has had the overall objective and challenge of determining the needed hooks and scars in the initial Space Station Freedom (SSF) system to assure that on-orbit assembly and refurbishment of lunar and Mars spacecraft can be accomplished with the maximum use of automation. In this study automation is all-encompassing and includes physical tasks such as parts mating, tool operation, and human visual inspection, as well as non-physical tasks such as monitoring and diagnosis, planning and scheduling, and autonomous visual inspection. Potential tasks for automation include both extravehicular activity (EVA) and intravehicular activity (IVA) events. A number of specific techniques and tools have been developed to determine the ideal tasks to be automated, and the resulting timelines, changes in labor requirements, and resources required. The Mars/Phobos exploratory mission developed in FY89, and the Lunar Assembly/Refurbishment mission developed in FY90 and depicted in the 90 Day Study as Option 5, have been analyzed in detail in recent years. The complete methodology and results are presented in the FY89 and FY90 final reports.
NASA Astrophysics Data System (ADS)
Rimland, Jeffrey; Ballora, Mark; Shumaker, Wade
2013-05-01
As the sheer volume of data grows exponentially, it becomes increasingly difficult for existing visualization techniques to keep pace. The sonification field attempts to address this issue by enlisting our auditory senses to detect anomalies or complex events that are difficult to detect via visualization alone. Storification attempts to improve analyst understanding by converting data streams into organized narratives describing the data at a higher level of abstraction than the input streams from which they are derived. While these techniques hold a great deal of promise, they also each have a unique set of challenges that must be overcome. Sonification techniques must represent a broad variety of distributed heterogeneous data and present it to the analyst/listener in a manner that doesn't require extended listening, since visual "snapshots" are useful but auditory sounds exist only over time. Storification still faces many human-computer interface (HCI) challenges as well as technical hurdles related to automatically generating a logical narrative from lower-level data streams. This paper proposes a novel approach that utilizes a service oriented architecture (SOA)-based hybrid visualization/sonification/storification framework to enable distributed human-in-the-loop processing of data in a manner that makes optimized usage of both visual and auditory processing pathways while also leveraging the value of narrative explication of data streams. It addresses the benefits and shortcomings of each processing modality and discusses information infrastructure and data representation concerns required with their utilization in a distributed environment. We present a generalizable approach with a broad range of applications including cyber security, medical informatics, facilitation of energy savings in "smart" buildings, and detection of natural and man-made disasters.
Functional vision loss: a diagnosis of exclusion.
Villegas, Rex B; Ilsen, Pauline F
2007-10-01
Most cases of visual acuity or visual field loss can be attributed to ocular pathology or ocular manifestations of systemic pathology. They can also occasionally be attributed to nonpathologic processes or malingering. Functional vision loss is any decrease in vision the origin of which cannot be attributed to a pathologic or structural abnormality. Two cases of functional vision loss are described. In the first, a 58-year-old man presented for a baseline eye examination for enrollment in a vision rehabilitation program. He reported bilateral blindness since a motor vehicle accident with head trauma 4 years prior. Entering visual acuity was "no light perception" in each eye. Ocular health examination was normal and the patient made frequent eye contact with the examiners. He was referred for neuroimaging and electrophysiologic testing. The second case was a 49-year-old man who presented with a long history of intermittent monocular diplopia. His medical history was significant for psycho-medical evaluations and a diagnosis of factitious disorder. Entering uncorrected visual acuities were 20/20 in each eye, but visual field testing found constriction. No abnormalities were found that could account for the monocular diplopia or visual field deficit. A diagnosis of functional vision loss secondary to factitious disorder was made. Functional vision loss is a diagnosis of exclusion. In the event of reduced vision in the context of a normal ocular health examination, all other pathology must be ruled out before making the diagnosis of functional vision loss. Evaluation must include auxiliary ophthalmologic testing, neuroimaging of the visual pathway, review of the medical history and lifestyle, and psychiatric evaluation. Comanagement with a psychiatrist is essential for patients with functional vision loss.
Wired Widgets: Agile Visualization for Space Situational Awareness
NASA Astrophysics Data System (ADS)
Gerschefske, K.; Witmer, J.
2012-09-01
Continued advancements in sensors and analysis techniques have resulted in a wealth of Space Situational Awareness (SSA) data, made available via tools and Service Oriented Architectures (SOA) such as those in the Joint Space Operations Center Mission Systems (JMS) environment. Current visualization software cannot quickly adapt to rapidly changing missions and data, preventing operators and analysts from performing their jobs effectively. The value of this wealth of SSA data is not fully realized, as the operators' existing software is not built with the flexibility to consume new or changing sources of data or to rapidly customize their visualization as the mission evolves. While tools like the JMS user-defined operational picture (UDOP) have begun to fill this gap, this paper presents a further evolution, leveraging Web 2.0 technologies for maximum agility. We demonstrate a flexible Web widget framework with inter-widget data sharing, publish-subscribe eventing, and an API providing the basis for consumption of new data sources and adaptable visualization. Wired Widgets offers cross-portal widgets along with a widget communication framework and development toolkit for rapid new widget development, giving operators the ability to answer relevant questions as the mission evolves. Wired Widgets has been applied in a number of dynamic mission domains including disaster response, combat operations, and noncombatant evacuation scenarios. The variety of applications demonstrates that Wired Widgets provides a flexible, data-driven solution for visualization in changing environments. In this paper, we show how, deployed in the Ozone Widget Framework portal environment, Wired Widgets can provide an agile, web-based visualization to support the SSA mission. Furthermore, we discuss how the tenets of agile visualization can generally be applied to the SSA problem space to provide operators flexibility, potentially informing future acquisition and system development.
Gao, Jingru; Davis, Gary A
2017-12-01
The rear-end crash is one of the most common freeway crash types, and driver distraction is often cited as a leading cause of rear-end crashes. Previous research indicates that driver distraction can have negative effects on driving performance, but the specific association between driver distraction and crash risk is still not fully revealed. This study sought to understand the mechanism by which driver distraction, defined as secondary-task distraction, could influence crash risk, as indicated by a driver's reaction time, in freeway car-following situations. A statistical analysis exploring the causal model structure of drivers' distraction impacts on reaction times was conducted. Distraction duration, distraction scenario, and secondary task type were chosen as distraction-related factors. In addition, exogenous factors (weather, visual obstruction, lighting condition, traffic density, and intersection presence) and endogenous factors (driver age and gender) were considered. There was an association between driver distraction and reaction time in the sample freeway rear-end events from the SHRP 2 NDS database. Distraction duration, the distracted status when a leader braked, and secondary task type were related to reaction time, while all other factors showed no significant effect on reaction time. The analysis showed that driver distraction duration is the primary direct cause of the increase in reaction time, with other factors having indirect effects mediated by distraction duration. Longer distraction duration, a distracted status when a leader braked, and engaging in an auditory-visual-manual secondary task tended to result in longer reaction times. Given that drivers will be distracted occasionally, countermeasures that shorten distraction duration or avoid distraction while a lead vehicle brakes are worth considering.
This study helps better understand the mechanism of freeway rear-end events in car-following situations, and provides a methodology that can be adopted to study the association between driver behavior and driving features. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
A true blind for subjects who receive spinal manipulation therapy.
Kawchuk, Gregory N; Haugen, Rick; Fritz, Julie
2009-02-01
To determine if short-duration anesthesia (propofol and remifentanil) can blind subjects to the provision or withholding of spinal manipulative therapy (SMT). Placebo control. Day-procedure ward, University of Alberta Hospital. Human subjects with uncomplicated low back pain (LBP) (n=6). In each subject, propofol and remifentanil were administered intravenously. Once unconsciousness was achieved (3-5 min), subjects were placed in a lateral recumbent position and then randomized to either a control group (n=3) or an experimental group (n=3), in which subjects received a single SMT to the lumbar spine. Subjects were given a standardized auditory and visual cue and then allowed to recover from anesthesia in a supine position (3-5 min). Before anesthesia and 30 minutes after recovery, a blinded evaluator asked each subject to quantify their LBP by using an 11-point scale. This same evaluator then assessed the ability of each subject to recall specific memories while under presumed anesthesia, including events related to treatment and specific auditory and visual cues. Subjects in both the experimental and control groups could not recall any events that occurred while under anesthesia. Some SMT subjects reported pain reduction greater than the minimally important clinical difference and greater than that of control subjects. No adverse events were reported. Short-duration, low-risk general anesthesia can create effective blinding of subjects to the provision or withholding of SMT. An anesthetic blind for SMT subjects solves many, if not all, problems associated with prior SMT blinding strategies. Although further studies are needed to refine this technique, the potential now exists to conduct the first placebo-controlled randomized controlled trial to assess SMT efficacy.
NASA Astrophysics Data System (ADS)
Li, Jing; Wu, Huayi; Yang, Chaowei; Wong, David W.; Xie, Jibo
2011-09-01
Geoscientists build dynamic models to simulate various natural phenomena for a better understanding of our planet. Interactive visualizations of these geoscience models and their outputs through virtual globes on the Internet can help the public understand the dynamic phenomena related to the Earth more intuitively. However, challenges arise when the volume of four-dimensional data (4D), 3D in space plus time, is huge for rendering. Datasets loaded from geographically distributed data servers require synchronization between ingesting and rendering data. Also the visualization capability of display clients varies significantly in such an online visualization environment; some may not have high-end graphic cards. To enhance the efficiency of visualizing dynamic volumetric data in virtual globes, this paper proposes a systematic framework, in which an octree-based multiresolution data structure is implemented to organize time series 3D geospatial data to be used in virtual globe environments. This framework includes a view-dependent continuous level of detail (LOD) strategy formulated as a synchronized part of the virtual globe rendering process. Through the octree-based data retrieval process, the LOD strategy enables the rendering of the 4D simulation at a consistent and acceptable frame rate. To demonstrate the capabilities of this framework, data of a simulated dust storm event are rendered in World Wind, an open source virtual globe. The rendering performances with and without the octree-based LOD strategy are compared. The experimental results show that using the proposed data structure and processing strategy significantly enhances the visualization performance when rendering dynamic geospatial phenomena in virtual globes.
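The view-dependent LOD strategy described above can be pictured as a recursive octree walk that refines a volume brick only while its projected screen-space error exceeds a tolerance. The sketch below is an illustrative reconstruction, not the paper's code: `OctreeNode`, `screen_space_error`, and the error metric are all assumptions.

```python
import math

# Hypothetical sketch of view-dependent LOD selection over an octree of
# volume bricks; class names and the error metric are illustrative.

class OctreeNode:
    def __init__(self, center, size, depth, children=None):
        self.center = center          # (x, y, z) world coordinates of the brick
        self.size = size              # edge length of the brick
        self.depth = depth            # octree level (0 = coarsest)
        self.children = children or []

def screen_space_error(node, eye, fov_scale=1000.0):
    """Approximate projected error: brick size divided by distance to the eye."""
    dist = math.dist(node.center, eye)
    return fov_scale * node.size / max(dist, 1e-6)

def select_lod(node, eye, tolerance):
    """Collect the coarsest set of bricks whose projected error is acceptable."""
    if not node.children or screen_space_error(node, eye) <= tolerance:
        return [node]          # coarse brick suffices at this viewing distance
    bricks = []
    for child in node.children:
        bricks.extend(select_lod(child, eye, tolerance))
    return bricks
```

Distant regions are thus rendered from a few coarse bricks while regions near the viewpoint descend to finer levels, which is what keeps the frame rate consistent as the camera moves.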
Lu, Sara A; Wickens, Christopher D; Prinet, Julie C; Hutchins, Shaun D; Sarter, Nadine; Sebok, Angelia
2013-08-01
The aim of this study was to integrate empirical data showing the effects of interrupting task modality on the performance of an ongoing visual-manual task and the interrupting task itself. The goal is to support interruption management and the design of multimodal interfaces. Multimodal interfaces have been proposed as a promising means to support interruption management. To ensure the effectiveness of this approach, their design needs to be based on an analysis of empirical data concerning the effectiveness of individual and redundant channels of information presentation. Three meta-analyses were conducted to contrast performance on an ongoing visual task and interrupting tasks as a function of interrupting task modality (auditory vs. tactile, auditory vs. visual, and single modality vs. redundant auditory-visual). In total, 68 studies were included and six moderator variables were considered. The main findings from the meta-analyses are that response times are faster for tactile interrupting tasks in case of low-urgency messages. Accuracy is higher with tactile interrupting tasks for low-complexity signals but higher with auditory interrupting tasks for high-complexity signals. Redundant auditory-visual combinations are preferable for communication tasks during high workload and with a small visual angle of separation. The three meta-analyses contribute to the knowledge base in multimodal information processing and design. They highlight the importance of moderator variables in predicting the effects of interruption task modality on ongoing and interrupting task performance. The findings from this research will help inform the design of multimodal interfaces in data-rich, event-driven domains.
Sadun, Alfredo A; Chicani, Carlos Filipe; Ross-Cisneros, Fred N; Barboni, Piero; Thoolen, Martin; Shrader, William D; Kubis, Kenneth; Carelli, Valerio; Miller, Guy
2012-03-01
To evaluate the safety and efficacy of a new therapeutic agent, EPI-743, in Leber hereditary optic neuropathy (LHON) using standard clinical, anatomic, and functional visual outcome measures. Open-label clinical trial. University medical center. Patients Five patients with genetically confirmed LHON with acute loss of vision were consecutively enrolled and treated with the experimental therapeutic agent EPI-743 within 90 days of conversion. Intervention During the course of the study, 5 consecutive patients received EPI-743, by mouth, 3 times daily (100-400 mg per dose). Treatment effect was assessed by serial measurements of anatomic and functional visual indices over 6 to 18 months, including Snellen visual acuity, retinal nerve fiber layer thickness measured by optical coherence tomography, Humphrey visual fields (mean decibels and area with 1-log unit depression), and color vision. Treatment effect in this clinical proof of principle study was assessed by comparison of the prospective open-label treatment group with historical controls. Of 5 subjects treated with EPI-743, 4 demonstrated arrest of disease progression and reversal of visual loss. Two patients exhibited a total recovery of visual acuity. No drug-related adverse events were recorded. In a small open-label trial, EPI-743 arrested disease progression and reversed vision loss in all but 1 of the 5 consecutively treated patients with LHON. Given the known natural history of acute and rapid progression of LHON resulting in chronic and persistent bilateral blindness, these data suggest that the previously described irreversible priming to retinal ganglion cell loss may be reversed.
AF-GEOSpace Version 2.0: Space Environment Software Products for 2002
NASA Astrophysics Data System (ADS)
Hilmer, R. V.; Ginet, G. P.; Hall, T.; Holeman, E.; Tautz, M.
2002-05-01
AF-GEOSpace Version 2.0 (release 2002 on WindowsNT/2000/XP) is a graphics-intensive software program developed by AFRL with space environment models and applications. It has grown steadily to become a development tool for automated space weather visualization products and helps with a variety of tasks: orbit specification for radiation hazard avoidance; satellite design assessment and post-event analysis; solar disturbance effects forecasting; frequency and antenna management for radar and HF communications; determination of link outage regions for active ionospheric conditions; and physics research and education. The object-oriented C++ code is divided into five module classes. Science Modules control science models to give output data on user-specified grids. Application Modules manipulate these data and provide orbit generation and magnetic field line tracing capabilities. Data Modules read and assist with the analysis of user-generated data sets. Graphics Modules enable the display of features such as plane slices, magnetic field lines, line plots, axes, the Earth, stars, and satellites. Worksheet Modules provide commonly requested coordinate transformations and calendar conversion tools. Common input data archive sets, application modules, and 1-, 2-, and 3-D visualization tools are provided to all models. The code documentation includes detailed examples with click-by-click instructions for investigating phenomena that have well known effects on communications and spacecraft systems. AF-GEOSpace Version 2.0 builds on the success of its predecessors. The first release (Version 1.21, 1996/IRIX on SGI) contained radiation belt particle flux and dose models derived from CRRES satellite data, an aurora model, an ionosphere model, and ionospheric HF ray tracing capabilities. 
In the next release (Version 1.4, 1999/IRIX on SGI), science modules were added related to cosmic rays and solar protons, low-Earth orbit radiation dosages, single event effects probability maps, ionospheric scintillation, and shock propagation models. New application modules for estimating linear energy transfer (LET) and single event upset (SEU) rates in solid-state devices, and graphics modules for visualizing radar fans, communication domes, and satellite detector cones and links were added. Automated FTP scripts permitted users to update their global input parameter set directly from NOAA/SEC. What's New? Version 2.0 includes the first true dynamic run capabilities and offers new and enhanced graphical and data visualization tools such as 3-D volume rendering and eclipse umbra and penumbra determination. Animations of all model results can now be displayed together in all dimensions. There is a new realistic day-to-day ionospheric scintillation simulation generator (IONSCINT), an upgrade to the WBMOD scintillation code, a simplified HF ionospheric ray tracing module, and applications built on the NASA AE-8 and AP-8 radiation belt models. User-generated satellite data sets can now be visualized along with their orbital ephemeris. A prototype tool for visualizing MHD model results stored in structured grids provides a hint of where future space weather model development efforts are headed. A new graphical user interface (GUI) with improved module tracking and renaming features greatly simplifies software operation. AF-GEOSpace is distributed by the Space Weather Center of Excellence in the Space Vehicles Directorate of AFRL. Recently released for WindowsNT/2000/XP, the software will shortly be followed by versions for the UNIX and LINUX operating systems. To obtain AF-GEOSpace Version 2.0, please send an e-mail request to the first author.
Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study
Ursino, Mauro; Crisafulli, Andrea; di Pellegrino, Giuseppe; Magosso, Elisa; Cuppini, Cristiano
2017-01-01
The brain integrates information from different sensory modalities to generate a coherent and accurate percept of external events. Several experimental studies suggest that this integration follows the principle of Bayesian estimation. However, the neural mechanisms responsible for this behavior, and its development in a multisensory environment, are still insufficiently understood. We recently presented a neural network model of audio-visual integration (Neural Computation, 2017) to investigate how a Bayesian estimator can spontaneously develop from the statistics of external stimuli. The model assumes the presence of two topologically organized unimodal areas (auditory and visual). Neurons in each area receive an input from the external environment, computed as the inner product of the sensory-specific stimulus and the receptive field synapses, and a cross-modal input from neurons of the other modality. Based on sensory experience, synapses were trained via Hebbian potentiation and a decay term. The aim of this work is to improve the previous model by including a more realistic distribution of visual stimuli: visual stimuli have a higher spatial accuracy at the central azimuthal coordinate and a lower accuracy at the periphery. Moreover, their prior probability is higher at the center and decreases toward the periphery. Simulations show that, after training, the receptive fields of visual and auditory neurons shrink to reproduce the accuracy of the input (both at the center and at the periphery in the visual case), thus realizing the likelihood estimate of unimodal spatial position. Moreover, the preferred positions of visual neurons contract toward the center, thus encoding the prior probability of the visual input. Finally, a prior probability of the co-occurrence of audio-visual stimuli is encoded in the cross-modal synapses. The model is able to simulate the main properties of a Bayesian estimator and to reproduce behavioral data in all conditions examined. 
In particular, in unisensory conditions the visual estimates exhibit a bias toward the fovea, which increases with the level of noise. In cross modal conditions, the SD of the estimates decreases when using congruent audio-visual stimuli, and a ventriloquism effect becomes evident in case of spatially disparate stimuli. Moreover, the ventriloquism decreases with the eccentricity. PMID:29046631
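The Bayesian estimate the trained network is said to approximate can be illustrated with the textbook precision-weighted fusion of two Gaussian cues; the positions and variances below are invented for demonstration and are not the model's parameters. With vision the more reliable spatial cue, the fused estimate is pulled toward the visual position (a ventriloquism-like bias) and its variance falls below either unimodal variance.

```python
# Illustrative sketch of optimal (maximum-likelihood) fusion of Gaussian
# auditory and visual position cues; all numbers are made up.

def fuse(mu_v, var_v, mu_a, var_a):
    """Precision-weighted fusion of two Gaussian cues."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)   # weight on the visual cue
    mu = w_v * mu_v + (1 - w_v) * mu_a            # fused position estimate
    var = 1 / (1 / var_v + 1 / var_a)             # fused variance (always smaller)
    return mu, var

# Spatially disparate cues: vision at 0 deg (var 1), audition at 10 deg (var 4).
mu, var = fuse(mu_v=0.0, var_v=1.0, mu_a=10.0, var_a=4.0)
# The fused estimate sits near the visual cue, with reduced variance.
```

This mirrors the behavioral findings in the abstract: congruent audio-visual stimuli reduce the SD of the estimates, and disparate stimuli produce a ventriloquism shift toward the more precise (visual) cue.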
Amsel, Ben D
2011-04-01
Empirically derived semantic feature norms categorized into different types of knowledge (e.g., visual, functional, auditory) can be summed to create number-of-feature counts per knowledge type. Initial evidence suggests several such knowledge types may be recruited during language comprehension. The present study provides a more detailed understanding of the timecourse and intensity of influence of several such knowledge types on real-time neural activity. A linear mixed-effects model was applied to single trial event-related potentials for 207 visually presented concrete words measured on total number of features (semantic richness), imageability, and number of visual motion, color, visual form, smell, taste, sound, and function features. Significant influences of multiple feature types occurred before 200ms, suggesting parallel neural computation of word form and conceptual knowledge during language comprehension. Function and visual motion features most prominently influenced neural activity, underscoring the importance of action-related knowledge in computing word meaning. The dynamic time courses and topographies of these effects are most consistent with a flexible conceptual system wherein temporally dynamic recruitment of representations in modal and supramodal cortex are a crucial element of the constellation of processes constituting word meaning computation in the brain. Copyright © 2011 Elsevier Ltd. All rights reserved.
Kocatürk, Tolga; Bekmez, Sinan; Katrancı, Merve; Çakmak, Harun; Dayanır, Volkan
2015-01-01
To evaluate visual field progression with trend and event analysis in open angle glaucoma patients under treatment. Fifteen-year follow-up results of 408 eyes of 217 glaucoma patients who were followed at Adnan Menderes University, Department of Ophthalmology between 1998 and 2013 were analyzed retrospectively. Visual field data were collected for Mean Deviation (MD), Visual Field Index (VFI), and event occurrence. There were 146 primary open-angle glaucoma (POAG), 123 pseudoexfoliative glaucoma (XFG) and 139 normal tension glaucoma (NTG) eyes. MD showed significant change in all diagnostic groups (p<0.001). The difference in VFI between the first and last examinations was significant in POAG (p<0.001) and XFG (p<0.003) but not in NTG. VFI progression rates were -0.3, -0.43, and -0.2% loss/year in treated POAG, XFG, and NTG, respectively. The number of empty triangles was statistically different between the POAG-NTG (p=0.001) and XFG-NTG (p=0.002) groups. The numbers of half-filled (p=0.002) and full-filled (p=0.010) triangles were significantly different between the XFG-NTG groups. Functional long-term follow-up of glaucoma patients can be monitored with visual field indices. We herein report our fifteen-year follow-up results in open angle glaucoma.
Consciousness of the first order in blindsight
Sahraie, Arash; Hibbard, Paul B.; Trevethan, Ceri T.; Ritchie, Kay L.; Weiskrantz, Lawrence
2010-01-01
At suprathreshold levels, detection and awareness of visual stimuli are typically synonymous in nonclinical populations. But following postgeniculate lesions, some patients may perform above chance in forced-choice detection paradigms, while reporting not to see the visual events presented within their blind field. This phenomenon, termed “blindsight,” is intriguing because it demonstrates a dissociation between detection and perception. It is possible, however, for a blindsight patient to have some “feeling” of the occurrence of an event without seeing per se. This is termed blindsight type II to distinguish it from the type I, defined as discrimination capability in the total absence of any acknowledged awareness. Here we report on a well-studied patient, D.B., whose blindsight capabilities have been previously documented. We have found that D.B. is capable of detecting visual patterns defined by changes in luminance (first-order gratings) and those defined by contrast modulation of textured patterns (textured gratings; second-order stimuli) while being aware of the former but reporting no awareness of the latter. We have systematically investigated the parameters that could lead to visual awareness of the patterns and show that mechanisms underlying the subjective reports of visual awareness rely primarily on low spatial frequency, first-order spatial components of the image. PMID:21078979
Babiloni, Claudio; Marzano, Nicola; Soricelli, Andrea; Cordone, Susanna; Millán-Calenti, José Carlos; Del Percio, Claudio; Buján, Ana
2016-01-01
This article reviews three experiments on event-related potentials (ERPs) testing the hypothesis that primary visual consciousness (stimulus self-report) is related to enhanced cortical neural synchronization as a function of stimulus features. ERP peak latency and sources were compared between "seen" trials and "not seen" trials, respectively related and unrelated to the primary visual consciousness. Three salient features of visual stimuli were considered (visuospatial, emotional face expression, and written words). Results showed the typical visual ERP components in both "seen" and "not seen" trials. There was no statistical difference in the ERP peak latencies between the "seen" and "not seen" trials, suggesting a similar timing of the cortical neural synchronization regardless of the primary visual consciousness. In contrast, ERP sources showed differences between "seen" and "not seen" trials. For the visuospatial stimuli, the primary consciousness was related to higher activity in dorsal occipital and parietal sources at about 400 ms post-stimulus. For the emotional face expressions, there was greater activity in parietal and frontal sources at about 180 ms post-stimulus. For the written letters, there was higher activity in occipital, parietal and temporal sources at about 230 ms post-stimulus. These results hint that primary visual consciousness is associated with an enhanced cortical neural synchronization having entirely different spatiotemporal characteristics as a function of the features of the visual stimuli and, possibly, the relative qualia (i.e., visuospatial, face expression, and words). In this framework, the dorsal visual stream may be synchronized in association with the primary consciousness of visuospatial and emotional face contents. Analogously, both dorsal and ventral visual streams may be synchronized in association with the primary consciousness of linguistic contents. 
In this line of reasoning, the ensemble of the cortical neural networks underpinning the single visual features would constitute a sort of multi-dimensional palette of colors, shapes, regions of the visual field, movements, emotional face expressions, and words. The synchronization of one or more of these cortical neural networks, each with its peculiar timing, would produce the primary consciousness of one or more of the visual features of the scene. PMID:27445750
Helioviewer: A Web 2.0 Tool for Visualizing Heterogeneous Heliophysics Data
NASA Astrophysics Data System (ADS)
Hughitt, V. K.; Ireland, J.; Lynch, M. J.; Schmeidel, P.; Dimitoglou, G.; Müeller, D.; Fleck, B.
2008-12-01
Solar physics datasets are becoming larger, richer, more numerous and more distributed. Feature/event catalogs (describing objects of interest in the original data) are becoming important tools in navigating these data. In the wake of this increasing influx of data and catalogs there has been a growing need for highly sophisticated tools for accessing and visualizing this wealth of information. Helioviewer is a novel tool for integrating and visualizing disparate sources of solar and Heliophysics data. Taking advantage of the newly available power of modern web application frameworks, Helioviewer merges image and feature catalog data, and provides for Heliophysics data a familiar interface not unlike Google Maps or MapQuest. In addition to streamlining the process of combining heterogeneous Heliophysics datatypes such as full-disk images and coronagraphs, the inclusion of visual representations of automated and human-annotated features provides the user with an integrated and intuitive view of how different factors may be interacting on the Sun. Currently, Helioviewer offers images from The Extreme ultraviolet Imaging Telescope (EIT), The Large Angle and Spectrometric COronagraph experiment (LASCO) and the Michelson Doppler Imager (MDI) instruments onboard The Solar and Heliospheric Observatory (SOHO), as well as The Transition Region and Coronal Explorer (TRACE). Helioviewer also incorporates feature/event information from the LASCO CME List, NOAA Active Regions, CACTus CME and Type II Radio Bursts feature/event catalogs. The project is undergoing continuous development with many more data sources and additional functionality planned for the near future.
A risk-based coverage model for video surveillance camera control optimization
NASA Astrophysics Data System (ADS)
Zhang, Hongzhou; Du, Zhiguo; Zhao, Xingtao; Li, Peiyue; Li, Dehua
2015-12-01
A visual surveillance system for law enforcement or police case investigation differs from traditional applications in that it is designed to monitor pedestrians, vehicles, or potential accidents. In the present work, visual surveillance risk is defined as the uncertainty of the visual information about monitored targets and events, and risk entropy is introduced to model the requirements a police surveillance task places on the quality and quantity of video information. The proposed coverage model is applied to calculate the preset FoV positions of PTZ cameras.
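The abstract does not spell out the risk-entropy formulation, so the following is a minimal sketch under the assumption that risk is the Shannon entropy of the uncertainty left by targets outside the camera's coverage, and that a preset FoV is chosen to minimize that residual entropy. All function names, probabilities, and the candidate-FoV representation are hypothetical.

```python
import math

# Hypothetical risk-entropy coverage criterion: pick the preset FoV that
# leaves the least uncertainty (entropy) about uncovered targets.

def entropy(probs):
    """Shannon entropy (bits) over a set of event probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def residual_risk(target_probs, covered):
    """Entropy contributed by targets outside the camera's field of view."""
    return entropy([p for i, p in enumerate(target_probs) if i not in covered])

def best_fov(target_probs, candidate_fovs):
    """Choose the candidate FoV (a set of covered target indices)
    whose coverage minimizes the residual risk entropy."""
    return min(candidate_fovs, key=lambda fov: residual_risk(target_probs, fov))
```

Under this reading, a PTZ camera preset that covers the high-probability targets removes most of the risk entropy, which gives the optimizer a single scalar objective to minimize.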
Bridging the semantic gap in sports
NASA Astrophysics Data System (ADS)
Li, Baoxin; Errico, James; Pan, Hao; Sezan, M. Ibrahim
2003-01-01
One of the major challenges facing current media management systems and the related applications is the so-called "semantic gap" between the rich meaning that a user desires and the shallowness of the content descriptions that are automatically extracted from the media. In this paper, we address the problem of bridging this gap in the sports domain. We propose a general framework for indexing and summarizing sports broadcast programs. The framework is based on a high-level model of sports broadcast video using the concept of an event, defined according to domain-specific knowledge for different types of sports. Within this general framework, we develop automatic event detection algorithms that are based on automatic analysis of the visual and aural signals in the media. We have successfully applied the event detection algorithms to different types of sports including American football, baseball, Japanese sumo wrestling, and soccer. Event modeling and detection contribute to the reduction of the semantic gap by providing rudimentary semantic information obtained through media analysis. We further propose a novel approach, which makes use of independently generated rich textual metadata, to fill the gap completely through synchronization of the information-laden textual data with the basic event segments. An MPEG-7 compliant prototype browsing system has been implemented to demonstrate semantic retrieval and summarization of sports video.
Live Imaging of Meiosis I in Late-Stage Drosophila melanogaster Oocytes.
Hughes, Stacie E; Hawley, R Scott
2017-01-01
Drosophila melanogaster has been studied for a century as a genetic model to understand recombination, chromosome segregation, and the basic rules of inheritance. However, it has only been about 25 years since the events that occur during nuclear envelope breakdown, spindle assembly, and chromosome orientation during D. melanogaster female meiosis I were first visualized by fixed cytological methods (Theurkauf and Hawley, J Cell Biol 116:1167-1180, 1992). Although these fixed cytological studies revealed many important details about the events that occur during meiosis I, they failed to elucidate the timing or order of these events. The development of protocols for live imaging of meiotic events within the oocyte has enabled collection of real-time information on the kinetics and dynamics of spindle assembly, as well as the behavior of chromosomes during prometaphase I. Here, we describe a method to visualize spindle assembly and chromosome movement during meiosis I by injecting fluorescent dyes to label microtubules and DNA into stage 12-14 oocytes. This method enables the events during Drosophila female meiosis I, such as spindle assembly and chromosome movement, to be observed in vivo, regardless of genetic background, with exceptional spatial and temporal resolution.
Fandom Biases Retrospective Judgments Not Perception.
Huff, Markus; Papenmeier, Frank; Maurer, Annika E; Meitz, Tino G K; Garsoffky, Bärbel; Schwan, Stephan
2017-02-24
Attitudes and motivations have been shown to affect the processing of visual input, indicating that observers may each literally see a given situation in a different way. Yet, in real life, processing information in an unbiased manner is considered to be of high adaptive value. Attitudinal and motivational effects were found for attention, characterization, categorization, and memory. On the other hand, for dynamic real-life events, visual processing has been found to be highly synchronous among viewers. Thus, while in a seminal study fandom, as a particularly strong case of attitudes, did bias judgments of a sports event, it left open the question of whether attitudes bias prior processing stages. Here, we investigated influences of fandom during the live TV broadcasting of the 2013 UEFA-Champions-League Final regarding attention, event segmentation, immediate and delayed cued recall, as well as affect, memory confidence, and retrospective judgments. Even though we replicated biased retrospective judgments, we found that eye-movements, event segmentation, and cued recall were largely similar across both groups of fans. Our findings demonstrate that, while highly involving sports events are interpreted in a fan-dependent way, at initial stages they are processed in an unbiased manner.
Walking through doorways causes forgetting: environmental integration.
Radvansky, Gabriel A; Tamplin, Andrea K; Krawietz, Sabine A
2010-12-01
Memory for objects declines when people move from one location to another (the location updating effect). However, it is unclear whether this is attributable to event model updating or to task demands. The focus here was on the degree of integration for probed-for information with the experienced environment. In prior research, the probes were verbal labels of visual objects. Experiment 1 assessed whether this was a consequence of an item-probe mismatch, as with transfer-appropriate processing. Visual probes were used to better coordinate what was seen with the nature of the memory probe. In Experiment 2, people received additional word pairs to remember, which were less well integrated with the environment, to assess whether the probed-for information needed to be well integrated. The results showed location updating effects in both cases. These data are consistent with an event cognition view that mental updating of a dynamic event disrupts memory.
Helioviewer.org: Enhanced Solar & Heliospheric Data Visualization
NASA Astrophysics Data System (ADS)
Stys, J. E.; Ireland, J.; Hughitt, V. K.; Mueller, D.
2013-12-01
Helioviewer.org enables the simultaneous exploration of multiple heterogeneous solar data sets. In the latest iteration of this open-source web application, Hinode XRT and Yohkoh SXT join SDO, SOHO, STEREO, and PROBA2 as supported data sources. A newly enhanced user interface expands the utility of Helioviewer.org by adding annotations backed by data from the Heliospheric Events Knowledgebase (HEK). Helioviewer.org can now overlay solar feature and event data via interactive marker pins, extended regions, data labels, and information panels. An interactive timeline provides enhanced browsing and visualization of image data set coverage and solar events. The addition of a size-of-the-Earth indicator provides a sense of the scale of solar and heliospheric features for education and public outreach purposes. Tight integration with the Virtual Solar Observatory and the SDO AIA cutout service enables solar physicists to seamlessly import science data into their SSW/IDL or SunPy/Python data analysis environments.
Naturalistic Cycling Study: Identifying Risk Factors for On-Road Commuter Cyclists
Johnson, Marilyn; Charlton, Judith; Oxley, Jennifer; Newstead, Stuart
2010-01-01
The study aim was to identify risk factors for collisions/near-collisions involving on-road commuter cyclists and drivers. A naturalistic cycling study was conducted in Melbourne, Australia, with cyclists wearing helmet-mounted video cameras. Video recordings captured cyclists’ perspective of the road and traffic behaviours including head checks, reactions and manoeuvres. The 100-car naturalistic driving study analysis technique was adapted for data analysis and events were classified by severity: collision, near-collision and incident. Participants were adult cyclists and each filmed 12 hours of commuter cycling trips over a 4-week period. In total, 127 hours and 38 minutes were analysed for 13 participants, 54 events were identified: 2 collisions, 6 near-collisions and 46 incidents. Prior to events, 88.9% of cyclists travelled in a safe/legal manner. Sideswipe was the most frequent event type (40.7%). Most events occurred at an intersection/intersection-related location (70.3%). The vehicle driver was judged at fault in the majority of events (87.0%) and no post-event driver reaction was observed (83.3%). Cross tabulations revealed significant associations between event severity and: cyclist reaction, cyclist post-event manoeuvre, pre-event driver behaviour, other vehicle involved, driver reaction, visual obstruction, cyclist head check (left), event type and vehicle location (p<0.05). Frequent head checks suggest cyclists had high situational awareness and their reactive behaviour to driver actions led to successful avoidance of collisions/near-collisions. Strategies to improve driver awareness of on-road cyclists and to indicate early before turning/changing lanes when sharing the roadway with cyclists are discussed. Findings will contribute to the development of effective countermeasures to reduce cyclist trauma. PMID:21050610
Cleansing the Superdome: The Paradox of Purity and Post-Katrina Guilt
ERIC Educational Resources Information Center
Grano, Daniel A.; Zagacki, Kenneth S.
2011-01-01
The reopening of the New Orleans Superdome after Hurricane Katrina on Monday Night Football dramatized problematic rhetorical, visual, and spatial norms of purification rituals bound up in what Burke calls the paradox of purity. Hurricane Katrina was significant as a visually traumatic event in large part because it signified the ghetto as a…
ERIC Educational Resources Information Center
Wedler, Henry B.; Boyes, Lee; Davis, Rebecca L.; Flynn, Dan; Franz, Annaliese; Hamann, Christian S.; Harrison, Jason G.; Lodewyk, Michael W.; Milinkevich, Kristin A.; Shaw, Jared T.; Tantillo, Dean J.; Wang, Selina C.
2014-01-01
Curricula for three chemistry camp experiences for blind and visually impaired (BVI) individuals that incorporated single- and multiday activities and experiments accessible to BVI students are described. Feedback on the camps from students, mentors, and instructors indicates that these events allowed BVI students, who in many cases have been…
A Multidisciplinary Approach to Literacy through Picture Books and Drama
ERIC Educational Resources Information Center
Burke, Anne; Peterson, Shelley Stagg
2007-01-01
Anne Burke and Shelley Stagg Peterson argue that "picture books offer a medium for teaching visual and critical literacy across the curriculum." To support this idea, they describe a multidisciplinary unit on World War II that pushes high school students to utilize visual and print literacies to analyze, comprehend, and relate to public events and…
The Time-Course of Auditory and Visual Distraction Effects in a New Crossmodal Paradigm
ERIC Educational Resources Information Center
Bendixen, Alexandra; Grimm, Sabine; Deouell, Leon Y.; Wetzel, Nicole; Mädebach, Andreas; Schröger, Erich
2010-01-01
Vision often dominates audition when attentive processes are involved (e.g., the ventriloquist effect), yet little is known about the relative potential of the two modalities to initiate a "break through of the unattended". The present study was designed to systematically compare the capacity of task-irrelevant auditory and visual events to…
ERIC Educational Resources Information Center
Munoz-Ruata, J.; Caro-Martinez, E.; Perez, L. Martinez; Borja, M.
2010-01-01
Background: Perception disorders are frequently observed in persons with intellectual disability (ID) and their influence on cognition has been discussed. The objective of this study is to clarify the mechanisms behind these alterations by analysing the visual event related potentials early component, the N1 wave, which is related to perception…
Neural Correlates of Encoding Predict Infants' Memory in the Paired-Comparison Procedure
ERIC Educational Resources Information Center
Snyder, Kelly A.
2010-01-01
The present study used event-related potentials (ERPs) to monitor infant brain activity during the initial encoding of a previously novel visual stimulus, and examined whether ERP measures of encoding predicted infants' subsequent performance on a visual memory task (i.e., the paired-comparison task). A late slow wave component of the ERP measured…
Attention and Memory Play Different Roles in Syntactic Choice during Sentence Production
ERIC Educational Resources Information Center
Myachykov, Andriy; Garrod, Simon; Scheepers, Christoph
2018-01-01
Attentional control of referential information is an important contributor to the structure of discourse. We investigated how attention and memory interplay during visually situated sentence production. We manipulated speakers' attention to the agent or the patient of a described event by means of a referential or a dot visual cue. We also…
Sanchez-Avila, Ronald Mauricio; Merayo-Lloves, Jesus; Riestra, Ana Cristina; Anitua, Eduardo; Muruzabal, Francisco; Orive, Gorka; Fernández-Vega, Luis
2017-06-01
The objective was to provide preliminary information about the efficacy and safety of immunologically safe plasma rich in growth factors (immunosafe PRGF) eye drops in the treatment of moderate to severe dry eye in patients with primary and secondary Sjögren's syndrome (SS) and to analyze the influence of several variables on treatment outcomes. This retrospective study included patients with SS. All patients were treated with PRGF eye drops that had previously been processed to reduce their immunologic component content (immunosafe PRGF). Ocular Surface Disease Index (OSDI) scale, best-corrected visual acuity (BCVA), visual analog scale (VAS) frequency, and VAS severity outcome measures were evaluated before and after treatment with immunosafe PRGF. The potential influence of some patient clinical variables on results was also assessed. A safety assessment was also performed, with all adverse events reported. Twenty-six patients (12 with primary SS and 14 with secondary SS), with a total of 52 affected eyes, were included and evaluated. Immunosafe PRGF treatment showed a significant reduction (P < 0.05) in OSDI scale (41.86%), in BCVA (62.97%), in VAS frequency (34.75%), and in VAS severity (41.50%). BCVA and VAS frequency scores improved significantly (P < 0.05) after concomitant treatment of PRGF with corticosteroids. Only 2 adverse events were reported in 2 patients (7.7% of patients). Signs and symptoms of dry eye syndrome in patients with SS were reduced after treatment with PRGF-Endoret eye drops. Immunosafe PRGF-Endoret is safe and effective for treating patients with primary and secondary SS.
Booth, Ashley J; Elliott, Mark T
2015-01-01
The ease of synchronizing movements to a rhythmic cue is dependent on the modality of the cue presentation: timing accuracy is much higher when synchronizing with discrete auditory rhythms than an equivalent visual stimulus presented through flashes. However, timing accuracy is improved if the visual cue presents spatial as well as temporal information (e.g., a dot following an oscillatory trajectory). Similarly, when synchronizing with an auditory target metronome in the presence of a second visual distracting metronome, the distraction is stronger when the visual cue contains spatial-temporal information rather than temporal only. The present study investigates individuals' ability to synchronize movements to a temporal-spatial visual cue in the presence of same-modality temporal-spatial distractors. Moreover, we investigated how increasing the number of distractor stimuli impacted on maintaining synchrony with the target cue. Participants made oscillatory vertical arm movements in time with a vertically oscillating white target dot centered on a large projection screen. The target dot was surrounded by 2, 8, or 14 distractor dots, which had an identical trajectory to the target but at a phase lead or lag of 0, 100, or 200 ms. We found participants' timing performance was only affected in the phase-lead conditions and when there were large numbers of distractors present (8 and 14). This asymmetry suggests participants still rely on salient events in the stimulus trajectory to synchronize movements. Subsequently, distractions occurring in the window of attention surrounding those events have the maximum impact on timing performance.
Viewing the dynamics and control of visual attention through the lens of electrophysiology
Woodman, Geoffrey F.
2013-01-01
How we find what we are looking for in complex visual scenes is a seemingly simple ability that has taken half a century to unravel. The first study to use the term visual search showed that as the number of objects in a complex scene increases, observers’ reaction times increase proportionally (Green and Anderson, 1956). This observation suggests that our ability to process the objects in the scenes is limited in capacity. However, if it is known that the target will have a certain feature attribute, for example, that it will be red, then only an increase in the number of red items increases reaction time. This observation suggests that we can control which visual inputs, such as those defined by the color red, receive the benefit of our limited capacity to recognize the objects we seek. The nature of the mechanisms that underlie these basic phenomena in the literature on visual search has been more difficult to determine definitively. In this paper, I discuss how electrophysiological methods have provided us with the necessary tools to understand the nature of the mechanisms that give rise to the effects observed in the first visual search paper. I begin by describing how recordings of event-related potentials from humans and nonhuman primates have shown us how attention is deployed to possible target items in complex visual scenes. Then, I discuss how event-related potential experiments have allowed us to directly measure the memory representations that are used to guide these deployments of attention to items with target-defining features. PMID:23357579
Liu, Hong; Zhang, Gaoyan; Liu, Baolin
2017-04-01
In the Chinese language, a polyphone is a kind of special character that has more than one pronunciation, with each pronunciation corresponding to a different meaning. Here, we aimed to reveal the cognitive processing of audio-visual information integration of polyphones in a sentence context using the event-related potential (ERP) method. Sentences ending with polyphones were presented to subjects simultaneously in both an auditory and a visual modality. Four experimental conditions were set in which the visual presentations were the same, but the pronunciations of the polyphones were: the correct pronunciation; another pronunciation of the polyphone; a semantically appropriate pronunciation but not the pronunciation of the polyphone; or a semantically inappropriate pronunciation but also not the pronunciation of the polyphone. The behavioral results demonstrated significant differences in response accuracies when judging the semantic meanings of the audio-visual sentences, which reflected the different demands on cognitive resources. The ERP results showed that in the early stage, abnormal pronunciations were represented by the amplitude of the P200 component. Interestingly, because the phonological information mediated access to the lexical semantics, the amplitude and latency of the N400 component changed linearly across conditions, which may reflect the gradually increased semantic mismatch in the four conditions when integrating the auditory pronunciation with the visual information. Moreover, the amplitude of the late positive shift (LPS) showed a significant correlation with the behavioral response accuracies, demonstrating that the LPS component reveals the demand of cognitive resources for monitoring and resolving semantic conflicts when integrating the audio-visual information.
Heim, Stefan; Weidner, Ralph; von Overheidt, Ann-Christin; Tholen, Nicole; Grande, Marion; Amunts, Katrin
2014-03-01
Phonological and visual dysfunctions may result in reading deficits like those encountered in developmental dyslexia. Here, we use a novel approach to induce similar reading difficulties in normal readers in an event-related fMRI study, thus systematically investigating which brain regions support the orthographic-phonological (e.g. grapheme-to-phoneme conversion, GPC) vs. the visual processing pathway. Based upon a previous behavioural study (Tholen et al. 2011), the retrieval of phonemes from graphemes was manipulated by lowering the identifiability of letters in familiar vs. unfamiliar shapes. Visual word and letter processing was impeded by presenting the letters of a word in a moving, non-stationary manner. FMRI revealed that the visual condition activated cytoarchitectonically defined area hOC5 in the magnocellular pathway and area 7A in the right mesial parietal cortex. In contrast, the grapheme manipulation revealed different effects localised predominantly in the bilateral inferior frontal gyrus (left cytoarchitectonic area 44; right area 45) and inferior parietal lobule (including areas PF/PFm), regions that have been demonstrated to show abnormal activation in dyslexic as compared to normal readers. This pattern of activation bears close resemblance to recent findings in dyslexic samples, both behaviourally and with respect to the neurofunctional activation patterns. The novel paradigm may thus prove useful in future studies to understand reading problems related to distinct pathways, potentially also providing a link to the understanding of real reading impairments in dyslexia.
Romero-Rivas, Carlos; Vera-Constán, Fátima; Rodríguez-Cuadrado, Sara; Puigcerver, Laura; Fernández-Prieto, Irune; Navarra, Jordi
2018-05-10
Musical melodies have "peaks" and "valleys". Although the vertical component of pitch and music is well-known, the mechanisms underlying its mental representation still remain elusive. We show evidence regarding the importance of previous experience with melodies for crossmodal interactions to emerge. The impact of these crossmodal interactions on other perceptual and attentional processes was also studied. Melodies including two tones with different frequency (e.g., E4 and D3) were repeatedly presented during the study. These melodies could either generate strong predictions (e.g., E4-D3-E4-D3-E4-[D3]) or not (e.g., E4-D3-E4-E4-D3-[?]). After the presentation of each melody, the participants had to judge the colour of a visual stimulus that appeared in a position that was, according to the traditional vertical connotations of pitch, either congruent (e.g., high-low-high-low-[up]), incongruent (high-low-high-low-[down]) or unpredicted with respect to the melody. Behavioural and electroencephalographic responses to the visual stimuli were obtained. Congruent visual stimuli elicited faster responses at the end of the experiment than at the beginning. Additionally, incongruent visual stimuli that broke the spatial prediction generated by the melody elicited larger P3b amplitudes (reflecting 'surprise' responses). Our results suggest that the passive (but repeated) exposure to melodies elicits spatial predictions that modulate the processing of other sensory events. Copyright © 2018 Elsevier Ltd. All rights reserved.
Forecasting and visualization of wildfires in a 3D geographical information system
NASA Astrophysics Data System (ADS)
Castrillón, M.; Jorge, P. A.; López, I. J.; Macías, A.; Martín, D.; Nebot, R. J.; Sabbagh, I.; Quintana, F. M.; Sánchez, J.; Sánchez, A. J.; Suárez, J. P.; Trujillo, A.
2011-03-01
This paper describes a wildfire forecasting application based on a 3D virtual environment and a fire simulation engine. A novel open-source framework is presented for the development of 3D graphics applications over large geographic areas, offering high-performance 3D visualization and powerful interaction tools for the Geographic Information Systems (GIS) community. The application includes a remote module that allows simultaneous connections of several users for monitoring a real wildfire event. The system is able to make a realistic composition of what is really happening in the area of the wildfire with dynamic 3D objects and the location of human and material resources in real time, providing a new perspective from which to analyze the wildfire information. The user can simulate and visualize the propagation of a fire on the terrain, integrating spatial information on topography and vegetation types with weather and wind data. The application communicates with a remote web service that is in charge of the simulation task. The user may specify several parameters through a friendly interface before the application sends the information to the remote server responsible for carrying out the wildfire forecast using the FARSITE simulation model. During the process, the server connects to different external resources to obtain up-to-date meteorological data. The client application implements a realistic 3D visualization of the fire evolution on the landscape. A Level Of Detail (LOD) strategy helps improve the performance of the visualization system.
Jao Keehn, R Joanne; Sanchez, Sandra S; Stewart, Claire R; Zhao, Weiqi; Grenesko-Stevens, Emily L; Keehn, Brandon; Müller, Ralph-Axel
2017-01-01
Autism spectrum disorders (ASD) are pervasive developmental disorders characterized by impairments in language development and social interaction, along with restricted and stereotyped behaviors. These behaviors often include atypical responses to sensory stimuli; some children with ASD are easily overwhelmed by sensory stimuli, while others may seem unaware of their environment. Vision and audition are two sensory modalities important for social interactions and language, and are differentially affected in ASD. In the present study, 16 children and adolescents with ASD and 16 typically developing (TD) participants matched for age, gender, nonverbal IQ, and handedness were tested using a mixed event-related/blocked functional magnetic resonance imaging paradigm to examine basic perceptual processes that may form the foundation for later-developing cognitive abilities. Auditory (high or low pitch) and visual conditions (dot located high or low in the display) were presented, and participants indicated whether the stimuli were "high" or "low." Results for the auditory condition showed downregulated activity of the visual cortex in the TD group, but upregulation in the ASD group. This atypical activity in visual cortex was associated with autism symptomatology. These findings suggest atypical crossmodal (auditory-visual) modulation linked to sociocommunicative deficits in ASD, in agreement with the general hypothesis of low-level sensorimotor impairments affecting core symptomatology. Autism Res 2017, 10: 130-143. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
Applying Spatial Audio to Human Interfaces: 25 Years of NASA Experience
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Wenzel, Elizabeth M.; Godfrey, Martine; Miller, Joel D.; Anderson, Mark R.
2010-01-01
From the perspective of human factors engineering, the inclusion of spatial audio within a human-machine interface is advantageous from several perspectives. Demonstrated benefits include the ability to monitor multiple streams of speech and non-speech warning tones using a cocktail party advantage, and for aurally-guided visual search. Other potential benefits include the spatial coordination and interaction of multimodal events, and evaluation of new communication technologies and alerting systems using virtual simulation. Many of these technologies were developed at NASA Ames Research Center, beginning in 1985. This paper reviews examples and describes the advantages of spatial sound in NASA-related technologies, including space operations, aeronautics, and search and rescue. The work has involved hardware and software development as well as basic and applied research.
Behavioral and Brain Measures of Phasic Alerting Effects on Visual Attention.
Wiegand, Iris; Petersen, Anders; Finke, Kathrin; Bundesen, Claus; Lansner, Jon; Habekost, Thomas
2017-01-01
In the present study, we investigated effects of phasic alerting on visual attention in a partial report task, in which half of the displays were preceded by an auditory warning cue. Based on the computational Theory of Visual Attention (TVA), we estimated parameters of spatial and non-spatial aspects of visual attention and measured event-related lateralizations (ERLs) over visual processing areas. We found that the TVA parameter sensory effectiveness a, which is thought to reflect visual processing capacity, significantly increased with phasic alerting. By contrast, the distribution of visual processing resources according to task relevance and spatial position, as quantified in the parameters top-down control α and spatial bias w_index, was not modulated by phasic alerting. On the electrophysiological level, the latencies of ERLs in response to the task displays were reduced following the warning cue. These results suggest that phasic alerting facilitates visual processing in a general, unselective manner and that this effect originates in early stages of visual information processing.
Situated sentence processing: the coordinated interplay account and a neurobehavioral model.
Crocker, Matthew W; Knoeferle, Pia; Mayberry, Marshall R
2010-03-01
Empirical evidence demonstrating that sentence meaning is rapidly reconciled with the visual environment has been broadly construed as supporting the seamless interaction of visual and linguistic representations during situated comprehension. Based on recent behavioral and neuroscientific findings, however, we argue for the more deeply rooted coordination of the mechanisms underlying visual and linguistic processing, and for jointly considering the behavioral and neural correlates of scene-sentence reconciliation during situated comprehension. The Coordinated Interplay Account (CIA; Knoeferle, P., & Crocker, M. W. (2007). The influence of recent scene events on spoken comprehension: Evidence from eye movements. Journal of Memory and Language, 57(4), 519-543) asserts that incremental linguistic interpretation actively directs attention in the visual environment, thereby increasing the salience of attended scene information for comprehension. We review behavioral and neuroscientific findings in support of the CIA's three processing stages: (i) incremental sentence interpretation, (ii) language-mediated visual attention, and (iii) the on-line influence of non-linguistic visual context. We then describe a recently developed connectionist model which both embodies the central CIA proposals and has been successfully applied in modeling a range of behavioral findings from the visual world paradigm (Mayberry, M. R., Crocker, M. W., & Knoeferle, P. (2009). Learning to attend: A connectionist model of situated language comprehension. Cognitive Science). Results from a new simulation suggest the model also correlates with event-related brain potentials elicited by the immediate use of visual context for linguistic disambiguation (Knoeferle, P., Habets, B., Crocker, M. W., & Münte, T. F. (2008). Visual scenes trigger immediate syntactic reanalysis: Evidence from ERPs during situated spoken comprehension. Cerebral Cortex, 18(4), 789-795). 
Finally, we argue that the mechanisms underlying interpretation, visual attention, and scene apprehension are not only in close temporal synchronization, but have co-adapted to optimize real-time visual grounding of situated spoken language, thus facilitating the association of linguistic, visual and motor representations that emerge during the course of our embodied linguistic experience in the world. Copyright 2009 Elsevier Inc. All rights reserved.
Were they in the loop during automated driving? Links between visual attention and crash potential.
Louw, Tyron; Madigan, Ruth; Carsten, Oliver; Merat, Natasha
2017-08-01
A proposed advantage of vehicle automation is that it relieves drivers from the moment-to-moment demands of driving, to engage in other, non-driving related, tasks. However, it is important to gain an understanding of drivers' capacity to resume manual control, should such a need arise. As automation removes vehicle control-based measures as a performance indicator, other metrics must be explored. This driving simulator study, conducted under the European Commission (EC) funded AdaptIVe project, assessed drivers' gaze fixations during partially-automated (SAE Level 2) driving, on approach to critical and non-critical events. Using a between-participant design, 75 drivers experienced automation with one of five out-of-the-loop (OOTL) manipulations, which used different levels of screen visibility and secondary tasks to induce varying levels of engagement with the driving task: 1) no manipulation, 2) manipulation by light fog, 3) manipulation by heavy fog, 4) manipulation by heavy fog plus a visual task, 5) no manipulation plus an n-back task. The OOTL manipulations influenced drivers' first point of gaze fixation after they were asked to attend to an evolving event. Differences resolved within one second and visual attention allocation adapted with repeated events, yet crash outcome was not different between OOTL manipulation groups. Drivers who crashed in the first critical event showed an erratic pattern of eye fixations towards the road centre on approach to the event, while those who did not demonstrated a more stable pattern. Automated driving systems should be able to direct drivers' attention to hazards no less than 6 seconds in advance of an adverse outcome.
Kemmer, Laura; Coulson, Seana; Kutas, Marta
2014-02-01
Despite indications in the split-brain and lesion literatures that the right hemisphere is capable of some syntactic analysis, few studies have investigated right hemisphere contributions to syntactic processing in people with intact brains. Here we used the visual half-field paradigm in healthy adults to examine each hemisphere's processing of correct and incorrect grammatical number agreement marked either lexically, e.g., antecedent/reflexive pronoun ("The grateful niece asked herself/*themselves…") or morphologically, e.g., subject/verb ("Industrial scientists develop/*develops…"). For reflexives, response times and accuracy of grammaticality decisions suggested similar processing regardless of visual field of presentation. In the subject/verb condition, we observed similar response times and accuracies for central and right visual field (RVF) presentations. For left visual field (LVF) presentation, response times were longer and accuracy rates were reduced relative to RVF presentation. An event-related brain potential (ERP) study using the same materials revealed similar ERP responses to the reflexive pronouns in the two visual fields, but very different ERP effects to the subject/verb violations. For lexically marked violations on reflexives, P600 was elicited by stimuli in both the LVF and RVF; for morphologically marked violations on verbs, P600 was elicited only by RVF stimuli. These data suggest that both hemispheres can process lexically marked pronoun agreement violations, and do so in a similar fashion. Morphologically marked subject/verb agreement errors, however, showed a distinct LH advantage. Copyright © 2013 Elsevier B.V. All rights reserved.
Zhang, Qiong; Shi, Jiannong; Luo, Yuejia; Zhao, Daheng; Yang, Jie
2006-05-15
To investigate the differences in event-related potential parameters related to children's intelligence, we selected 15 individuals from an experimental class of intellectually gifted children and 13 intellectually average children as controls, who completed three types of visual search task (Chinese words, English letters and Arabic numbers). We recorded the electroencephalogram and calculated the peak latencies and amplitudes. Our results suggest comparatively increased P3 amplitudes and shorter P3 latencies in brighter individuals than in less intelligent individuals, but this expected neural efficiency effect interacted with task content. The differences were explained by a more spatially and temporally coordinated neural network in more intelligent children.
Analysis of Actin-Based Intracellular Trafficking in Pollen Tubes.
Jiang, Yuxiang; Zhang, Meng; Huang, Shanjin
2017-01-01
Underlying rapid and directional pollen tube growth is the active intracellular trafficking system that carries materials necessary for cell wall synthesis and membrane expansion to the expanding point of the pollen tube. The actin cytoskeleton has been shown to control various intracellular trafficking events in the pollen tube, but the underlying cellular and molecular mechanisms remain poorly understood. To better understand how the actin cytoskeleton is involved in the regulation of intracellular trafficking events, we need to establish assays to visualize and quantify the distribution and dynamics of organelles, vesicles, or secreted proteins. In this chapter, we introduce methods regarding the visualization and quantification of the distribution and dynamics of organelles or vesicles in pollen tubes.
Reduced Misinformation Effects Following Saccadic Bilateral Eye Movements
ERIC Educational Resources Information Center
Parker, Andrew; Buckley, Sharon; Dagnall, Neil
2009-01-01
The effects of saccadic bilateral (horizontal) eye movements on memory for a visual event narrative were investigated. In the study phase, participants were exposed to a set of pictures accompanied by a verbal commentary describing the events depicted in the pictures. Next, the participants were asked either misleading or control questions about…
Prefrontal Cortex Is Critical for Contextual Processing: Evidence from Brain Lesions
ERIC Educational Resources Information Center
Fogelson, Noa; Shah, Mona; Scabini, Donatella; Knight, Robert T.
2009-01-01
We investigated the role of prefrontal cortex (PFC) in local contextual processing using a combined event-related potentials and lesion approach. Local context was defined as the occurrence of a short predictive series of visual stimuli occurring before delivery of a target event. Targets were preceded by either randomized sequences of standards…
Event-Related fMRI of Category Learning: Differences in Classification and Feedback Networks
ERIC Educational Resources Information Center
Little, Deborah M.; Shin, Silvia S.; Sisco, Shannon M.; Thulborn, Keith R.
2006-01-01
Eighteen healthy young adults underwent event-related (ER) functional magnetic resonance imaging (fMRI) of the brain while performing a visual category learning task. The specific category learning task required subjects to extract the rules that guide classification of quasi-random patterns of dots into categories. Following each classification…
Event Related Brain Potentials and Cognitive Processing: Implications for Navy Training.
ERIC Educational Resources Information Center
Lewis, Gregory W.; And Others
The cognitive styles, aptitudes, and abilities of 50 right-handed subjects were measured through a battery of paper-and-pencil tests to determine the feasibility of using event related brain potentials (ERPs) in the development of adaptive training techniques keyed to the information processing styles of individual students. Visual, auditory, and…
Working Memory Encoding Delays Top-Down Attention to Visual Cortex
ERIC Educational Resources Information Center
Scalf, Paige E.; Dux, Paul E.; Marois, Rene
2011-01-01
The encoding of information from one event into working memory can delay high-level, central decision-making processes for subsequent events [e.g., Jolicoeur, P., & Dell'Acqua, R. The demonstration of short-term consolidation. "Cognitive Psychology, 36", 138-202, 1998, doi:10.1006/cogp.1998.0684]. Working memory, however, is also believed to…