Science.gov

Sample records for 3-stimulus visual oddball

  1. Facilitating masked visual target identification with auditory oddball stimuli.

    PubMed

    Ngo, Mary Kim; Spence, Charles

    2012-08-01

    When identifying a rapidly masked visual target display in a stream of visual distractor displays, a high-frequency tone (presented in synchrony with the target display) in a stream of low-tone distractors results in better performance than when the same low tone accompanies each visual display (Ngo and Spence in Atten Percept Psychophys 72:1938-1947, 2010; Vroomen and de Gelder in J Exp Psychol Hum 26:1583-1590, 2000). In the present study, we tested three oddball conditions: a louder tone presented amongst quieter tones, a quieter tone presented amongst louder tones, and the absence of a tone, within an otherwise identical tone sequence. Across three experiments, all three oddball conditions resulted in the crossmodal facilitation of participants' visual target identification performance. These results therefore suggest that salient oddball stimuli in the form of deviating tones, when synchronized with the target, may be sufficient to capture participants' attention and facilitate visual target identification. The fact that the absence of a sound in an otherwise-regular sequence of tones also facilitated performance suggests that multisensory integration cannot provide an adequate account for the 'freezing' effect. Instead, an attentional capture account is proposed to account for the benefits of oddball cuing in Vroomen and de Gelder's task.

  2. Position but not color deviants result in visual mismatch negativity in an active oddball task.

    PubMed

    Berti, Stefan

    2009-05-06

    Changes in the visual environment can be detected automatically. This function is provided by the sensory systems and is demonstrated, for instance, by the pop-out phenomenon. Automatic change detection is also observable within visual oddball paradigms, in which rare changes are introduced in an irrelevant stimulus feature; the detection of deviant stimuli is accompanied by a negative component (the so-called visual mismatch negativity) in the human event-related brain potential. In this study, the deviating stimulus feature was embedded in a task-relevant object presented in the focus of attention. Under these conditions, visual mismatch negativity was observed only for position deviants presented in the upper visual half field, but not for lower half field presentation or for color deviants.

  3. Dynamic information flow analysis in Vascular Dementia patients during the performance of a visual oddball task.

    PubMed

    Wang, Chao; Xu, Jin; Lou, Wutao; Zhao, Songzhen

    2014-09-19

    This study investigated information flow in patients with Vascular Dementia (VaD). Twelve VaD patients and twelve age-matched controls participated. EEG signals were recorded while subjects performed a visual oddball task, and information flow was analyzed between 9 electrodes over frontal, central, and parietal regions using the short-window Directed Transfer Function (sDTF). Compared with the healthy elderly controls, VaD patients showed a significant decline in information flow from parietal to frontal and central regions. This decline occurred mainly in the delta, theta, and lower alpha bands, from about 200 ms to 300 ms after target stimulus onset. The findings indicate impaired parietal-to-frontal and parietal-to-central connectivity in VaD patients, which may be one reason for their cognitive deficits.
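
    The sDTF used above is the short-window variant of the standard directed transfer function computed from a multivariate autoregressive (MVAR) model. As a rough, hypothetical illustration of that core computation (not the authors' implementation; window length, model order, and frequency grid are assumed), a single-window DTF in Python might look like this:

    ```python
    import numpy as np
    from statsmodels.tsa.api import VAR

    def dtf(window, fs, order=5, freqs=np.arange(1, 31)):
        """Directed transfer function for one analysis window.

        window : (n_samples, n_channels) EEG segment; the short-window variant
                 (sDTF) simply repeats this computation over sliding windows.
        Returns an (n_freqs, n_channels, n_channels) array whose [f, i, j] entry
        estimates flow from channel j to channel i at frequency freqs[f].
        """
        k = window.shape[1]
        coefs = VAR(window).fit(order).coefs          # (order, k, k) MVAR matrices
        out = np.zeros((len(freqs), k, k))
        for fi, f in enumerate(freqs):
            A = np.eye(k, dtype=complex)
            for r in range(order):
                A -= coefs[r] * np.exp(-2j * np.pi * f * (r + 1) / fs)
            H = np.linalg.inv(A)                      # spectral transfer matrix
            out[fi] = np.abs(H) / np.sqrt((np.abs(H) ** 2).sum(axis=1, keepdims=True))
        return out

    # hypothetical 2-s window, 9 channels, 200 Hz sampling
    flows = dtf(np.random.randn(400, 9), fs=200.0)
    print(flows.shape)                                # (30, 9, 9)
    ```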

  4. Automatic processing of unattended lexical information in visual oddball presentation: neurophysiological evidence

    PubMed Central

    Shtyrov, Yury; Goryainova, Galina; Tugin, Sergei; Ossadtchi, Alexey; Shestakova, Anna

    2013-01-01

    Previous electrophysiological studies of automatic language processing revealed early (100–200 ms) reflections of access to lexical characteristics of the speech signal using the so-called mismatch negativity (MMN), a negative ERP deflection elicited by infrequent irregularities in unattended repetitive auditory stimulation. In those studies, lexical processing of spoken stimuli became manifest as an enhanced ERP in response to unattended real words, as opposed to phonologically matched but meaningless pseudoword stimuli. This lexical ERP enhancement was explained by automatic activation of word memory traces realized as distributed, strongly intra-connected neuronal circuits, whose robustness guarantees memory trace activation even in the absence of attention to the spoken input. Such an account predicts automatic activation of these memory traces upon any presentation of linguistic information, irrespective of the presentation modality. As previous lexical MMN studies exclusively used auditory stimulation, we here adapted the lexical MMN paradigm to investigate early automatic lexical effects in the visual modality. In a visual oddball sequence, matched short word and pseudoword stimuli were presented tachistoscopically in the perifoveal area outside the visual focus of attention, while the subjects' attention was concentrated on a concurrent non-linguistic visual dual task at the center of the screen. Using EEG, we found a visual analogue of the lexical ERP enhancement effect, with unattended written words producing larger brain response amplitudes than matched pseudowords, starting at ~100 ms. Furthermore, we also found a significant visual MMN, reported here for the first time for unattended perifoveal lexical stimuli. The data suggest early automatic lexical processing of visually presented language, which commences rapidly and can take place outside the focus of attention. PMID:23950740

  5. Widespread cortical α-ERD accompanying visual oddball target stimuli is frequency but non-modality specific.

    PubMed

    Peng, Weiwei; Hu, Yong; Mao, Yanhui; Babiloni, Claudio

    2015-12-15

    Previous studies have shown that alpha event-related desynchronization (α-ERD) accompanies reactions to visual stimuli in the oddball paradigm, reflecting attention allocation and memory updating. The present study tested the hypothesis that it reflects a modality- and/or frequency-specific mechanism. Electroencephalography (EEG) recordings (64 channels) were performed on 18 healthy subjects during visual, auditory, somatosensory, and pain oddball paradigms. Low- and high-frequency α-rhythms were analyzed on an individual basis, and their sources were estimated by low-resolution brain electromagnetic tomography (LORETA). α-ERD, serving as an index of cortical activation, was computed at the cortical voxel level and compared across conditions (target vs. non-target), α sub-bands (lower vs. higher frequency), and modalities (visual, auditory, somatosensory, and pain). The results showed that visual α-ERD was mainly generated from occipital cortex for both target and non-target conditions. Its magnitude was enhanced across widespread cortical regions (e.g., bilateral occipital, parietal, and frontal areas) in the target condition and was greater in the high-frequency α-band. Finally, the α-ERD difference between target and non-target conditions was not larger for the visual modality than for the other modalities. These findings indicate that human high-frequency α-ERD reflects cognitive attention processes underlying the reaction to oddball target stimuli regardless of stimulus modality.
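
    As a reminder of how the ERD index itself is computed, a minimal single-channel sketch of the classic formula ERD% = 100 × (P_event − P_baseline) / P_baseline is shown below; the band edges, windows, and variable names are illustrative assumptions, not the study's LORETA-based voxel-level pipeline.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def alpha_erd(epochs, times, fs, band=(10.0, 12.0),
                  baseline=(-0.5, 0.0), window=(0.3, 0.6)):
        """Percent alpha ERD for one channel.

        epochs : (n_trials, n_samples) EEG epochs, t = 0 at stimulus onset
        times  : (n_samples,) epoch time axis in seconds
        Negative values indicate a power decrease (desynchronization) vs. baseline.
        """
        b, a = butter(4, np.asarray(band) / (fs / 2.0), btype="band")
        power = filtfilt(b, a, epochs, axis=-1) ** 2      # instantaneous band power
        power = power.mean(axis=0)                        # average across trials
        p_base = power[(times >= baseline[0]) & (times < baseline[1])].mean()
        p_event = power[(times >= window[0]) & (times < window[1])].mean()
        return 100.0 * (p_event - p_base) / p_base

    # synthetic data standing in for real target-locked epochs
    fs = 250.0
    times = np.arange(-0.5 * fs, 1.0 * fs) / fs
    print(alpha_erd(np.random.randn(60, times.size), times, fs))
    ```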

  6. Neural correlates of emotional intelligence in a visual emotional oddball task: an ERP study.

    PubMed

    Raz, Sivan; Dan, Orrie; Zysberg, Leehu

    2014-11-01

    The present study aimed to identify potential behavioral and neural correlates of Emotional Intelligence (EI) using scalp-recorded event-related potentials (ERPs). EI levels were defined according to both a self-report questionnaire and a performance-based ability test. We identified ERP correlates of emotional processing using a visual-emotional oddball paradigm, in which subjects were confronted with one frequent standard stimulus (a neutral face) and two deviant stimuli (a happy and an angry face). The effects of these faces were then compared across groups with low and high EI levels. The ERP results indicate that participants with high EI exhibited significantly greater mean amplitudes of the P1, P2, N2, and P3 ERP components in response to emotional and neutral faces at frontal, posterior-parietal, and occipital scalp locations. P1, P2, and N2 are considered indexes of attention-related processes and have been associated with early attention to emotional stimuli. The later P3 component is thought to reflect more elaborative, top-down processing of emotional information, including emotional evaluation as well as memory encoding and formation. These results may suggest greater recruitment of resources to process all emotional and non-emotional faces at early and late processing stages among individuals with higher EI. The present study underscores the usefulness of ERP methodology as a sensitive measure for the study of emotional stimulus processing in EI research.

  7. Evoked gamma band response in male adolescent subjects at high risk for alcoholism during a visual oddball task.

    PubMed

    Padmanabhapillai, Ajayan; Tang, Yongqiang; Ranganathan, Mohini; Rangaswamy, Madhavi; Jones, Kevin A; Chorlian, David B; Kamarajan, Chella; Stimus, Arthur; Kuperman, Samuel; Rohrbaugh, John; O'Connor, Sean J; Bauer, Lance O; Schuckit, Marc A; Begleiter, Henri; Porjesz, Bernice

    2006-11-01

    This study investigates early evoked gamma band activity in male adolescent subjects at high risk for alcoholism (HR; n=68) and normal controls (LR; n=27) during a visual oddball task. A time-frequency representation method was applied to the EEG data to obtain stimulus-related early evoked (phase-locked) gamma band activity (29-45 Hz), which was analyzed within a 0-150 ms time window. A significant reduction of the early evoked gamma band response in the frontal and parietal regions during target stimulus processing was observed in HR subjects compared to LR subjects. Additionally, the HR group showed less differentiation between target and non-target stimuli in both frontal and parietal regions compared to the LR group, indicating difficulty in early stimulus processing, probably due to a dysfunctional frontoparietal attentional network. The results indicate that a deficient early evoked gamma band response may precede the development of alcoholism and could be a potential endophenotypic marker of alcoholism risk.
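
    Evoked (phase-locked) band power is conventionally estimated from the across-trial average, so a very reduced sketch of a 29-45 Hz, 0-150 ms measure could look like the following. The synthetic data and simple band-pass filter are assumptions; the study used a dedicated time-frequency representation rather than this shortcut.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 256.0
    epochs = np.random.randn(100, int(fs))          # placeholder: 100 trials, 1-s epochs, onset at t = 0
    times = np.arange(epochs.shape[1]) / fs

    erp = epochs.mean(axis=0)                       # averaging keeps only phase-locked (evoked) activity
    b, a = butter(4, [29.0 / (fs / 2), 45.0 / (fs / 2)], btype="band")
    gamma = filtfilt(b, a, erp)                     # evoked gamma-band waveform

    win = (times >= 0.0) & (times < 0.150)          # 0-150 ms analysis window
    evoked_gamma_power = np.mean(gamma[win] ** 2)
    print(evoked_gamma_power)
    ```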

  8. The enhanced processing of visual novel events in females: ERP correlates from two modified three-stimulus oddball tasks.

    PubMed

    Yuan, Jiajin; Xu, Shuang; Li, Chengqiang; Yang, Jiemin; Li, Hong; Yuan, Yin; Huang, Yu

    2012-02-09

    The ability to detect and cope with unpredictable novel events is fundamental for adapting to a rapidly changing environment and ensuring the survival of the organism. Despite knowledge of gender differences in emotional processing, little is currently known about the impact of gender on the neural processing of emotion-irrelevant, novel stimuli. Using two modified three-stimulus oddball tasks and event-related potentials (ERPs), the present study investigated the impact of sex on brain processing of novel events and the associated neurophysiological correlates. With novel and non-novel control stimuli used as task-irrelevant distracters, Experiment 1 showed higher novelty rating scores and larger novelty effects in brain potentials in the 200-300 ms and 300-430 ms intervals in females compared to males. After excluding the contribution of stimulus probability, Experiment 2 continued to show significant novelty effects in response times and in the amplitudes of the 130-500 ms time windows. Most importantly, females displayed a sustained novelty effect in the late positive component (LPC) amplitudes in the 500-600 ms interval, which was not observed in males. Experiments 1 and 2 therefore demonstrate that females show enhanced brain processing of emotion-irrelevant, novel stimuli, a phenomenon that is independent of the established gender difference in infrequent stimulus processing. We suggest that our findings reflect the differential adaptive demands on females and males during evolution.

  9. Nicotine effects on brain function during a visual oddball task: a comparison between conventional and EEG-informed fMRI analysis.

    PubMed

    Warbrick, Tracy; Mobascher, Arian; Brinkmeyer, Jürgen; Musso, Francesco; Stoecker, Tony; Shah, N Jon; Fink, Gereon R; Winterer, Georg

    2012-08-01

    In a previous oddball task study, it was shown that the inclusion of electrophysiology (EEG), that is, single-trial P3 ERP parameters, in the analysis of fMRI responses can detect activation that is not apparent with conventional fMRI data modeling strategies [Warbrick, T., Mobascher, A., Brinkmeyer, J., Musso, F., Richter, N., Stoecker, T., et al. Single-trial P3 amplitude and latency informed event-related fMRI models yield different BOLD response patterns to a target detection task. Neuroimage, 47, 1532-1544, 2009]. Given that P3 is modulated by nicotine, including P3 parameters in the fMRI analysis might provide additional information about nicotine effects on brain function. A 1-mg nasal nicotine spray (0.5 mg each nostril) or placebo (pepper) spray was administered in a double-blind, placebo-controlled, within-subject, randomized, cross-over design. Simultaneous EEG-fMRI and behavioral data were recorded from 19 current smokers in response to an oddball-type visual choice RT task. Conventional general linear model analysis and single-trial P3 amplitude informed general linear model analysis of the fMRI data were performed. Comparing the nicotine with the placebo condition, reduced RTs in the nicotine condition were related to decreased BOLD responses in the conventional analysis encompassing the superior parietal lobule, the precuneus, and the lateral occipital cortex. On the other hand, reduced RTs were related to increased BOLD responses in the precentral and postcentral gyri, and ACC in the EEG-informed fMRI analysis. Our results show how integrated analyses of simultaneous EEG-fMRI data can be used to detect nicotine effects that would not have been revealed through conventional analysis of either measure in isolation. This emphasizes the significance of applying multimodal imaging methods to pharmacoimaging.
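
    The EEG-informed analysis described here amounts to adding single-trial P3 amplitudes as a parametric modulator in the fMRI general linear model. A schematic numpy sketch of that idea follows; the HRF approximation, onsets, amplitudes, and voxel time course are all placeholders, and the original work used standard neuroimaging software rather than this hand-rolled design matrix.

    ```python
    import numpy as np
    from scipy.stats import gamma

    def hrf(t):
        """Rough canonical double-gamma haemodynamic response function."""
        return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

    tr, n_scans, dt = 2.0, 300, 0.1
    frame_times = np.arange(n_scans) * tr
    hires = np.arange(0, n_scans * tr, dt)

    # hypothetical trial onsets (s) and mean-centred single-trial P3 amplitudes
    onsets = np.arange(10.0, 580.0, 12.0)
    p3_amp = np.random.randn(onsets.size)
    p3_amp -= p3_amp.mean()

    stick_main = np.zeros_like(hires)
    stick_mod = np.zeros_like(hires)
    idx = np.round(onsets / dt).astype(int)
    stick_main[idx] = 1.0                       # unmodulated task regressor
    stick_mod[idx] = p3_amp                     # EEG-informed parametric modulator

    h = hrf(np.arange(0.0, 32.0, dt))
    main_reg = np.convolve(stick_main, h)[: hires.size]
    mod_reg = np.convolve(stick_mod, h)[: hires.size]

    # downsample to scan times and fit the GLM for one (placeholder) voxel time course
    X = np.column_stack([np.interp(frame_times, hires, main_reg),
                         np.interp(frame_times, hires, mod_reg),
                         np.ones(n_scans)])
    bold_signal = np.random.randn(n_scans)
    beta, *_ = np.linalg.lstsq(X, bold_signal, rcond=None)
    print(beta)                                 # beta[1]: P3-amplitude-modulated effect
    ```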

  10. Decomposing delta, theta, and alpha time–frequency ERP activity from a visual oddball task using PCA

    PubMed Central

    Bernat, Edward M.; Malone, Stephen M.; Williams, William J.; Patrick, Christopher J.; Iacono, William G.

    2008-01-01

    Objective: Time–frequency (TF) analysis has become an important tool for assessing electrical and magnetic brain activity from event-related paradigms. In electrical potential data, theta and delta activities have been shown to underlie P300 activity, and alpha has been shown to be inhibited during P300 activity. Measures of delta, theta, and alpha activity are commonly taken from TF surfaces. However, methods for extracting the relevant activity rarely go beyond taking means of windows on the surface, analogous to measuring activity within a defined P300 window in time-only signal representations. The current objective was to use a data-driven method to derive relevant TF components from event-related potential data from a large number of participants in an oddball paradigm. Methods: A recently developed PCA approach was employed to extract TF components [Bernat, E. M., Williams, W. J., and Gehring, W. J. (2005). Decomposing ERP time-frequency energy using PCA. Clin Neurophysiol, 116(6), 1314–1334] from an ERP dataset of 2068 17-year-olds (979 males). TF activity was taken from both individual trials and condition averages. Activity spanning frequencies from 0 to 14 Hz and times from stimulus onset to 1312.5 ms was decomposed. Results: A coordinated set of time–frequency events was apparent across the decompositions. Similar TF components representing earlier theta followed by delta were extracted from both individual trials and averaged data. Alpha activity, as predicted, was apparent only when time–frequency surfaces were generated from trial-level data, and was characterized by a reduction during the P300. Conclusions: Theta, delta, and alpha activities were extracted with predictable time courses. Notably, this approach was effective at characterizing data from a single electrode. Finally, decomposition of TF data generated from individual trials and condition averages produced similar results, but with predictable differences.
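
    For orientation, the core idea of PCA-based TF decomposition is to treat each time-frequency surface as one observation vector and extract components across observations. The sketch below is a loose analogue using a spectrogram and scikit-learn (the original method uses a reduced-interference Cohen's-class distribution and additional rotation steps), with synthetic data standing in for real ERPs.

    ```python
    import numpy as np
    from scipy.signal import spectrogram
    from sklearn.decomposition import PCA

    fs = 128.0
    erps = np.random.randn(200, 256)            # placeholder for real waveforms (observations x samples)

    # one time-frequency surface per observation (a spectrogram stands in for the
    # reduced-interference distribution used in the original PCA-TF method)
    surfaces = []
    for x in erps:
        f, t, S = spectrogram(x, fs=fs, nperseg=32, noverlap=28)
        keep = f <= 14.0                        # 0-14 Hz, as in the reported decomposition
        surfaces.append(S[keep].ravel())
    X = np.array(surfaces)

    pca = PCA(n_components=5)
    scores = pca.fit_transform(X)               # per-observation component weights
    tf_components = pca.components_.reshape(5, int(keep.sum()), -1)   # back to (freq, time) maps
    print(scores.shape, tf_components.shape)
    ```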

  11. The auditory P3 from passive and active three-stimulus oddball paradigm.

    PubMed

    Wronka, Eligiusz; Kaiser, Jan; Coenen, Anton M L

    2008-01-01

    The aim of this study was to compare the basic characteristics of the P3 subcomponents elicited in passive and active versions of the auditory oddball paradigm. A 3-stimulus oddball paradigm was employed in which subjects were presented with a random sequence of tones while they either performed a discrimination task in the visual modality with no response to the tones (passive task) or responded to an infrequently occurring target stimulus inserted into a sequence of frequent standard and rare non-target stimuli (active task). The results show that the magnitude of the frontal P3 response is determined by the relative perceptual distinctiveness among the stimuli: the amplitude of the frontal component was larger for stimuli deviating more from the standard in both the passive and active tasks. In all cases, however, a maximum over central or fronto-central scalp regions was demonstrated. Moreover, the amplitude of this component was influenced by the strength of attentional focus, with a significantly larger response in the active session than in its passive counterpart. Clear parietal P3 responses were obtained only in the active condition. The amplitude of this component was larger for the target than for the non-target across all electrode sites, but both showed a parietal maximum. These findings suggest that generation of the early frontal P3 may be related to alerting activity of the frontal cortex irrespective of stimulus context, whereas generation of the later parietal P3 is related to a temporo-parietal network activated when the neuronal model of the perceived stimulation is compared with the attentional trace.

  12. Oddballs and a Low Odderon Intercept

    SciTech Connect

    Llanes-Estrada, Felipe J.; Bicudo, Pedro; Cotanch, Stephen R. (North Carolina State U.)

    2005-07-27

    The authors report an odderon Regge trajectory emerging from a field theoretical Coulomb gauge QCD model for the odd signature J^{PC} (P = C = -1) glueball states (oddballs). The trajectory intercept is clearly smaller than the pomeron and even the ω trajectory's intercept, which provides an explanation for the nonobservation of the odderon in high energy scattering data. To further support this result we compare to glueball lattice data and also perform calculations with an alternative model based upon an exact Hamiltonian diagonalization for three constituent gluons.

  13. Do Rare Stimuli Evoke Large P3s by Being Unexpected? A Comparison of Oddball Effects Between Standard-Oddball and Prediction-Oddball Tasks

    PubMed Central

    Verleger, Rolf; Śmigasiewicz, Kamila

    2016-01-01

    The P3 component of event-related potentials increases when stimuli are rarely presented. It has been assumed that this oddball effect (rare-frequent difference) reflects the unexpectedness of rare stimuli. The assumption of unexpectedness and its link to P3 amplitude were tested here. A standard-oddball task requiring alternative key-press responses to frequent and rare stimuli was compared with an oddball-prediction task where stimuli had to be first predicted and then confirmed by key-pressing. Oddball effects in the prediction task depended on whether the frequent or the rare stimulus had been predicted. Oddball effects on P3 amplitudes and error rates in the standard oddball task closely resembled effects after frequent predictions. This corroborates the notion that these effects occur because frequent stimuli are expected and rare stimuli are unexpected. However, a closer look at the prediction task put this notion into doubt because the modifications of oddball effects on P3 by expectancies were entirely due to effects on frequent stimuli, whereas the large P3 amplitudes evoked by rare stimuli were insensitive to predictions (unlike response times and error rates). Therefore, rare stimuli cannot be said to evoke large P3 amplitudes because they are unexpected. We discuss these diverging effects of frequency and expectancy, as well as general differences between tasks, with respect to concepts and hypotheses about P3b’s function and conclude that each discussed concept or hypothesis encounters some problems, with a conception in terms of subjective relevance assigned to stimuli offering the most consistent account of these basic effects. PMID:27512527

  14. Relationship of P3b single-trial latencies and response times in one, two, and three-stimulus oddball tasks.

    PubMed

    Walsh, Matthew M; Gunzelmann, Glenn; Anderson, John R

    2017-02-01

    The P300 is one of the most widely studied components of the human event-related potential. According to a longstanding view, the P300, and particularly its posterior subcomponent (i.e., the P3b), is driven by stimulus categorization. Whether the P3b relates to tactical processes involved in immediate responding or strategic processes that affect future behavior remains controversial, however. It is difficult to determine whether variability in P3b latencies relates to variability in response times because of limitations in the methods currently available to quantify the latency of the P3b during single trials. In this paper, we report results from the Psychomotor Vigilance Task (PVT), the Hitchcock Radar Task, and a 3-Stimulus Oddball Task. These represent variants of the one-, two-, and three-stimulus oddball paradigms commonly used to study the P3b. The PVT requires simple detection, whereas the Hitchcock Radar Task and the 3-Stimulus Task require detection and categorization. We apply a novel technique that combines hidden semi-Markov models and multi-voxel pattern analysis (HSMM-MVPA) to data from the three experiments. HSMM-MVPA revealed a processing stage in each task corresponding to the P3b. Trial-by-trial variability in the latency of the processing stage correlated with response times in the Hitchcock Radar Task and the 3-Stimulus Task, but not the PVT. These results indicate that the P3b reflects a stimulus categorization process, and that its latency is strongly associated with response times when the stimulus must be categorized before responding. In addition to those theoretical insights, the ability to detect the onset of the P3b and other components on a single-trial basis using HSMM-MVPA opens the door for new uses of mental chronometry in cognitive neuroscience.

  15. Methodological Considerations about the Use of Bimodal Oddball P300 in Psychiatry: Topography and Reference Effect

    PubMed Central

    Schröder, Elisa; Kajosch, Hendrik; Verbanck, Paul; Kornreich, Charles; Campanella, Salvatore

    2016-01-01

    A bimodal oddball event-related potential (ERP) task has shown increased sensitivity in revealing P300 modulations related to subclinical symptoms. Even if the utility of such a procedure still has to be confirmed at the clinical level, gathering normative values for this new oddball variant is of considerable interest. We specifically addressed the challenge of defining the best location for recording the P3a and P3b components and of selecting the best reference, by investigating the effect of an offline re-referencing procedure on the recorded bimodal P3a and P3b. Forty young, healthy subjects performed a bimodal (synchronized and always congruent visual and auditory stimuli) three-stimulus oddball task in which 140 frequent bimodal stimuli, 30 deviant “target” stimuli, and 30 distractors were presented. The task consisted of clicking as quickly as possible on the targets while not paying attention to the frequent stimuli and distractors. This procedure allowed us to record, for each individual, the P3a component, which reflects the novelty-related processing of distractors, and the P3b component, which is linked to the processing of the target stimuli. Results showed that both P3a and P3b reached maximal amplitude at Pz; however, P3a displayed a more central distribution. The nose reference was also shown to yield maximal amplitudes compared with the average and linked-mastoids references. These data are discussed in light of the need to develop multi-site recording guidelines to furnish sets of ERP data that are comparable across laboratories. PMID:27708597
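
    The stimulus counts reported above (140 standards, 30 targets, 30 distractors) can be turned into a pseudo-randomized presentation sequence in a few lines; the constraint that rare events never occur back to back is an assumption for illustration, not necessarily the authors' randomization rule.

    ```python
    import random

    def bimodal_oddball_sequence(n_standard=140, n_target=30, n_distractor=30, seed=0):
        """Pseudo-randomized three-stimulus sequence with the reported trial counts,
        built so that two rare events (targets/distractors) are never adjacent."""
        rng = random.Random(seed)
        rare = ["target"] * n_target + ["distractor"] * n_distractor
        rng.shuffle(rare)
        # assign each rare event to a distinct slot between (or around) standards
        slots = sorted(rng.sample(range(n_standard + 1), len(rare)))
        seq, r = [], 0
        for i in range(n_standard + 1):
            if r < len(rare) and slots[r] == i:
                seq.append(rare[r])
                r += 1
            if i < n_standard:
                seq.append("standard")
        return seq

    seq = bimodal_oddball_sequence()
    assert len(seq) == 200 and seq.count("target") == 30
    ```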

  16. Selective adaptation to "oddball" sounds by the human auditory system.

    PubMed

    Simpson, Andrew J R; Harper, Nicol S; Reiss, Joshua D; McAlpine, David

    2014-01-29

    Adaptation to both common and rare sounds has been independently reported in neurophysiological studies using probabilistic stimulus paradigms in small mammals. However, the apparent sensitivity of the mammalian auditory system to the statistics of incoming sound has not yet been generalized to task-related human auditory perception. Here, we show that human listeners selectively adapt to novel sounds within scenes unfolding over minutes. Listeners' performance in an auditory discrimination task remains steady for the most common elements within the scene but, after the first minute, performance improves for distinct and rare (oddball) sound elements, at the expense of rare sounds that are relatively less distinct. Our data provide the first evidence of enhanced coding of oddball sounds in a human auditory discrimination task and suggest the existence of an adaptive mechanism that tracks the long-term statistics of sounds and deploys coding resources accordingly.

  17. Sequential search asymmetry: Behavioral and psychophysiological evidence from a dual oddball task.

    PubMed

    Blundon, Elizabeth G; Rumak, Samuel P; Ward, Lawrence M

    2017-01-01

    We conducted five experiments in order to explore the generalizability of a new type of search asymmetry, which we have termed sequential search asymmetry, across sensory modalities, and to better understand its origin. In all five experiments rare oddballs occurred randomly within longer sequences of more frequent standards. Oddballs and standards all consisted of rapidly presented runs of five pure tones (Experiments 1 and 5) or five colored annuli (Experiments 2 through 4), somewhat analogous to simultaneously presented feature-present and feature-absent stimuli in typical visual search tasks. In easy tasks, feature-present reaction times and P300 latencies were shorter than feature-absent ones, similar to findings in search tasks with simultaneously presented stimuli. Moreover, the P3a subcomponent of the P300 ERP was strongly apparent only in the feature-present condition. In more difficult tasks requiring focused attention, however, RT and P300 latency differences disappeared but the P300 amplitude difference was significant. Importantly, in all five experiments d' for feature-present targets was larger than that for feature-absent targets. These results imply that sequential search asymmetry arises from discriminability differences between feature-present and feature-absent targets. Response time and P300 latency differences can be attributed to the use of different attention strategies in the search for feature-present and feature-absent targets, indexed by the presence of a dominant P3a subcomponent in the feature-present target-evoked P300s that is lacking in the P300s to the feature-absent targets.
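
    The d' comparison between feature-present and feature-absent targets follows standard signal-detection theory; a small sketch, with hypothetical trial counts and a common log-linear correction, is shown below.

    ```python
    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """d' = z(hit rate) - z(false-alarm rate), with a log-linear correction
        so that perfect or empty cells do not produce infinite values."""
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # hypothetical counts for feature-present vs. feature-absent targets
    print(d_prime(45, 5, 8, 42), d_prime(38, 12, 10, 40))
    ```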

  18. Electrical mapping in bipolar disorder patients during the oddball paradigm.

    PubMed

    Di Giorgio Silva, Luiza Wanick; Cartier, Consuelo; Cheniaux, Elie; Novis, Fernanda; Silveira, Luciana Angélica; Cavaco, Paola Anaquim; de Assis da Silva, Rafael; Batista, Washington Adolfo; Tanaka, Guaraci Ken; Gongora, Mariana; Bittencourt, Juliana; Teixeira, Silmar; Basile, Luis Fernando; Budde, Henning; Cagy, Mauricio; Ribeiro, Pedro; Velasques, Bruna

    2016-01-01

    Bipolar disorder (BD) is characterized by an alternation between acute manic episodes and periods of depression or remission. The objective of this study was to analyze information-processing changes in bipolar patients (BP) (euthymic, depressed, and manic) during the oddball paradigm, focusing on the P300 component, an electric potential of the cerebral cortex generated in response to external sensory stimuli that involves more complex neurophysiological processes related to stimulus interpretation. Twenty-eight bipolar disorder patients (17 women and 11 men, mean age 32.5, SD 9.5) and eleven healthy controls (HC) (7 women and 4 men, mean age 29.78, SD 6.89) were enrolled in this study. The bipolar patients were divided into three groups (euthymic, depressed, and manic) according to their score on the Clinical Global Impression-Bipolar Version (CGI-BP). The subjects performed the oddball paradigm while the EEG was recorded; EEG data were also recorded before and after execution of the task. A one-way ANOVA was applied to compare the P300 component among the groups. Examination of P300 and its subcomponents P3a and P3b revealed similar amplitudes and latencies in euthymic and depressed patients, as well as small amplitudes over the prefrontal cortex and a reduced P3a response. This may be evidence of impaired information processing, cognitive flexibility, working memory, executive functions, and the ability to shift attention and processing toward target and away from distracting stimuli in BD. Such neuropsychological impairments are related to different BD symptoms, which should be recognized and considered in order to develop effective clinical treatment strategies.

  19. Target and Non-Target Processing during Oddball and Cyberball: A Comparative Event-Related Potential Study

    PubMed Central

    Weschke, Sarah; Niedeggen, Michael

    2016-01-01

    The phenomenon of social exclusion can be investigated using a virtual ball-tossing game called Cyberball. Neuroimaging studies have identified structures that are activated during social exclusion, but the underlying mechanisms are not yet fully understood. Previous electrophysiological studies have shown that the P3 complex is sensitive to exclusion manipulations in the Cyberball paradigm and that P3 amplitude correlates with self-reported social pain. Since this posterior event-related potential (ERP) has been widely investigated using the oddball paradigm, we directly compared the ERP effects elicited by the target (Cyberball: “ball possession”) and non-target (Cyberball: “ball possession of a co-player”) events in both paradigms. Analyses focused mainly on the effect of altered stimulus probabilities of the target and non-target events between two consecutive blocks of the tasks. In the first block, the probability of the target and non-target events was 33% each (Cyberball: inclusion); in the second block, target probability was reduced to 17% and, accordingly, non-target probability was increased to 66% (Cyberball: exclusion). Our results indicate that ERP amplitude differences between inclusion and exclusion are comparable to ERP amplitude effects in a visual oddball task. We therefore suggest that ERP effects, especially in the P3 range, in the oddball and Cyberball paradigms rely on similar mechanisms, namely the probability of target and non-target events. Since the simulation of social exclusion (Cyberball) did not trigger a unique ERP response, the idea of an exclusion-specific neural alarm system is not supported. The limitations of an ERP-based approach are discussed. PMID:27100787

  1. Diminished n1 auditory evoked potentials to oddball stimuli in misophonia patients.

    PubMed

    Schröder, Arjan; van Diepen, Rosanne; Mazaheri, Ali; Petropoulos-Petalas, Diamantis; Soto de Amesti, Vicente; Vulink, Nienke; Denys, Damiaan

    2014-01-01

    Misophonia (hatred of sound) is a newly defined psychiatric condition in which ordinary human sounds, such as breathing and eating, trigger impulsive aggression. In the current study, we investigated whether a dysfunction in the brain's early auditory processing system might be present in misophonia. We screened 20 patients meeting the diagnostic criteria for misophonia and 14 matched healthy controls without misophonia, and investigated potential deficits in auditory processing in the misophonia patients using auditory event-related potentials (ERPs) during an oddball task. Subjects watched a neutral silent movie while being presented with a regular stream of beep sounds in which oddball tones of 250 and 4000 Hz were randomly embedded among repeated 1000 Hz standard tones. We examined the P1, N1, and P2 components time-locked to the onset of the tones. For misophonia patients, the N1 peak evoked by the oddball tones had a smaller mean amplitude than in the control group. However, no significant differences were found in the P1 and P2 components evoked by the oddball tones, and there were no significant differences between the misophonia patients and their controls in any of the ERP components to the standard tones. The diminished N1 component to oddball tones suggests an underlying neurobiological deficit in misophonia patients, which might reflect a basic impairment in auditory processing.

  2. The Cognitive Locus of Distraction by Acoustic Novelty in the Cross-Modal Oddball Task

    ERIC Educational Resources Information Center

    Parmentier, Fabrice B. R.; Elford, Gregory; Escera, Carles; Andres, Pilar; San Miguel, Iria

    2008-01-01

    Unexpected stimuli are often able to distract us away from a task at hand. The present study seeks to explore some of the mechanisms underpinning this phenomenon. Studies of involuntary attention capture using the oddball task have repeatedly shown that infrequent auditory changes in a series of otherwise repeating sounds trigger an automatic…

  3. Lifespan Differences in Nonlinear Dynamics during Rest and Auditory Oddball Performance

    ERIC Educational Resources Information Center

    Muller, Viktor; Lindenberger, Ulman

    2012-01-01

    Electroencephalographic recordings (EEG) were used to assess age-associated differences in nonlinear brain dynamics during both rest and auditory oddball performance in children aged 9.0-12.8 years, younger adults, and older adults. We computed nonlinear coupling dynamics and dimensional complexity, and also determined spectral alpha power as an…

  4. Aspects of psychodynamic neuropsychiatry I: episodic memory, transference, and the oddball paradigm.

    PubMed

    Brockman, Richard

    2010-01-01

    Psychotherapy has, since the time of Freud, focused on the unconscious and on dynamically repressed memory. This article explores a therapy in which the focus is on what is known, on episodic memory. Episodic memory, along with semantic memory, is part of the declarative memory system. Episodic memory depends on frontal, parietal, and temporal lobe function. It is the system related to the encoding and recall of context-rich memory. While memory usually decays with time, powerfully encoded episodic memory may become augmented. This article explores the hypothesis that such augmentation is the result of conditioning and kindling. Augmented memory could lead to a powerful "top-down" focus of attention, such that one would perceive only what one had set out to perceive. The "oddball paradigm" is suggested as a route out of such a self-perpetuating system. A clinical example (a disguised composite of several clinical histories) is used to demonstrate how such an intensification of memory and attention came about as a result of the transference, and how the "oddball paradigm" was used as a way out of what had become a treatment stalemate.

  5. A comparative study of event-related coupling patterns during an auditory oddball task in schizophrenia

    NASA Astrophysics Data System (ADS)

    Bachiller, Alejandro; Poza, Jesús; Gómez, Carlos; Molina, Vicente; Suazo, Vanessa; Hornero, Roberto

    2015-02-01

    Objective. The aim of this research is to explore the coupling patterns of brain dynamics during an auditory oddball task in schizophrenia (SCH). Approach. Event-related electroencephalographic (EEG) activity was recorded from 20 SCH patients and 20 healthy controls. Coupling changes between the auditory response and the pre-stimulus baseline were calculated in conventional EEG frequency bands (theta, alpha, beta-1, beta-2 and gamma), using three coupling measures: coherence, phase-locking value and Euclidean distance. Main results. Our results showed a statistically significant increase from baseline to response in theta coupling and a statistically significant decrease in beta-2 coupling in controls. No statistically significant changes were observed in SCH patients. Significance. Our findings support the aberrant salience hypothesis, since SCH patients failed to change their coupling dynamics between stimulus response and baseline when performing an auditory cognitive task. This result may reflect impaired communication among neural areas, which may be related to abnormal cognitive functions.
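
    Of the three coupling measures mentioned, the phase-locking value is the most compact to illustrate: band-pass filter, extract instantaneous phase with the Hilbert transform, and average the phase-difference unit vectors across trials. The sketch below uses assumed filter settings and synthetic epochs and is not the authors' exact pipeline.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def phase_locking_value(x, y, fs, band):
        """Across-trial phase-locking value between two channels in a frequency band.

        x, y : (n_trials, n_samples) epoched signals for the two channels.
        Returns a (n_samples,) time course in [0, 1]; 1 means a perfectly
        constant phase difference across trials.
        """
        b, a = butter(4, np.asarray(band) / (fs / 2.0), btype="band")
        phx = np.angle(hilbert(filtfilt(b, a, x, axis=-1), axis=-1))
        phy = np.angle(hilbert(filtfilt(b, a, y, axis=-1), axis=-1))
        return np.abs(np.mean(np.exp(1j * (phx - phy)), axis=0))

    # synthetic epochs standing in for two EEG channels, theta band
    fs = 250.0
    x, y = np.random.randn(40, 500), np.random.randn(40, 500)
    print(phase_locking_value(x, y, fs, band=(4.0, 8.0)).mean())
    ```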

  6. Psychopathy, attention, and oddball target detection: New insights from PCL-R facet scores.

    PubMed

    Anderson, Nathaniel E; Steele, Vaughn R; Maurer, J Michael; Bernat, Edward M; Kiehl, Kent A

    2015-09-01

    Psychopathy is a disorder accompanied by cognitive deficits including abnormalities in attention. Prior studies examining cognitive features of psychopaths using ERPs have produced some inconsistent results. We examined psychopathy-related differences in ERPs during an auditory oddball task in a sample of incarcerated adult males. We extend previous work by deriving ERPs with principal component analysis (PCA) and relate these to the four facets of Hare's Psychopathy Checklist Revised (PCL-R). Features of psychopathy were associated with increased target N1 amplitude (facets 1, 4), decreased target P3 amplitude (facet 1), and reduced slow wave amplitude for frequent standard stimuli (facets 1, 3, 4). We conclude that employing PCA and examining PCL-R facets improve sensitivity and help clarify previously reported associations. Furthermore, attenuated slow wave during standards may be a novel marker for psychopaths' abnormalities in attention.

  7. Disruptions in small-world cortical functional connectivity network during an auditory oddball paradigm task in patients with schizophrenia.

    PubMed

    Shim, Miseon; Kim, Do-Won; Lee, Seung-Hwan; Im, Chang-Hwan

    2014-07-01

    P300 deficits in patients with schizophrenia have previously been investigated using EEGs recorded during auditory oddball tasks. However, small-world cortical functional networks during auditory oddball tasks and their relationships with symptom severity scores in schizophrenia have not yet been investigated. In this study, the small-world characteristics of source-level functional connectivity networks of EEG responses elicited by an auditory oddball paradigm were evaluated using two representative graph-theoretical measures, clustering coefficient and path length. EEG signals from 34 patients with schizophrenia and 34 healthy controls were recorded while each subject was asked to attend to oddball tones. The results showed reduced clustering coefficients and increased path lengths in patients with schizophrenia, suggesting that the small-world functional network is disrupted in patients with schizophrenia. In addition, the negative and cognitive symptom components of positive and negative symptom scales were negatively correlated with the clustering coefficient and positively correlated with path length, demonstrating that both indices are indicators of symptom severity in patients with schizophrenia. Our study results suggest that disrupted small-world characteristics are potential biomarkers for patients with schizophrenia.
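
    The two graph metrics used here, clustering coefficient and characteristic path length, can be computed with networkx once a connectivity matrix has been thresholded into a graph; the threshold, random matrix, and node count below are illustrative assumptions rather than the study's source-level network construction.

    ```python
    import numpy as np
    import networkx as nx

    def small_world_metrics(conn, threshold):
        """Clustering coefficient and characteristic path length of a binary graph
        obtained by thresholding a symmetric functional connectivity matrix."""
        adj = (conn > threshold).astype(int)
        np.fill_diagonal(adj, 0)
        g = nx.from_numpy_array(adj)
        cc = nx.average_clustering(g)
        # characteristic path length is only defined for a connected graph
        pl = nx.average_shortest_path_length(g) if nx.is_connected(g) else np.inf
        return cc, pl

    # random symmetric matrix standing in for real source-level connectivity (34 nodes)
    rng = np.random.default_rng(0)
    m = rng.random((34, 34))
    m = (m + m.T) / 2.0
    print(small_world_metrics(m, threshold=0.5))
    ```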

  8. Prediction of Auditory and Visual P300 Brain-Computer Interface Aptitude

    PubMed Central

    Halder, Sebastian; Hammer, Eva Maria; Kleih, Sonja Claudia; Bogdan, Martin; Rosenstiel, Wolfgang; Birbaumer, Niels; Kübler, Andrea

    2013-01-01

    Objective: Brain-computer interfaces (BCIs) provide a non-muscular communication channel for patients with late-stage motoneuron disease (e.g., amyotrophic lateral sclerosis (ALS)) or otherwise motor-impaired people and are also used for motor rehabilitation in chronic stroke. The ability to use a BCI varies from person to person and from session to session. A reliable predictor of aptitude would allow for the selection of suitable BCI paradigms. For this reason, we investigated whether P300 BCI aptitude could be predicted from a short experiment with a standard auditory oddball. Methods: Forty healthy participants performed an electroencephalography (EEG) based visual and auditory P300-BCI spelling task in a single session. In addition, prior to each session an auditory oddball was presented. Features extracted from the auditory oddball were analyzed with respect to their predictive power for BCI aptitude. Results: Correlating the auditory oddball response with P300 BCI accuracy revealed a strong relationship between accuracy and both N2 amplitude and the amplitude of a late ERP component between 400 and 600 ms. Interestingly, the P3 amplitude of the auditory oddball response was not correlated with accuracy. Conclusions: Event-related potentials recorded during a standard auditory oddball session moderately predict aptitude in an auditory P300 BCI and predict it highly in a visual P300 BCI. The predictor will allow for faster paradigm selection. Significance: Our method will reduce strain on patients because unsuccessful training may be avoided, provided the results can be generalized to the patient population. PMID:23457444

  9. Event-related delta, theta, alpha and gamma correlates to auditory oddball processing during Vipassana meditation.

    PubMed

    Cahn, B Rael; Delorme, Arnaud; Polich, John

    2013-01-01

    Long-term Vipassana meditators sat in meditation and control (instructed mind-wandering) states for 25 min each while electroencephalography (EEG) was recorded; condition order was counterbalanced. For the last 4 min, a three-stimulus auditory oddball series was presented through headphones during both the meditation and control periods, with no task imposed. Time-frequency analysis demonstrated that meditation, relative to the control condition, evinced decreased evoked delta (2-4 Hz) power to distracter stimuli concomitantly with a greater event-related reduction of late (500-900 ms) alpha-1 (8-10 Hz) activity, which indexed altered dynamics of attentional engagement to distracters. Additionally, standard stimuli were associated with increased early event-related alpha phase synchrony (inter-trial coherence) and evoked theta (4-8 Hz) phase synchrony, suggesting enhanced processing of the habituated standard background stimuli. Finally, during meditation there was greater differential early-evoked gamma power to the different stimulus classes. Correlation analysis indicated that this effect stemmed from a meditation state-related increase in early distracter-evoked gamma power and phase synchrony specific to longer-term expert practitioners. The findings suggest that Vipassana meditation evokes a brain state of enhanced perceptual clarity and decreased automated reactivity.

  10. Stimulus intensity effects on P300 amplitude across repetitions of a standard auditory oddball task.

    PubMed

    Lindín, Mónica; Zurrón, Montserrat; Díaz, Fernando

    2005-07-01

    An evaluation was made of whether stimulus intensity affects the changes in P300 amplitude that occur with repeated presentation of the target stimulus in a standard auditory oddball task. P300 latency values were also evaluated. Three samples were selected, one for each intensity used: 65, 85 and 105 dB SPL (sound pressure level). Five hundred tones (5 subblocks, 100 tones each) were presented. P300 amplitude (1) increased from Fz to Pz, (2) was larger at 105 than at 65 or 85 dB SPL, (3) increased from the first to the second subblock and decreased from the second subblock onwards at all three intensities, replicating our previous findings at 85 dB SPL and demonstrating a consistent phenomenon, and (4) showed a less pronounced decrease at 105 dB SPL, which we attribute to the more intense stimuli capturing attention in a sustained manner during the task and interfering with the possible automation of the context-updating process.

  11. First love does not die: a sustaining primacy effect on ERP components in an oddball paradigm.

    PubMed

    Kotchoubey, Boris

    2014-03-27

    Both primacy and frequency are powerful regulators of human cognition and behavior, but their relationship has scarcely been investigated. This study aimed to investigate the interplay of primacy and frequency effects on behavioral and electrophysiological (event-related potential, ERP) measures using an oddball paradigm. In each experiment 234 frequent (standard) and 66 rare (deviant) harmonic tones were presented. Participants either responded to the stimuli with a button press (motor experiment) or counted the rare stimuli (counting experiment). Each experiment entailed two counterbalanced conditions. In the "classical" condition both standards and deviants were equally distributed across the presentation series, while in the "primacy" condition more deviants were concentrated at the beginning of the series. In the motor experiment no differences between the two conditions were obtained at the behavioral level, but the amplitude of N2 to deviants was significantly larger in the classical than in the primacy condition, and the same trend was obtained for the P3 component at lateral posterior sites. In the counting experiment both the N2b and P3 effects were strongly reduced in the primacy condition compared with the classical condition. Therefore, stimuli that were presented frequently in the first stimulation run were subsequently processed as "less rare", although in fact they were even rarer than in the control condition. The data indicate that the initial pattern of stimulation can substantially affect the frequency effect during the processing of subsequent stimuli.

  12. Genetic Determinants of Target and Novelty Related Event-related Potentials in the Auditory Oddball Response

    PubMed Central

    Liu, Jingyu; Kiehl, Kent A.; Pearlson, Godfrey; Perrone-Bizzozero, Nora I.; Eichele, Tom; Calhoun, Vince D.

    2009-01-01

    Processing of novel and target stimuli in the auditory target detection or ‘oddball’ task encompasses the chronometry of perception, attention and working memory and is reflected in scalp recorded event-related potentials (ERPs). A variety of ERP components related to target and novelty processing have been described and extensively studied, and linked to deficits of cognitive processing. However, little is known about associations of genotypes with ERP endophenotypes. Here we sought to elucidate the genetic underpinnings of auditory oddball ERP components using a novel data analysis technique. A parallel independent component analysis of the electrophysiology and single nucleotide polymorphism (SNP) data was used to extract relations between patterns of ERP components and SNP associations purely based on an analysis incorporating higher order statistics. The method allows for broader associations of genotypes with phenotypes than traditional hypothesis-driven univariate correlational analyses. We show that target detection and processing of novel stimuli are both associated with a shared cluster of genes linked to the adrenergic and dopaminergic pathways. These results provide evidence of genetic influences on normal patterns of ERP generation during auditory target detection and novelty processing at the SNP association level. PMID:19285141

  13. Reduction in event-related alpha attenuation during performance of an auditory oddball task in schizophrenia.

    PubMed

    Higashima, Masato; Tsukada, Takahiro; Nagasawa, Tatsuya; Oka, Takashi; Okamoto, Takeshi; Okamoto, Yoko; Koshino, Yoshifumi

    2007-08-01

    EEG frequency-domain analyses have demonstrated that cognitive performance produces a reduction in alpha activity, i.e., alpha attenuation, such as event-related desynchronization (ERD), reflecting brain activation. To examine whether schizophrenic patients have abnormalities in frequency-domain, event-related alpha attenuation, as well as in time-domain EEG phenomena, such as event-related potential, we compared alpha power change and P300 elicited simultaneously in response to the presentation of target tones in an auditory oddball paradigm between patients with schizophrenia and normal control subjects. In both patients and controls, alpha power was smaller during the time window of 512 ms following targets than following non-targets, particularly at the parietal and the posterior temporal locations (Pz, T5, and T6). The size of alpha attenuation measured as percent reduction in alpha power produced by targets relative to non-targets was smaller in patients than in controls at the posterior temporal locations. The size of alpha attenuation showed no correlation with P300 amplitude or latency in either patients or controls. Furthermore, in patients, the size of alpha attenuation showed no correlation with symptom severity, while P300 amplitude was correlated negatively with the positive subscale score of the Positive and Negative Syndrome Scale. These findings suggest that the symptom-independent reduction in event-related alpha attenuation in schizophrenia may be useful as an electrophysiological index of the impairment of neural processes distinct from that indexed by symptom-dependent P300 abnormalities.

  14. Comparison of DP3 Signals Evoked by Comfortable 3D Images and 2D Images — an Event-Related Potential Study using an Oddball Task

    NASA Astrophysics Data System (ADS)

    Ye, Peng; Wu, Xiang; Gao, Dingguo; Liang, Haowen; Wang, Jiahui; Deng, Shaozhi; Xu, Ningsheng; She, Juncong; Chen, Jun

    2017-02-01

    The horizontal binocular disparity is a critical factor in the visual fatigue induced by watching stereoscopic TVs. Stereoscopic images that keep the disparity within the ‘comfort zones’ and remain still in the depth direction are considered as comfortable to viewers as 2D images. However, the difference in brain activity between processing such comfortable stereoscopic images and processing 2D images remains little studied. The DP3 (differential P3) signal is an event-related potential (ERP) component indicating attentional processes, typically evoked by odd target stimuli among standard stimuli in an oddball task. The present study found that the DP3 signal elicited by the comfortable 3D images exhibits a delayed peak latency and an enhanced peak amplitude over the anterior and central scalp regions compared to the 2D images. The finding suggests that, compared to the processing of 2D images, more attentional resources are involved in the processing of the stereoscopic images even though they are subjectively comfortable.

  15. Structure and Topology Dynamics of Hyper-Frequency Networks during Rest and Auditory Oddball Performance.

    PubMed

    Müller, Viktor; Perdikis, Dionysios; von Oertzen, Timo; Sleimen-Malkoun, Rita; Jirsa, Viktor; Lindenberger, Ulman

    2016-01-01

    Resting-state and task-related recordings are characterized by oscillatory brain activity and widely distributed networks of synchronized oscillatory circuits. Electroencephalographic recordings (EEG) were used to assess network structure and network dynamics during resting state with eyes open and closed, and auditory oddball performance through phase synchronization between EEG channels. For this assessment, we constructed a hyper-frequency network (HFN) based on within- and cross-frequency coupling (WFC and CFC, respectively) at 10 oscillation frequencies ranging between 2 and 20 Hz. We found that CFC generally differentiates between task conditions better than WFC. CFC was the highest during resting state with eyes open. Using a graph-theoretical approach (GTA), we found that HFNs possess small-world network (SWN) topology with a slight tendency to random network characteristics. Moreover, analysis of the temporal fluctuations of HFNs revealed specific network topology dynamics (NTD), i.e., temporal changes of different graph-theoretical measures such as strength, clustering coefficient, characteristic path length (CPL), local, and global efficiency determined for HFNs at different time windows. The different topology metrics showed significant differences between conditions in the mean and standard deviation of these metrics both across time and nodes. In addition, using an artificial neural network approach, we found stimulus-related dynamics that varied across the different network topology metrics. We conclude that functional connectivity dynamics (FCD), or NTD, which was found using the HFN approach during rest and stimulus processing, reflects temporal and topological changes in the functional organization and reorganization of neuronal cell assemblies.

  18. Structure and Topology Dynamics of Hyper-Frequency Networks during Rest and Auditory Oddball Performance

    PubMed Central

    Müller, Viktor; Perdikis, Dionysios; von Oertzen, Timo; Sleimen-Malkoun, Rita; Jirsa, Viktor; Lindenberger, Ulman

    2016-01-01

    Resting-state and task-related recordings are characterized by oscillatory brain activity and widely distributed networks of synchronized oscillatory circuits. Electroencephalographic recordings (EEG) were used to assess network structure and network dynamics during resting state with eyes open and closed, and auditory oddball performance through phase synchronization between EEG channels. For this assessment, we constructed a hyper-frequency network (HFN) based on within- and cross-frequency coupling (WFC and CFC, respectively) at 10 oscillation frequencies ranging between 2 and 20 Hz. We found that CFC generally differentiates between task conditions better than WFC. CFC was the highest during resting state with eyes open. Using a graph-theoretical approach (GTA), we found that HFNs possess small-world network (SWN) topology with a slight tendency to random network characteristics. Moreover, analysis of the temporal fluctuations of HFNs revealed specific network topology dynamics (NTD), i.e., temporal changes of different graph-theoretical measures such as strength, clustering coefficient, characteristic path length (CPL), local, and global efficiency determined for HFNs at different time windows. The different topology metrics showed significant differences between conditions in the mean and standard deviation of these metrics both across time and nodes. In addition, using an artificial neural network approach, we found stimulus-related dynamics that varied across the different network topology metrics. We conclude that functional connectivity dynamics (FCD), or NTD, which was found using the HFN approach during rest and stimulus processing, reflects temporal and topological changes in the functional organization and reorganization of neuronal cell assemblies. PMID:27799906
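
    The graph-theoretical measures named in this record (strength, clustering coefficient, characteristic path length, global efficiency) can be illustrated on a toy weighted coupling matrix. The sketch below is a minimal example using networkx on random synthetic data; the node count, the 1/weight distance convention, and all variable names are assumptions rather than the authors' implementation.

```python
# Illustrative sketch (not the authors' code): graph-theoretical measures of the
# kind listed in the abstract, computed on a synthetic weighted coupling matrix.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_nodes = 20                                  # e.g., channels x frequencies in an HFN
A = rng.uniform(0.05, 1.0, size=(n_nodes, n_nodes))
A = (A + A.T) / 2                             # symmetric coupling (e.g., phase synchrony)
np.fill_diagonal(A, 0.0)

G = nx.from_numpy_array(A)                    # edge attribute 'weight' = coupling strength
# Dijkstra needs a "distance"; a common convention is distance = 1 / weight.
for u, v, d in G.edges(data=True):
    d["distance"] = 1.0 / d["weight"]

strength = dict(G.degree(weight="weight"))                    # node strength
clustering = nx.clustering(G, weight="weight")                # weighted clustering coeff.
spl = dict(nx.all_pairs_dijkstra_path_length(G, weight="distance"))
pair_lengths = [d for u, row in spl.items() for v, d in row.items() if u != v]
cpl = np.mean(pair_lengths)                                   # characteristic path length
global_eff = np.mean([1.0 / d for d in pair_lengths])         # weighted global efficiency

print(f"mean strength {np.mean(list(strength.values())):.2f}, "
      f"mean clustering {np.mean(list(clustering.values())):.2f}, "
      f"CPL {cpl:.2f}, global efficiency {global_eff:.2f}")
```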

  19. Objective Assessment of Spectral Ripple Discrimination in Cochlear Implant Listeners Using Cortical Evoked Responses to an Oddball Paradigm

    PubMed Central

    Lopez Valdes, Alejandro; Mc Laughlin, Myles; Viani, Laura; Walshe, Peter; Smith, Jaclyn; Zeng, Fan-Gang; Reilly, Richard B.

    2014-01-01

    Cochlear implants (CIs) can partially restore functional hearing in deaf individuals. However, multiple factors affect CI listeners' speech perception, resulting in large performance differences. Non-speech-based tests, such as spectral ripple discrimination, measure acoustic processing capabilities that are highly correlated with speech perception. Currently, spectral ripple discrimination is measured using standard psychoacoustic methods, which require attentive listening and active response that can be difficult or even impossible in special patient populations. Here, a completely objective, cortical-evoked-potential-based method is developed and validated to assess spectral ripple discrimination in CI listeners. In 19 CI listeners, using an oddball paradigm, cortical evoked potential responses to standard and inverted spectrally rippled stimuli were measured. In the same subjects, psychoacoustic spectral ripple discrimination thresholds were also measured. A neural discrimination threshold was determined by systematically increasing the number of ripples per octave and determining the point at which there was no longer a significant difference between the evoked potential response to the standard and inverted stimuli. A correlation was found between the neural and the psychoacoustic discrimination thresholds (R2 = 0.60, p < 0.01). This method can objectively assess CI spectral resolution performance, providing a potential tool for the evaluation and follow-up of CI listeners who have difficulty performing psychoacoustic tests, such as pediatric or new users. PMID:24599314
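
    The stepwise logic of the neural threshold (raise the ripple density until standard and inverted responses no longer differ, then correlate with psychoacoustic thresholds) can be sketched as follows. The code uses synthetic per-trial amplitudes and a paired t-test as a stand-in for the study's statistics; densities, sample sizes, and names are illustrative assumptions.

```python
# Illustrative sketch (synthetic data, not the study's analysis code): find the
# ripple density at which standard vs. inverted evoked responses stop differing,
# then correlate neural with psychoacoustic thresholds across listeners.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ripple_densities = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # ripples per octave

def neural_threshold(responses_std, responses_inv, densities, alpha=0.05):
    """Return the lowest density at which the paired difference is no longer
    significant (a crude stand-in for the paper's stepwise criterion)."""
    for d, r_std, r_inv in zip(densities, responses_std, responses_inv):
        t, p = stats.ttest_rel(r_std, r_inv)
        if p >= alpha:
            return d
    return densities[-1]

# Fake per-trial response amplitudes for one listener: the standard/inverted
# difference shrinks as ripple density increases.
resp_std = [rng.normal(5.0 - 0.5 * i, 1.0, 40) for i in range(len(ripple_densities))]
resp_inv = [rng.normal(3.0 + 0.2 * i, 1.0, 40) for i in range(len(ripple_densities))]
print("neural threshold (ripples/octave):", neural_threshold(resp_std, resp_inv, ripple_densities))

# Across listeners, relate neural to psychoacoustic thresholds (R^2 as in the abstract).
neural = rng.uniform(1, 6, 19)
psycho = 0.8 * neural + rng.normal(0, 0.8, 19)
r, p = stats.pearsonr(neural, psycho)
print(f"R^2 = {r**2:.2f}, p = {p:.3g}")
```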

  20. Assessment of traumatic brain injury patients by WAIS-R, P300, and performance on oddball task.

    PubMed

    Naito, Yasuo; Ando, Hiroshi; Yamaguchi, Michio

    2005-01-01

    The present research investigates factors that prevent traumatic brain injury patients from returning to work. Participants included 40 patients and 40 healthy individuals. Participants' intelligence quotients and the P300 component of event-related potentials elicited during an auditory oddball task were compared. The patients' mean intelligence quotient was significantly lower than that of the control group. However, some patients had normative intelligence, suggesting that the WAIS-R test results could not fully explain their inability to return to work. The peak of the P300 component could not be determined from recordings of 9 patients. When compared to the control group, the mean latency and amplitude for the remaining 31 patients were significantly longer and smaller, respectively. The mean reaction time of the patients was significantly longer than that of the controls. Omission errors were significantly more frequent in the patient group than among controls, suggesting that the patients were suffering from deficits in the allocation and maintenance of attention. Based on the number of omission errors, patients were divided into a group comprising individuals who committed fewer than two omissions (n=26) and a group comprised of individuals who committed more than three omissions (n=14). The frequent omission errors observed among individuals in the latter group may indicate their inability to sustain an adequate level of vigilance. This deficit would be a factor preventing the patients' return to work.

  1. Dissociating love-related attention from task-related attention: an event-related potential oddball study.

    PubMed

    Langeslag, Sandra J E; Franken, Ingmar H A; Van Strien, Jan W

    2008-02-06

    The present event-related potential (ERP) study was conducted to investigate the P3 component in response to love-related stimuli while controlling for task-related factors, and to dissociate the influences of both love-related and task-related attention on the P3 amplitude. In an oddball paradigm, photographs of beloved and friends served as target and distractor stimuli. Love-related and task-related attention were separated by varying the target and distractor status of the beloved and friends full factorially. As expected, the P3 amplitude was larger for beloved compared to friends and for targets compared to distractors. Moreover, task-related and love-related attention were unconfounded. These results are in line with findings that the P3 is modulated by both emotion- and task-related factors, supporting the view that the P3 amplitude reflects attention. Furthermore, this study validates the notion that romantic love is accompanied by increased attention for stimuli associated with the beloved, and also shows that this form of attention is different from task-related attention.

  2. Brain activity in predominantly-inattentive subtype attention-deficit/hyperactivity disorder during an auditory oddball attention task.

    PubMed

    Orinstein, Alyssa J; Stevens, Michael C

    2014-08-30

    Previous functional neuroimaging studies have found brain activity abnormalities in attention-deficit/hyperactivity disorder (ADHD) on numerous cognitive tasks. However, little is known about brain dysfunction unique to the predominantly-inattentive subtype of ADHD (ADHD-I), despite debate as to whether DSM-IV-defined ADHD subtypes differ in etiology. This study compared brain activity of 18 ADHD-I adolescents (ages 12-18) and 20 non-psychiatric age-matched control participants on a functional magnetic resonance image (fMRI) auditory oddball attention task. ADHD-I participants had significant activation deficits to infrequent target stimuli in bilateral superior temporal gyri, bilateral insula, several midline cingulate/medial frontal gyrus regions, right posterior parietal cortex, thalamus, cerebellum, and brainstem. To novel stimuli, ADHD-I participants had reduced activation in bilateral lateral temporal lobe structures. There were no brain regions where ADHD-I participants had greater hemodynamic activity to targets or novels than controls. Brain activity deficits in ADHD-I participants were found in several regions important to attentional orienting and working memory-related cognitive processes involved in target identification. These results differ from those in previously studied adolescents with combined-subtype ADHD, who had a lesser magnitude of activation abnormalities in frontoparietal regions and relatively more discrete regional deficits to novel stimuli. The divergent findings suggest different etiological factors might underlie attention deficits in different DSM-IV-defined ADHD subtypes, and they have important implications for the DSM-V reconceptualization of subtypes as varying clinical presentations of the same core disorder.

  3. A New Visual Stimulation Program for Improving Visual Acuity in Children with Visual Impairment: A Pilot Study

    PubMed Central

    Tsai, Li-Ting; Hsu, Jung-Lung; Wu, Chien-Te; Chen, Chia-Ching; Su, Yu-Chin

    2016-01-01

    The purpose of this study was to investigate the effectiveness of visual rehabilitation of a computer-based visual stimulation (VS) program combining checkerboard pattern reversal (passive stimulation) with oddball stimuli (attentional modulation) for improving the visual acuity (VA) of visually impaired (VI) children and children with amblyopia and additional developmental problems. Six children (three females, three males; mean age = 3.9 ± 2.3 years) with impaired VA caused by deficits along the anterior and/or posterior visual pathways were recruited. Participants received eight rounds of VS training (two rounds per week) of at least eight sessions per round. Each session consisted of stimulation with 200 or 300 pattern reversals. Assessments of VA (assessed with the Lea symbol VA test or Teller VA cards), visual evoked potential (VEP), and functional vision (assessed with the Chinese-version Functional Vision Questionnaire, FVQ) were carried out before and after the VS program. Significant gains in VA were found after the VS training [VA changed from 1.05 ± 0.80 to 0.61 ± 0.53 logMAR; Z = –2.20, asymptotic significance (2-tailed) = 0.028]. No significant changes were observed in the FVQ assessment [92.8 ± 12.6 to 100.8 ± 15.4; Z = –1.46, asymptotic significance (2-tailed) = 0.144]. VEP measurement showed improvement in P100 latency and amplitude or integration of the waveform in two participants. Our results indicate that a computer-based VS program with passive checkerboard stimulation, oddball stimulus design, and interesting auditory feedback could be considered as a potential intervention option to improve the VA of a wide age range of VI children and children with impaired VA combined with other neurological disorders. PMID:27148014
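
    The pre/post acuity comparison reported above is a paired nonparametric test. A minimal sketch with hypothetical logMAR values and SciPy's Wilcoxon signed-rank test follows; the numbers are invented and only illustrate the kind of test behind the reported Z and asymptotic p values.

```python
# Illustrative sketch (synthetic values, not the study's data): paired pre/post
# comparison of logMAR visual acuity with a Wilcoxon signed-rank test.
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical logMAR VA for six children before and after training
va_pre = np.array([1.9, 1.4, 1.0, 0.8, 0.6, 0.6])
va_post = np.array([1.2, 0.9, 0.6, 0.5, 0.3, 0.4])   # lower logMAR = better acuity

stat, p = wilcoxon(va_pre, va_post)   # two-sided by default
print(f"Wilcoxon W = {stat:.1f}, p = {p:.3f}")
# With samples this small SciPy uses an exact test; the Z reported in the abstract
# would come from the normal-approximation (asymptotic) form of the same statistic.
```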

  4. Recent Visual Experience Shapes Visual Processing in Rats through Stimulus-Specific Adaptation and Response Enhancement.

    PubMed

    Vinken, Kasper; Vogels, Rufin; Op de Beeck, Hans

    2017-03-20

    From an ecological point of view, it is generally suggested that the main goal of vision in rats and mice is navigation and (aerial) predator evasion [1-3]. The latter requires fast and accurate detection of a change in the visual environment. An outstanding question is whether there are mechanisms in the rodent visual system that would support and facilitate visual change detection. An experimental protocol frequently used to investigate change detection in humans is the oddball paradigm, in which a rare, unexpected stimulus is presented in a train of stimulus repetitions [4]. A popular "predictive coding" theory of cortical responses states that neural responses should decrease for expected sensory input and increase for unexpected input [5, 6]. Despite evidence for response suppression and enhancement in noninvasive scalp recordings in humans with this paradigm [7, 8], it has proven challenging to observe both phenomena in invasive action potential recordings in other animals [9-11]. During a visual oddball experiment, we recorded multi-unit spiking activity in rat primary visual cortex (V1) and latero-intermediate area (LI), which is a higher area of the rodent ventral visual stream. In rat V1, there was only evidence for response suppression related to stimulus-specific adaptation, and not for response enhancement. However, higher up in area LI, spiking activity showed clear surprise-based response enhancement in addition to stimulus-specific adaptation. These results show that neural responses along the rat ventral visual stream become increasingly sensitive to changes in the visual environment, suggesting a system specialized in the detection of unexpected events.

  5. Neuropharmacological effect of methylphenidate on attention network in children with attention deficit hyperactivity disorder during oddball paradigms as assessed using functional near-infrared spectroscopy.

    PubMed

    Nagashima, Masako; Monden, Yukifumi; Dan, Ippeita; Dan, Haruka; Tsuzuki, Daisuke; Mizutani, Tsutomu; Kyutoku, Yasushi; Gunji, Yuji; Momoi, Mariko Y; Watanabe, Eiju; Yamagata, Takanori

    2014-07-01

    The current study aimed to explore the neural substrate for methylphenidate effects on attentional control in school-aged children with attention deficit hyperactivity disorder (ADHD) using functional near-infrared spectroscopy (fNIRS), which can be applied to young children with ADHD more easily than conventional neuroimaging modalities. Using fNIRS, we monitored the oxy-hemoglobin signal changes of 22 ADHD children (6 to 14 years old) performing an oddball task before and 1.5 h after methylphenidate or placebo administration, in a randomized, double-blind, placebo-controlled, crossover design. Twenty-two age- and gender-matched normal controls without methylphenidate administration were also monitored. In the control subjects, the oddball task recruited the right prefrontal and inferior parietal cortices, and this activation was absent in premedicated ADHD children. The reduced right prefrontal activation was normalized after methylphenidate but not placebo administration in ADHD children. These results are consistent with the neuropharmacological effects of methylphenidate to upregulate the dopamine system in the prefrontal cortex, which is innervated from the ventral tegmentum (mesocortical pathway), but not the noradrenergic system in the parietal cortex, which is innervated from the locus coeruleus. Thus, right prefrontal activation would serve as an objective neurofunctional biomarker to indicate the effectiveness of methylphenidate on ADHD children in attentional control. fNIRS monitoring enhances early clinical diagnosis and the treatment of ADHD children, especially those with an inattention phenotype.

  6. Neuropharmacological effect of methylphenidate on attention network in children with attention deficit hyperactivity disorder during oddball paradigms as assessed using functional near-infrared spectroscopy

    PubMed Central

    Nagashima, Masako; Monden, Yukifumi; Dan, Ippeita; Dan, Haruka; Tsuzuki, Daisuke; Mizutani, Tsutomu; Kyutoku, Yasushi; Gunji, Yuji; Momoi, Mariko Y.; Watanabe, Eiju; Yamagata, Takanori

    2014-01-01

    Abstract. The current study aimed to explore the neural substrate for methylphenidate effects on attentional control in school-aged children with attention deficit hyperactivity disorder (ADHD) using functional near-infrared spectroscopy (fNIRS), which can be applied to young children with ADHD more easily than conventional neuroimaging modalities. Using fNIRS, we monitored the oxy-hemoglobin signal changes of 22 ADHD children (6 to 14 years old) performing an oddball task before and 1.5 h after methylphenidate or placebo administration, in a randomized, double-blind, placebo-controlled, crossover design. Twenty-two age- and gender-matched normal controls without methylphenidate administration were also monitored. In the control subjects, the oddball task recruited the right prefrontal and inferior parietal cortices, and this activation was absent in premedicated ADHD children. The reduced right prefrontal activation was normalized after methylphenidate but not placebo administration in ADHD children. These results are consistent with the neuropharmacological effects of methylphenidate to upregulate the dopamine system in the prefrontal cortex, which is innervated from the ventral tegmentum (mesocortical pathway), but not the noradrenergic system in the parietal cortex, which is innervated from the locus coeruleus. Thus, right prefrontal activation would serve as an objective neurofunctional biomarker to indicate the effectiveness of methylphenidate on ADHD children in attentional control. fNIRS monitoring enhances early clinical diagnosis and the treatment of ADHD children, especially those with an inattention phenotype. PMID:26157971

  7. Neuropharmacological effect of atomoxetine on attention network in children with attention deficit hyperactivity disorder during oddball paradigms as assessed using functional near-infrared spectroscopy

    PubMed Central

    Nagashima, Masako; Monden, Yukifumi; Dan, Ippeita; Dan, Haruka; Mizutani, Tsutomu; Tsuzuki, Daisuke; Kyutoku, Yasushi; Gunji, Yuji; Hirano, Daisuke; Taniguchi, Takamichi; Shimoizumi, Hideo; Momoi, Mariko Y.; Yamagata, Takanori; Watanabe, Eiju

    2014-01-01

    Abstract. The current study aimed to explore the neural substrate for atomoxetine effects on attentional control in school-aged children with attention deficit hyperactivity disorder (ADHD) using functional near-infrared spectroscopy (fNIRS), which can be applied to young children with ADHD more easily than conventional neuroimaging modalities. Using fNIRS, we monitored the oxy-hemoglobin signal changes of 15 ADHD children (6 to 14 years old) performing an oddball task before and 1.5 h after atomoxetine or placebo administration, in a randomized, double-blind, placebo-controlled, crossover design. Fifteen age-, gender-, and intelligence quotient-matched normal controls without atomoxetine administration were also monitored. In the control subjects, the oddball task recruited the right prefrontal and inferior parietal cortices. The right prefrontal and parietal activation was normalized after atomoxetine administration in ADHD children. This was in contrast to our previous study using a similar protocol showing methylphenidate-induced normalization of only the right prefrontal function. fNIRS allows the detection of differential neuropharmacological profiles of both substances in the attentional network: the neuropharmacological effects of atomoxetine to upregulate the noradrenergic system reflected in the right prefrontal and inferior parietal activations and those of methylphenidate to upregulate the dopamine system reflected in the prefrontal cortex activation. PMID:26157979

  8. Go and no-go P3 with rare and frequent stimuli in oddball tasks: A study comparing key-pressing with counting.

    PubMed

    Verleger, Rolf; Grauhan, Nils; Śmigasiewicz, Kamila

    2016-12-01

    The P3 component of event-related potentials (ERPs) is large at posterior scalp sites with rare go stimuli (go-P3) and at anterior sites with rare no-go stimuli (no-go P3). Most hypotheses on P3, including our S-R link reactivation notion, imply that these characteristics are independent of specific response modes. This assumption was investigated here by comparing ERPs between key-pressing and covert counting responses in oddball tasks that required responses to either frequent or rare stimuli. Replicating previous results, topographic differences between parietal rare go-P3 and fronto-central rare no-go P3 were much reduced with counting compared to key-press responses. In addition, while go-P3 was generally more positive than no-go P3 in the key-press tasks (except for the fronto-central no-go P3 with rare stimuli), the reverse was true in the counting tasks. Thus, the characteristic posterior and fronto-central go and no-go topographies appear to be specific to hand movements. Moreover, P3s evoked in counting tasks might not simply be P3 proper, uncontaminated by movement-related potentials; rather, the go/no-go logic might differ between counting and key-pressing, with the oddball effects being affected by specific processes implemented for these particular response modes.

  9. Task difficulty affects the predictive process indexed by visual mismatch negativity.

    PubMed

    Kimura, Motohiro; Takeda, Yuji

    2013-01-01

    Visual mismatch negativity (MMN) is an event-related brain potential (ERP) component that is elicited by prediction-incongruent events in successive visual stimulation. Previous oddball studies have shown that visual MMN in response to task-irrelevant deviant stimuli is insensitive to the manipulation of task difficulty, which supports the notion that visual MMN reflects attention-independent predictive processes. In these studies, however, visual MMN was evaluated in deviant-minus-standard difference waves, which may lead to an underestimation of the effects of task difficulty due to the possible superposition of an N1 difference reflecting refractory effects. In the present study, we investigated the effects of task difficulty on visual MMN less contaminated by the N1 difference. While the participant performed a size-change detection task on a continuously presented central fixation circle, we presented oddball sequences consisting of deviant and standard bar stimuli with different orientations (9.1 and 90.9%) and equiprobable sequences consisting of 11 types of control bar stimuli with different orientations (9.1% each) at the surrounding visual fields. Task difficulty was manipulated by varying the magnitude of the size-change. We found that the peak latencies of visual MMN evaluated in the deviant-minus-control difference waves were delayed as a function of task difficulty. Therefore, in contrast to the previous understanding, the present findings support the notion that visual MMN is associated with attention-demanding predictive processes.
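
    The latency measure used here (peak of the deviant-minus-control difference wave, with the control taken from the equiprobable sequence) can be sketched on synthetic averages as follows; the waveform shapes, search window, and names are illustrative assumptions, not the authors' analysis code.

```python
# Illustrative sketch (synthetic averages): vMMN peak latency measured in a
# deviant-minus-control difference wave, where the control is the physically
# identical bar taken from the equiprobable sequence.
import numpy as np

fs = 500
times = np.arange(-0.1, 0.5, 1.0 / fs)

def vmmn_peak_latency(erp_deviant, erp_control, times, window=(0.15, 0.35)):
    """Latency (s) of the most negative point of deviant-minus-control
    within the search window."""
    diff = erp_deviant - erp_control
    mask = (times >= window[0]) & (times <= window[1])
    return times[mask][np.argmin(diff)]

# Fake averaged waveforms for an easy and a hard task condition; the hard
# condition's vMMN is delayed, mimicking the reported latency effect.
control = -1.0 * np.exp(-((times - 0.22) ** 2) / (2 * 0.04 ** 2))
dev_easy = control - 1.5 * np.exp(-((times - 0.23) ** 2) / (2 * 0.04 ** 2))
dev_hard = control - 1.5 * np.exp(-((times - 0.28) ** 2) / (2 * 0.04 ** 2))

for name, dev in [("easy", dev_easy), ("hard", dev_hard)]:
    print(name, f"{vmmn_peak_latency(dev, control, times) * 1000:.0f} ms")
```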

  10. Preattentive Processing of Numerical Visual Information.

    PubMed

    Hesse, Philipp N; Schmitt, Constanze; Klingenhoefer, Steffen; Bremmer, Frank

    2017-01-01

    Humans can perceive and estimate approximate numerical information, even when accurate counting is impossible e.g., due to short presentation time. If the number of objects to be estimated is small, typically around 1-4 items, observers are able to give very fast and precise judgments with high confidence-an effect that is called subitizing. Due to its speed and effortless nature subitizing has usually been assumed to be preattentive, putting it into the same category as other low level visual features like color or orientation. More recently, however, a number of studies have suggested that subitizing might be dependent on attentional resources. In our current study we investigated the potentially preattentive nature of visual numerical perception in the subitizing range by means of EEG. We presented peripheral, task irrelevant sequences of stimuli consisting of a certain number of circular patches while participants were engaged in a demanding, non-numerical detection task at the fixation point drawing attention away from the number stimuli. Within a sequence of stimuli of a given number of patches (called "standards") we interspersed some stimuli of different numerosity ("oddballs"). We compared the evoked responses to visually identical stimuli that had been presented in two different conditions, serving as standard in one condition and as oddball in the other. We found significant visual mismatch negativity (vMMN) responses over parieto-occipital electrodes. In addition to the event-related potential (ERP) analysis, we performed a time-frequency analysis (TFA) to investigate whether the vMMN was accompanied by additional oscillatory processes. We found a concurrent increase in evoked theta power of similar strength over both hemispheres. Our results provide clear evidence for a preattentive processing of numerical visual information in the subitizing range.
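
    A minimal sketch of the evoked theta-power measure mentioned above, assuming synthetic trials and a hand-rolled Morlet-wavelet convolution rather than the authors' toolchain:

```python
# Illustrative sketch (synthetic data): evoked theta power via Morlet-wavelet
# convolution of the averaged response, the kind of time-frequency measure
# reported alongside the vMMN in the abstract.
import numpy as np
from scipy.signal import fftconvolve

fs = 250.0
times = np.arange(-0.2, 0.8, 1.0 / fs)
rng = np.random.default_rng(2)

# Average over fake trials containing a ~6 Hz burst around 250 ms
trials = [np.sin(2 * np.pi * 6 * times) * np.exp(-((times - 0.25) ** 2) / (2 * 0.08 ** 2))
          + rng.normal(0, 0.5, times.size) for _ in range(60)]
evoked = np.mean(trials, axis=0)

def morlet_power(signal, fs, freq, n_cycles=5.0):
    """Power envelope of `signal` at `freq` from convolution with a complex Morlet wavelet."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1.0 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))          # unit energy
    analytic = fftconvolve(signal, wavelet, mode="same")      # same length as signal
    return np.abs(analytic) ** 2

theta_power = np.mean([morlet_power(evoked, fs, f) for f in (4, 5, 6, 7)], axis=0)
print(f"evoked theta power peaks at {times[np.argmax(theta_power)] * 1000:.0f} ms")
```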

  11. The development of the N1 and N2 components in auditory oddball paradigms: a systematic review with narrative analysis and suggested normative values.

    PubMed

    Tomé, David; Barbosa, Fernando; Nowak, Kamila; Marques-Teixeira, João

    2015-03-01

    Auditory event-related potentials (AERPs) are widely used in diverse fields of today's neuroscience, concerning auditory processing, speech perception, language acquisition, neurodevelopment, attention and cognition in normal aging, gender, developmental, neurologic and psychiatric disorders. However, their transposition to clinical practice has remained minimal, mainly due to the scarce literature on normative data across age, the wide spectrum of results, the variety of auditory stimuli used, and the different neuropsychological meanings attributed to AERP components by different authors. One of the most prominent AERP components studied in recent decades is N1, which reflects auditory detection and discrimination. Subsequently, N2 indicates attention allocation and phonological analysis. The simultaneous analysis of N1 and N2 elicited by feasible novelty experimental paradigms, such as the auditory oddball, seems an objective method to assess central auditory processing. The aim of this systematic review was to bring forward normative values for auditory oddball N1 and N2 components across age. EBSCO, PubMed, Web of Knowledge and Google Scholar were systematically searched for studies that elicited N1 and/or N2 by an auditory oddball paradigm. A total of 2,764 papers were initially identified in the database search, of which 19 resulted from hand search and additional references, covering 1988 to 2013 (the last 25 years). A final total of 68 studies met the eligibility criteria with a total of 2,406 participants from control groups for N1 (age range 6.6-85 years; mean 34.42) and 1,507 for N2 (age range 9-85 years; mean 36.13). Polynomial regression analysis revealed that N1 latency decreases with aging at Fz and Cz; N1 amplitude at Cz decreases from childhood to adolescence and stabilizes after 30-40 years, whereas at Fz the decrement ends by 60 years and amplitude increases markedly after this age. Regarding N2, latency did not covary with age but amplitude showed a significant decrement for both Cz and Fz. Results
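
    The normative curves described above come from polynomial regression of component latency or amplitude on age. A minimal sketch with simulated values (not the reviewed data; coefficients and ages are invented) is shown below.

```python
# Illustrative sketch (simulated values): fitting a polynomial regression of
# N1 latency on age, the kind of model behind the normative curves in the review.
import numpy as np

rng = np.random.default_rng(3)
age = rng.uniform(7, 85, 200)
# Simulated latency (ms): decreases through childhood, then roughly flattens
latency = 140 - 0.9 * age + 0.007 * age**2 + rng.normal(0, 8, age.size)

coeffs = np.polyfit(age, latency, deg=2)       # second-order polynomial fit
model = np.poly1d(coeffs)
for a in (10, 30, 60, 80):
    print(f"predicted N1 latency at age {a}: {model(a):.1f} ms")
```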

  12. Cortical activity and children's rituals, habits and other repetitive behavior: a visual P300 study.

    PubMed

    Evans, David W; Maliken, Ashley

    2011-10-10

    This study examines the link between children's repetitive, ritualistic, behavior and cortical brain activity. Twelve typically developing children between the ages of 6 and 12 years were administered two visual P300, oddball tasks with a 32-electrode electroencephalogram (EEG) system. One of the oddball tasks was specifically designed to reflect sensitivity to asymmetry, a phenomenon common in children and in a variety of disorders involving compulsive behavior. Parents completed the Childhood Routines Inventory. Children's repetitive, compulsive-like behaviors were strongly associated with faster processing of an asymmetrical target stimulus, even when accounting for their P300 latencies on a control task. The research punctuates the continuity between observed brain-behavior links in clinical disorders such as OCD and autism spectrum disorders, and normative variants of repetitive behavior.

  13. Visual and audiovisual effects of isochronous timing on visual perception and brain activity.

    PubMed

    Marchant, Jennifer L; Driver, Jon

    2013-06-01

    Understanding how the brain extracts and combines temporal structure (rhythm) information from events presented to different senses remains unresolved. Many neuroimaging beat perception studies have focused on the auditory domain and show the presence of a highly regular beat (isochrony) in "auditory" stimulus streams enhances neural responses in a distributed brain network and affects perceptual performance. Here, we acquired functional magnetic resonance imaging (fMRI) measurements of brain activity while healthy human participants performed a visual task on isochronous versus randomly timed "visual" streams, with or without concurrent task-irrelevant sounds. We found that visual detection of higher intensity oddball targets was better for isochronous than randomly timed streams, extending previous auditory findings to vision. The impact of isochrony on visual target sensitivity correlated positively with fMRI signal changes not only in visual cortex but also in auditory sensory cortex during audiovisual presentations. Visual isochrony activated a similar timing-related brain network to that previously found primarily in auditory beat perception work. Finally, activity in multisensory left posterior superior temporal sulcus increased specifically during concurrent isochronous audiovisual presentations. These results indicate that regular isochronous timing can modulate visual processing and this can also involve multisensory audiovisual brain mechanisms.

  14. Effects of a small dose of olanzapine on healthy subjects according to their schizotypy: an ERP study using a semantic categorization and an oddball task.

    PubMed

    Debruille, J Bruno; Rodier, Mitchell; Prévost, Marie; Lionnet, Claire; Molavi, Siamak

    2013-05-01

    Delusions and hallucinations are often meaningful. They thus reveal abnormal semantic activations. To start testing whether antipsychotics act by reducing abnormal semantic activations we focused on the N400 event-related brain potential, which is elicited by meaningful stimuli, such as words, and whose distribution on the scalp is known to depend on the semantic category of these stimuli. We used a semantic-categorization task specially designed to reduce the impact of the variations of context processing across subjects' groups and a classical oddball task as a control. Healthy subjects were recruited rather than psychotic patients to ensure that the medication effects could not be secondary to a reduction of symptoms. These participants (n=47) were tested in a double-blind cross-over paradigm where the ERP effects of 2.5mg of olanzapine taken on the eve of the testing were compared to those of the placebo. The amplitudes of the N400s elicited by the target words were greater at anterior scalp sites in the half of the subjects having higher schizotypal scores. Olanzapine reduced these larger N400s and had no effect on the small anterior N400s of the half of the subjects with lower scores. These results are discussed as consistent with the idea that antipsychotics reduce abnormal activations of particular semantic representations. Further studies should thus be done to see if this reduction correlates with and predicts the decrease of psychotic symptoms in patients.

  15. Strong Radio Emission from a Hyperactive L Dwarf: A Low-Mass Oddball or a Rosetta Stone for Ultracool Dwarf Activity?

    NASA Astrophysics Data System (ADS)

    Burgasser, Adam J.; Melis, C.; Zauderer, B. A.; Berger, E.

    2013-01-01

    We report the detection of radio emission at 5.5 GHz from the unusually active L5 + T7 binary 2MASS J13153094-2649513AB, based on observations conducted with the Australia Telescope Compact Array. An unresolved source at the proper-motion-corrected position of 2MASS J1315-2649AB was detected with a continuum flux of 0.37 ± 0.05 mJy, corresponding to a radio luminosity L_rad = (9 ± 3) × 10^23 erg/s or log(L_rad/L_bol) = -5.24 ± 0.22. While we cannot resolve the emission to one or both components, its strength strongly favors the L5 primary, making this component the latest-type L dwarf to be detected in the radio. No detection is made at 9.0 GHz to a 5-sigma limit of 0.29 mJy, consistent with a declining power-law spectrum scaling as nu^-0.5 or steeper. The emission is quiescent, with no evidence of variability or bursts over 3 hours, and no measurable polarization (V/I < 34%). 2MASS J1315-2649AB is one of the most radio-luminous ultracool dwarfs detected in quiescent emission to date, comparable in strength to other ultracool dwarfs detected while in outburst. Its combination of strong and persistent H-alpha and radio emission is unique among L dwarfs, but we find no evidence of interaction between primary and secondary. We suggest further observations that may reveal whether 2MASS J1315-2649AB is a true oddball or a benchmark for understanding the origins of activity in the coldest stars and brown dwarfs.

  16. Sex Differences in Gamma Band Functional Connectivity Between the Frontal Lobe and Cortical Areas During an Auditory Oddball Task, as Revealed by Imaginary Coherence Assessment

    PubMed Central

    Fujimoto, Toshiro; Okumura, Eiichi; Kodabashi, Atsushi; Takeuchi, Kouzou; Otsubo, Toshiaki; Nakamura, Katsumi; Yatsushiro, Kazutaka; Sekine, Masaki; Kamiya, Shinichiro; Shimooki, Susumu; Tamura, Toshiyo

    2016-01-01

    We studied sex-related differences in gamma oscillation during an auditory oddball task, using magnetoencephalography and electroencephalography, with assessment of imaginary coherence (IC). We obtained a statistical source map of event-related desynchronization (ERD) / event-related synchronization (ERS), and compared females and males regarding ERD/ERS. Based on the results, we chose seed regions for IC determination in the low (30-50 Hz), mid (50-100 Hz), and high (100-150 Hz) gamma bands, respectively. In males, ERD was increased in the left posterior cingulate cortex (CGp) at 500 ms in the low gamma band, and in the right caudal anterior cingulate cortex (cACC) at 125 ms in the mid-gamma band. ERS was increased in the left rostral anterior cingulate cortex (rACC) at 375 ms in the high gamma band. We chose the CGp, cACC and rACC as seeds, and examined IC between the seed and certain target regions using the IC map. IC changes depended on the gamma frequency range and on the time window. Although IC in the mid and high gamma bands did not show sex-specific differences, IC at 30-50 Hz in males was increased between the left rACC and the frontal, orbitofrontal, inferior temporal and fusiform target regions. The increased IC in males suggested that males may approach the task constructively, analytically, and emotionally, and that their information processing in the cortico-cortical circuit was more elaborate. Females, on the other hand, showed few differences in IC; they appeared to approach the task with general attention and economical, well-balanced processing, which was explained by their higher overall functional cortical connectivity. CGp, cACC and rACC were involved in sex differences in information processing and were likely related to differences in neuroanatomy, hormones and neurotransmitter systems. PMID:27708745
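
    Imaginary coherence, the coupling measure used in this study, is the imaginary part of the complex coherency between two channels, which discounts zero-lag coupling such as volume conduction. A minimal sketch on synthetic signals using SciPy spectral estimates follows; the sampling rate, lag, and band limits are illustrative assumptions.

```python
# Illustrative sketch (synthetic signals): imaginary coherence (IC) between two
# channels that share a lagged 40 Hz (low gamma) rhythm.
import numpy as np
from scipy.signal import csd, welch

fs = 600.0
rng = np.random.default_rng(4)
n = int(60 * fs)
t = np.arange(n) / fs

common = np.sin(2 * np.pi * 40 * t)                          # shared 40 Hz rhythm
x = common + rng.normal(0, 1, n)
y = np.roll(common, int(0.005 * fs)) + rng.normal(0, 1, n)   # 5 ms lag in channel y

f, sxy = csd(x, y, fs=fs, nperseg=1024)      # complex cross-spectral density
_, sxx = welch(x, fs=fs, nperseg=1024)
_, syy = welch(y, fs=fs, nperseg=1024)
coherency = sxy / np.sqrt(sxx * syy)
imag_coh = np.abs(np.imag(coherency))        # discounts zero-lag coupling

band = (f >= 30) & (f <= 50)
print(f"mean |ImCoh| in 30-50 Hz: {imag_coh[band].mean():.2f}")
```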

  17. Brain Dynamics of Aging: Multiscale Variability of EEG Signals at Rest and during an Auditory Oddball Task1,2,3

    PubMed Central

    Sleimen-Malkoun, Rita; Perdikis, Dionysios; Müller, Viktor; Blanc, Jean-Luc; Huys, Raoul; Temprado, Jean-Jacques

    2015-01-01

    Abstract The present work focused on the study of fluctuations of cortical activity across time scales in young and older healthy adults. The main objective was to offer a comprehensive characterization of the changes of brain (cortical) signal variability during aging, and to make the link with known underlying structural, neurophysiological, and functional modifications, as well as aging theories. We analyzed electroencephalogram (EEG) data of young and elderly adults, which were collected at resting state and during an auditory oddball task. We used a wide battery of metrics that typically are separately applied in the literature, and we compared them with more specific ones that address their limits. Our procedure aimed to overcome some of the methodological limitations of earlier studies and verify whether previous findings can be reproduced and extended to different experimental conditions. In both rest and task conditions, our results mainly revealed that EEG signals presented systematic age-related changes that were time-scale-dependent with regard to the structure of fluctuations (complexity) but not with regard to their magnitude. Namely, compared with young adults, the cortical fluctuations of the elderly were more complex at shorter time scales, but less complex at longer scales, although always showing a lower variance. Additionally, the elderly showed signs of dedifferentiation both across space and between experimental conditions. By integrating these so far isolated findings across time scales, metrics, and conditions, the present study offers an overview of age-related changes in the fluctuations of electrocortical activity while making the link with underlying brain dynamics. PMID:26464983
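
    The scale-dependent contrast between the magnitude and the structure of fluctuations can be illustrated with coarse-graining: average the signal over progressively longer windows, then track its standard deviation and an entropy estimate at each scale. The sketch below uses a synthetic signal and a naive sample-entropy implementation; all parameters are illustrative assumptions, not the study's pipeline.

```python
# Illustrative sketch (synthetic signal): multiscale characterization of
# fluctuations via coarse-graining, tracking magnitude (SD) and structure
# (sample entropy) at each time scale.
import numpy as np

rng = np.random.default_rng(5)
fs = 250
signal = np.cumsum(rng.normal(0, 1, 30 * fs)) * 0.02 + rng.normal(0, 1, 30 * fs)

def coarse_grain(x, scale):
    """Non-overlapping averages of `scale` consecutive samples."""
    n = (len(x) // scale) * scale
    return x[:n].reshape(-1, scale).mean(axis=1)

def sample_entropy(x, m=2, r_factor=0.2):
    """Naive O(N^2) sample entropy; tolerance r = r_factor * SD(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    def pair_count(length):
        tpl = np.array([x[i:i + length] for i in range(len(x) - length)])
        dist = np.max(np.abs(tpl[:, None, :] - tpl[None, :, :]), axis=2)
        return np.sum(dist <= r) - len(tpl)        # drop self-matches
    b, a = pair_count(m), pair_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

for scale in (1, 2, 4, 8, 16):
    cg = coarse_grain(signal, scale)[:1500]        # cap length to keep O(N^2) cheap
    print(f"scale {scale:2d}: SD = {cg.std():.2f}, SampEn = {sample_entropy(cg):.2f}")
```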

  18. Effect of interstimulus interval on visual P300 in Parkinson's disease

    PubMed Central

    Wang, L.; Kuroiwa, Y.; Kamitani, T.; Takahashi, T.; Suzuki, Y.; Hasegawa, O.

    1999-01-01

    OBJECTIVE—Visual event-related potentials (ERPs) were studied during an oddball paradigm, to test whether cognitive slowing exists in Parkinson's disease and to investigate whether cognitive information processing can be influenced by different interstimulus intervals (ISIs) of an oddball task in patients with Parkinson's disease and normal subjects.
METHODS—ERPs and reaction time were measured in 38 non-demented patients with Parkinson's disease and 24 healthy elderly subjects. A visual oddball paradigm was employed to evoke ERPs at three different interstimulus intervals: ISI(S), 1600 ms; ISI(M), 3100 ms; and ISI(L), 5100 ms. The effect of the ISIs on ERPs and reaction time was investigated.
RESULTS—Compared with the normal subjects, P300 latency at Cz and Pz was significantly delayed after rare target stimuli in patients with Parkinson's disease only at ISI(L). Reaction time was prolonged in patients at all three ISIs, compared with the normal controls. N200 latency was also significantly delayed and P300 amplitude reduced at Cz and/or Pz to rare non-target stimuli in patients at the three ISIs, compared with the normal controls. During rare target visual stimulation, P300 latency and reaction time in the patients with Parkinson's disease and reaction time in the normal subjects were gradually prolonged as the ISI increased.
CONCLUSION—Prolonged N200 latency to rare non-target stimuli might indicate that automatic cognitive processing was slowed in Parkinson's disease. Cognitive processing reflected by P300 latency to rare target stimuli was influenced by longer ISIs in patients with Parkinson's disease.

 PMID:10486398

  19. Evaluation of passive polarized stereoscopic 3D display for visual & mental fatigues.

    PubMed

    Amin, Hafeez Ullah; Malik, Aamir Saeed; Mumtaz, Wajid; Badruddin, Nasreen; Kamel, Nidal

    2015-01-01

    Visual and mental fatigue induced by active shutter stereoscopic 3D (S3D) displays has been reported using event-related brain potentials (ERPs). An important open question, namely whether such effects (visual and mental fatigue) also occur with passive polarized S3D displays, is addressed here. Sixty-eight healthy participants are divided into 2D and S3D groups and subjected to an oddball paradigm after being exposed to S3D videos on a passive polarized display or to a 2D display. The age and fluid intelligence ability of the participants are controlled between the groups. The ERP results do not show any significant differences between the S3D and 2D groups that would indicate aftereffects of S3D viewing in terms of visual or mental fatigue. Hence, we conclude that passive polarized S3D display technology may not induce the visual and/or mental fatigue that would increase cognitive load and suppress ERP components.

  20. Visual field

    MedlinePlus

    Perimetry; Tangent screen exam; Automated perimetry exam; Goldmann visual field exam; Humphrey visual field exam ... Confrontation visual field exam : This is a quick and basic check of the visual field. The health care provider ...

  1. Visual Impairment

    MedlinePlus

    ... with the brain, making vision impossible. What Is Visual Impairment? Many people have some type of visual ...

  2. Snap Your Fingers! An ERP/sLORETA Study Investigating Implicit Processing of Self- vs. Other-Related Movement Sounds Using the Passive Oddball Paradigm.

    PubMed

    Justen, Christoph; Herbert, Cornelia

    2016-01-01

    So far, neurophysiological studies have investigated implicit and explicit self-related processing particularly for self-related stimuli such as the own face or name. The present study extends previous research to the implicit processing of self-related movement sounds and explores their spatio-temporal dynamics. Event-related potentials (ERPs) were assessed while participants (N = 12 healthy subjects) listened passively to previously recorded self- and other-related finger snapping sounds, presented either as deviants or standards during an oddball paradigm. Passive listening to low (500 Hz) and high (1000 Hz) pure tones served as additional control. For self- vs. other-related finger snapping sounds, analysis of ERPs revealed significant differences in the time windows of the N2a/MMN and P3. A subsequent source localization analysis with standardized low-resolution brain electromagnetic tomography (sLORETA) revealed increased cortical activation in distinct motor areas such as the supplementary motor area (SMA) in the N2a/mismatch negativity (MMN) as well as the P3 time window during processing of self- and other-related finger snapping sounds. In contrast, brain regions associated with self-related processing [e.g., right anterior/posterior cingulate cortex (ACC/PCC)] as well as the right inferior parietal lobule (IPL) showed increased activation particularly during processing of self- vs. other-related finger snapping sounds in the time windows of the N2a/MMN (ACC/PCC) or the P3 (IPL). None of these brain regions showed enhanced activation while listening passively to low (500 Hz) and high (1000 Hz) pure tones. Taken together, the current results indicate (1) a specific role of motor regions such as SMA during auditory processing of movement-related information, regardless of whether this information is self- or other-related, (2) activation of neural sources such as the ACC/PCC and the IPL during implicit processing of self-related movement stimuli, and (3

  3. Snap Your Fingers! An ERP/sLORETA Study Investigating Implicit Processing of Self- vs. Other-Related Movement Sounds Using the Passive Oddball Paradigm

    PubMed Central

    Justen, Christoph; Herbert, Cornelia

    2016-01-01

    So far, neurophysiological studies have investigated implicit and explicit self-related processing particularly for self-related stimuli such as the own face or name. The present study extends previous research to the implicit processing of self-related movement sounds and explores their spatio-temporal dynamics. Event-related potentials (ERPs) were assessed while participants (N = 12 healthy subjects) listened passively to previously recorded self- and other-related finger snapping sounds, presented either as deviants or standards during an oddball paradigm. Passive listening to low (500 Hz) and high (1000 Hz) pure tones served as additional control. For self- vs. other-related finger snapping sounds, analysis of ERPs revealed significant differences in the time windows of the N2a/MMN and P3. A subsequent source localization analysis with standardized low-resolution brain electromagnetic tomography (sLORETA) revealed increased cortical activation in distinct motor areas such as the supplementary motor area (SMA) in the N2a/mismatch negativity (MMN) as well as the P3 time window during processing of self- and other-related finger snapping sounds. In contrast, brain regions associated with self-related processing [e.g., right anterior/posterior cingulate cortex (ACC/PCC)] as well as the right inferior parietal lobule (IPL) showed increased activation particularly during processing of self- vs. other-related finger snapping sounds in the time windows of the N2a/MMN (ACC/PCC) or the P3 (IPL). None of these brain regions showed enhanced activation while listening passively to low (500 Hz) and high (1000 Hz) pure tones. Taken together, the current results indicate (1) a specific role of motor regions such as SMA during auditory processing of movement-related information, regardless of whether this information is self- or other-related, (2) activation of neural sources such as the ACC/PCC and the IPL during implicit processing of self-related movement stimuli, and (3

  4. Visual Perception versus Visual Function.

    ERIC Educational Resources Information Center

    Lieberman, Laurence M.

    1984-01-01

    Distinctions are drawn between visual perception and visual function, and four optometrists respond with further analysis of the visual perception-visual function controversy and its implications for children with learning problems. (CL)

  5. The visual mismatch negativity (vMMN): toward the optimal paradigm.

    PubMed

    Qian, Xiaosen; Liu, Yi; Xiao, Bing; Gao, Li; Li, Songlin; Dang, Lijie; Si, Cuiping; Zhao, Lun

    2014-09-01

    In the present article, we tested an optimal vMMN paradigm allowing one to obtain vMMNs for several visual attributes in a short time. vMMN responses to changes in color, duration, orientation, shape, and size were compared between the traditional 'oddball' paradigm (a single type of visual change in each sequence) and the optimal paradigm, in which all five types of change appeared within the same sequence. The vMMNs obtained in the optimal paradigm were equal to or larger in amplitude than those in the traditional vMMN paradigm. The optimal paradigm can thus provide five different vMMNs in the time usually needed to obtain just one. This short objective measure could putatively be used as an index of visual cognitive function, especially in clinical research.
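
    The key design idea, several deviant types rotating through one sequence with a standard between successive deviants, can be sketched as a simple sequence generator. The parameters and naming below are hypothetical, written in the spirit of the optimal paradigm rather than as a reproduction of the authors' stimulus protocol.

```python
# Illustrative sketch (hypothetical parameters): build a stimulus sequence in the
# spirit of the "optimal" multi-feature paradigm, where standards alternate with
# deviants and the five deviant types rotate so each appears about equally often.
import random

DEVIANT_TYPES = ["color", "duration", "orientation", "shape", "size"]

def optimal_sequence(n_pairs=250, seed=0):
    """Return a list like ['standard', 'color-deviant', 'standard', ...] in which
    every second stimulus is a deviant and no deviant type repeats back-to-back."""
    rng = random.Random(seed)
    seq, last = [], None
    for _ in range(n_pairs):
        dev = rng.choice([d for d in DEVIANT_TYPES if d != last])
        seq.extend(["standard", f"{dev}-deviant"])
        last = dev
    return seq

seq = optimal_sequence()
print(seq[:8])
counts = {d: sum(s.startswith(d) for s in seq) for d in DEVIANT_TYPES}
print(counts)   # each deviant type occurs roughly n_pairs / 5 times
```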

  6. Preattentive Processing of Numerical Visual Information

    PubMed Central

    Hesse, Philipp N.; Schmitt, Constanze; Klingenhoefer, Steffen; Bremmer, Frank

    2017-01-01

    Humans can perceive and estimate approximate numerical information, even when accurate counting is impossible e.g., due to short presentation time. If the number of objects to be estimated is small, typically around 1–4 items, observers are able to give very fast and precise judgments with high confidence—an effect that is called subitizing. Due to its speed and effortless nature subitizing has usually been assumed to be preattentive, putting it into the same category as other low level visual features like color or orientation. More recently, however, a number of studies have suggested that subitizing might be dependent on attentional resources. In our current study we investigated the potentially preattentive nature of visual numerical perception in the subitizing range by means of EEG. We presented peripheral, task irrelevant sequences of stimuli consisting of a certain number of circular patches while participants were engaged in a demanding, non-numerical detection task at the fixation point drawing attention away from the number stimuli. Within a sequence of stimuli of a given number of patches (called “standards”) we interspersed some stimuli of different numerosity (“oddballs”). We compared the evoked responses to visually identical stimuli that had been presented in two different conditions, serving as standard in one condition and as oddball in the other. We found significant visual mismatch negativity (vMMN) responses over parieto-occipital electrodes. In addition to the event-related potential (ERP) analysis, we performed a time-frequency analysis (TFA) to investigate whether the vMMN was accompanied by additional oscillatory processes. We found a concurrent increase in evoked theta power of similar strength over both hemispheres. Our results provide clear evidence for a preattentive processing of numerical visual information in the subitizing range. PMID:28261078

  7. Visual Evoked Responses During Standing and Walking

    PubMed Central

    Gramann, Klaus; Gwin, Joseph T.; Bigdely-Shamlo, Nima; Ferris, Daniel P.; Makeig, Scott

    2010-01-01

    Human cognition has been shaped both by our body structure and by its complex interactions with its environment. Our cognition is thus inextricably linked to our own and others’ motor behavior. To model brain activity associated with natural cognition, we propose recording the concurrent brain dynamics and body movements of human subjects performing normal actions. Here we tested the feasibility of such a mobile brain/body (MoBI) imaging approach by recording high-density electroencephalographic (EEG) activity and body movements of subjects standing or walking on a treadmill while performing a visual oddball response task. Independent component analysis of the EEG data revealed visual event-related potentials that did not differ across the standing, slow-walking, and fast-walking conditions, demonstrating the viability of recording brain activity accompanying cognitive processes during whole-body movement. Non-invasive and relatively low-cost MoBI studies of normal, motivated actions might improve understanding of interactions between brain and body dynamics, leading to more complete biological models of cognition. PMID:21267424

  8. Visual agnosia.

    PubMed

    Álvarez, R; Masjuan, J

    2016-03-01

    Visual agnosia is defined as an impairment of object recognition in the absence of visual acuity loss or cognitive dysfunction that would explain this impairment. This condition is caused by lesions in the visual association cortex, sparing primary visual cortex. There are 2 main pathways that process visual information: the ventral stream, tasked with object recognition, and the dorsal stream, in charge of locating objects in space. Visual agnosia can therefore be divided into 2 major groups depending on which of the two streams is damaged. The aim of this article is to conduct a narrative review of the various visual agnosia syndromes, including recent developments in a number of these syndromes.

  9. The magnitude of the central visual field could be detected by active middle-late processing of ERPs.

    PubMed

    Li, Maojuan; Liu, Xiaoqin; Li, Qianqian; Ji, Mengmeng; Huang, Wenwen; Zhang, Mingyang; Wang, Tao; Luo, Chengliang; Wang, Zufeng; Chen, Xiping; Tao, Luyang

    2016-11-01

    This study investigated the changes in event-related potential (ERP) waveforms under different central visual field conditions using a three-stimulus oddball paradigm. Circular checkerboards were presented in the center of a computer screen with a visual angle of 5°, 10°, 20°, or 30°, which were regarded as target stimuli. The ERP waveforms were analyzed separately for the different stimulus conditions. Participants responded more slowly and had lower accuracy for the 30° visual field level than for the other three visual field levels. The ERP results revealed that the amplitudes of the target P2 gradually increased from the 5° to the 20° visual field condition, while they decreased abruptly in the 30° visual field condition. Regional effects showed that the amplitudes of the target P2 were larger at the occipital electrodes than at the temporal sites. In addition to the negative-going deflections of the target N2 and visual mismatch negativity (vMMN) components tending to increase as the visual field expanded, the amplitudes of the target P3 tended to decrease and its peak latencies to lengthen with increasing visual field range. Moreover, the latencies of the difference P3 showed a trend similar to that of the target P3 latencies, and all the differences were most obvious at the 30° visual field level. The study demonstrated that middle-late components of ERPs can reflect changes in the visual field to some extent.

  10. Role of the anterior insular cortex in integrative causal signaling during multisensory auditory-visual attention.

    PubMed

    Chen, Tianwen; Michels, Lars; Supekar, Kaustubh; Kochalka, John; Ryali, Srikanth; Menon, Vinod

    2015-01-01

    Coordinated attention to information from multiple senses is fundamental to our ability to respond to salient environmental events, yet little is known about brain network mechanisms that guide integration of information from multiple senses. Here we investigate dynamic causal mechanisms underlying multisensory auditory-visual attention, focusing on a network of right-hemisphere frontal-cingulate-parietal regions implicated in a wide range of tasks involving attention and cognitive control. Participants performed three 'oddball' attention tasks involving auditory, visual and multisensory auditory-visual stimuli during fMRI scanning. We found that the right anterior insula (rAI) demonstrated the most significant causal influences on all other frontal-cingulate-parietal regions, serving as a major causal control hub during multisensory attention. Crucially, we then tested two competing models of the role of the rAI in multisensory attention: an 'integrated' signaling model in which the rAI generates a common multisensory control signal associated with simultaneous attention to auditory and visual oddball stimuli versus a 'segregated' signaling model in which the rAI generates two segregated and independent signals in each sensory modality. We found strong support for the integrated, rather than the segregated, signaling model. Furthermore, the strength of the integrated control signal from the rAI was most pronounced on the dorsal anterior cingulate and posterior parietal cortices, two key nodes of saliency and central executive networks respectively. These results were preserved with the addition of a superior temporal sulcus region involved in multisensory processing. Our study provides new insights into the dynamic causal mechanisms by which the AI facilitates multisensory attention.

  11. Supra-additive contribution of shape and surface information to individual face discrimination as revealed by fast periodic visual stimulation.

    PubMed

    Dzhelyova, Milena; Rossion, Bruno

    2014-12-24

    Face perception depends on two main sources of information: shape and surface cues. Behavioral studies suggest that both of them contribute roughly equally to discrimination of individual faces, with only a small advantage provided by their combination. However, it is difficult to quantify the respective contribution of each source of information to the visual representation of individual faces with explicit behavioral measures. To address this issue, facial morphs were created that varied in shape only, surface only, or both. Electroencephalographic (EEG) signals were recorded from 10 participants during visual stimulation at a fast periodic rate, in which the same face was presented four times consecutively and the fifth face (the oddball) varied along one of the morphed dimensions. Individual face discrimination was indexed by the periodic EEG response at the oddball rate (e.g., 5.88 Hz/5 = 1.18 Hz). While shape information was discriminated mainly at right occipitotemporal electrode sites, surface information was coded more bilaterally and provided a larger response overall. Most importantly, shape and surface changes alone were associated with much weaker responses than when both sources of information were combined in the stimulus, revealing a supra-additive effect. These observations suggest that the two kinds of information combine nonlinearly to provide a full individual face representation, face identity being more than the sum of the contribution of shape and surface cues.
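
    In fast periodic visual stimulation, the discrimination response is read directly from the EEG amplitude spectrum at the oddball frequency (base rate divided by 5), separately from the base-rate response. A minimal sketch on a synthetic signal follows; the recording length, amplitudes, and noise level are illustrative assumptions chosen so that both frequencies fall on exact FFT bins.

```python
# Illustrative sketch (synthetic data): quantify a fast-periodic-visual-stimulation
# response by reading the amplitude spectrum at the oddball frequency
# (5.88 Hz / 5 = 1.176 Hz) and at the base stimulation rate (5.88 Hz).
import numpy as np

fs = 512.0
duration = 125.0                      # chosen so 5.88 Hz and 1.176 Hz are exact FFT bins
t = np.arange(0, duration, 1.0 / fs)
base_f, oddball_f = 5.88, 5.88 / 5

rng = np.random.default_rng(6)
eeg = (0.4 * np.sin(2 * np.pi * base_f * t)        # general visual response at 5.88 Hz
       + 0.2 * np.sin(2 * np.pi * oddball_f * t)   # identity-discrimination response
       + rng.normal(0, 1.0, t.size))               # broadband noise

spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

def amp_at(f_target):
    return spectrum[np.argmin(np.abs(freqs - f_target))]

print(f"amplitude at base {base_f} Hz: {amp_at(base_f):.3f}")
print(f"amplitude at oddball {oddball_f:.3f} Hz: {amp_at(oddball_f):.3f}")
```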

  12. Event-Related Potentials in Parkinson's Disease Patients with Visual Hallucination

    PubMed Central

    Liou, Li-Min

    2016-01-01

    Using neuropsychological investigation and visual event-related potentials (ERPs), we aimed to compare the ERPs and cognitive function of nondemented Parkinson's disease (PD) patients with and without visual hallucinations (VHs) and of control subjects. We recruited 12 PD patients with VHs (PD-H), 23 PD patients without VHs (PD-NH), and 18 age-matched controls. All subjects underwent comprehensive neuropsychological assessment and visual ERPs measurement. A visual odd-ball paradigm with two different fixed interstimulus intervals (ISI) (1600 ms and 5000 ms) elicited visual ERPs. The frontal test battery was used to assess attention, visual-spatial function, verbal fluency, memory, higher executive function, and motor programming. The PD-H patients had significant cognitive dysfunction in several domains, compared to the PD-NH patients and controls. The mean P3 latency with ISI of 1600 ms in PD-H patients was significantly longer than that in controls. Logistic regression disclosed UPDRS-on score and P3 latency as significant predictors of VH. Our findings suggest that nondemented PD-H patients have worse cognitive function and P3 measurements. The development of VHs in nondemented PD patients might be implicated in executive dysfunction with altered visual information processing. PMID:28053801

  13. Visual signatures in video visualization.

    PubMed

    Chen, Min; Botchen, Ralf P; Hashim, Rudy R; Weiskopf, Daniel; Ertl, Thomas; Thornton, Ian M

    2006-01-01

    Video visualization is a computation process that extracts meaningful information from original video data sets and conveys the extracted information to users in appropriate visual representations. This paper presents a broad treatment of the subject, following a typical research pipeline involving concept formulation, system development, a path-finding user study, and a field trial with real application data. In particular, we have conducted a fundamental study on the visualization of motion events in videos. We have, for the first time, deployed flow visualization techniques in video visualization. We have compared the effectiveness of different abstract visual representations of videos. We have conducted a user study to examine whether users are able to learn to recognize visual signatures of motions, and to assist in the evaluation of different visualization techniques. We have applied our understanding and the developed techniques to a set of application video clips. Our study has demonstrated that video visualization is both technically feasible and cost-effective. It has provided the first set of evidence confirming that ordinary users can be accustomed to the visual features depicted in video visualizations, and can learn to recognize visual signatures of a variety of motion events.

  14. Visual Imagery without Visual Perception?

    ERIC Educational Resources Information Center

    Bertolo, Helder

    2005-01-01

    The question regarding visual imagery and visual perception remains an open issue. Many studies have tried to understand whether the two processes share the same mechanisms or whether they are independent, using different neural substrates. Most research has been directed towards the need for activation of primary visual areas during imagery. Here we review…

  15. Subclinical alexithymia modulates early audio-visual perceptive and attentional event-related potentials

    PubMed Central

    Delle-Vigne, Dyna; Kornreich, Charles; Verbanck, Paul; Campanella, Salvatore

    2014-01-01

    Introduction: Previous studies have highlighted the advantage of using audio–visual oddball tasks (instead of unimodal ones) in order to electrophysiologically index subclinical behavioral differences. Since alexithymia is highly prevalent in the general population, we investigated whether the use of various bimodal tasks could elicit emotional effects in low- vs. high-alexithymic scorers. Methods: Fifty students (33 females and 17 males) were split into groups based on low and high scores on the Toronto Alexithymia Scale (TAS-20). During event-related potential (ERP) recordings, they were exposed to three kinds of audio–visual oddball tasks: neutral (AVN: geometrical forms and beeps), animal (AVA: a dog and a cock with their respective calls), or emotional (AVE: faces and voices) stimuli. In each condition, participants were asked to quickly detect deviant events occurring amongst a train of repeated and frequent matching stimuli (e.g., push a button when a sad face–voice pair appeared amongst a train of neutral face–voice pairs). P100, N100, and P300 components were analyzed: P100 refers to visual perceptive and attentional processing, N100 to their auditory counterparts, and the P300 relates to response-related stages, involving memory processes. Results: High-alexithymic scorers presented a particular pattern of results when processing the emotional stimuli, reflected in early ERP components by increased P100 and N100 amplitudes in the emotional oddball tasks [P100: F(2, 48) = 20.319, p < 0.001; N100: F(2, 96) = 8.807, p = 0.001] as compared to the animal or neutral ones. Indeed, regarding the P100, subjects exhibited a higher amplitude in the AVE condition (8.717 μV), which was significantly different from that observed during the AVN condition (4.382 μV, p < 0.001). For the N100, the highest amplitude was found in the AVE condition (−4.035 μV) and the lowest was observed in the AVN condition (−2.687 μV, p = 0.003). However, no effect was found on the

  16. Global Image Dissimilarity in Macaque Inferotemporal Cortex Predicts Human Visual Search Efficiency

    PubMed Central

    Sripati, Arun P.; Olson, Carl R.

    2010-01-01

    Finding a target in a visual scene can be easy or difficult depending on the nature of the distractors. Research in humans has suggested that search is more difficult the more similar the target and distractors are to each other. However, it has not yielded an objective definition of similarity. We hypothesized that visual search performance depends on similarity as determined by the degree to which two images elicit overlapping patterns of neuronal activity in visual cortex. To test this idea, we recorded from neurons in monkey inferotemporal cortex (IT) and assessed visual search performance in humans using pairs of images formed from the same local features in different global arrangements. The ability of IT neurons to discriminate between two images was strongly predictive of the ability of humans to discriminate between them during visual search, accounting overall for 90% of the variance in human performance. A simple physical measure of global similarity – the degree of overlap between the coarse footprints of a pair of images – largely explains both the neuronal and the behavioral results. To explain the relation between population activity and search behavior, we propose a model in which the efficiency of global oddball search depends on contrast-enhancing lateral interactions in high-order visual cortex. PMID:20107054

  17. Mathematical Visualization

    ERIC Educational Resources Information Center

    Rogness, Jonathan

    2011-01-01

    Advances in computer graphics have provided mathematicians with the ability to create stunning visualizations, both to gain insight and to help demonstrate the beauty of mathematics to others. As educators these tools can be particularly important as we search for ways to work with students raised with constant visual stimulation, from video games…

  18. Visual Knowledge.

    ERIC Educational Resources Information Center

    Chipman, Susan F.

    Visual knowledge is an enormously important part of our total knowledge. The psychological study of learning and knowledge has focused almost exclusively on verbal materials. Today, the advance of technology is making the use of visual communication increasingly feasible and popular. However, this enthusiasm involves the illusion that visual…

  19. Visual Theorems.

    ERIC Educational Resources Information Center

    Davis, Philip J.

    1993-01-01

    Argues for a mathematics education that interprets the word "theorem" in a sense that is wide enough to include the visual aspects of mathematical intuition and reasoning. Defines the term "visual theorems" and illustrates the concept using the Marigold of Theodorus. (Author/MDH)

  20. Visual Thinking.

    ERIC Educational Resources Information Center

    Arnheim, Rudolf

    Based on the more general principle that all thinking (including reasoning) is basically perceptual in nature, the author proposes that visual perception is not a passive recording of stimulus material but an active concern of the mind. He delineates the task of visually distinguishing changes in size, shape, and position and points out the…

  1. The development of visual- and auditory processing in Rett syndrome: an ERP study.

    PubMed

    Stauder, Johannes E A; Smeets, Eric E J; van Mil, Saskia G M; Curfs, Leopold G M

    2006-09-01

    Rett syndrome is a neurodevelopmental disorder that occurs almost exclusively in females. It is characterized by a progressive loss of intellectual functioning and motor skills, and by the development of stereotypic hand movements, which occur after a period of normal development. Event-related potentials were recorded during a passive auditory and visual oddball task in 17 females with Rett syndrome aged between 2 and 60 years, and in age-matched controls. Overall, the participants with Rett syndrome had longer ERP latencies and smaller ERP amplitudes than the control group, suggesting slowed information processing and reduced brain activation. The Rett group also failed to show typical developmental changes in event-related brain activity and revealed a marked decline in ERP task modulation with increasing age.

  2. Visual cognition

    PubMed Central

    Cavanagh, Patrick

    2011-01-01

    Visual cognition, high-level vision, mid-level vision and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label “visual cognition” is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated part, of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events. PMID:21329719

  3. Action Properties of Object Images Facilitate Visual Search.

    PubMed

    Gomez, Michael A; Snow, Jacqueline C

    2017-03-06

    There is mounting evidence that constraints from action can influence the early stages of object selection, even in the absence of any explicit preparation for action. Here, we examined whether action properties of images can influence visual search, and whether such effects were modulated by hand preference. Observers searched for an oddball target among 3 distractors. The search arrays consisted either of images of graspable "handles" ("action-related" stimuli), or images that were otherwise identical to the handles but in which the semicircular fulcrum element was reoriented so that the stimuli no longer looked like graspable objects ("non-action-related" stimuli). In Experiment 1, right-handed observers, who have been shown previously to prefer to use the right hand over the left for manual tasks, were faster to detect targets in action-related versus non-action-related arrays, and showed a response time (reaction time [RT]) advantage for rightward- versus leftward-oriented action-related handles. In Experiment 2, left-handed observers, who have been shown to use the left and right hands relatively equally in manual tasks, were also faster to detect targets in the action-related versus non-action-related arrays, but RTs were equally fast for rightward- and leftward-oriented handle targets. Together, our results suggest that action properties in images, and constraints for action imposed by preferences for manual interaction with objects, can influence attentional selection in the context of visual search. (PsycINFO Database Record

  4. Functional connectivity when detecting rare visual targets in schizophrenia.

    PubMed

    Jimenez, Amy M; Lee, Junghee; Green, Michael F; Wynn, Jonathan K

    2017-03-30

    Individuals with schizophrenia demonstrate difficulties in attending to important stimuli (e.g., targets) and ignoring distractors (e.g., non-targets). We used a visual oddball task during fMRI to examine functional connectivity within and between the ventral and dorsal attention networks to determine the relative contribution of each network to detection of rare visual targets in schizophrenia. The sample comprised 25 schizophrenia patients and 27 healthy controls. Psychophysiological interaction analysis was used to examine whole-brain functional connectivity in response to targets. We used the right temporo parietal junction (TPJ) as the seed region for the ventral network and the right medial intraparietal sulcus (IPS) as the seed region for the dorsal network. We found that connectivity between right IPS and right anterior insula (AI; a component of the ventral network) was significantly greater in controls than patients. Expected patterns of within- and between-network connectivity for right TPJ were observed in controls, and not significantly different in patients. These findings indicate functional connectivity deficits between the dorsal and ventral attention networks in schizophrenia that may create problems in processing relevant versus irrelevant stimuli. Understanding the nature of network disruptions underlying cognitive deficits of schizophrenia may help shed light on the pathophysiology of this disorder.

  5. Orbitofrontal Cortex and the Early Processing of Visual Novelty in Healthy Aging

    PubMed Central

    Kaufman, David A. S.; Keith, Cierra M.; Perlstein, William M.

    2016-01-01

    Event-related potential (ERP) studies have previously found that scalp topographies of attention-related ERP components show frontal shifts with age, suggesting an increased need for compensatory frontal activity to assist with top-down facilitation of attention. However, the precise neural time course of top-down attentional control in aging is not clear. In this study, 20 young (mean: 22 years) and 14 older (mean: 64 years) adults completed a three-stimulus visual oddball task while high-density ERPs were acquired. Colorful, novel distracters were presented to engage early visual processing. Relative to young controls, older participants exhibited elevations in occipital early posterior positivity (EPP), approximately 100 ms after viewing colorful distracters. Neural source models for older adults implicated unique patterns of orbitofrontal cortex (OFC; BA 11) activity during early visual novelty processing (100 ms), which was positively correlated with subsequent activations in primary visual cortex (BA 17). Older adult EPP amplitudes and OFC activity were associated with performance on tests of complex attention and executive function. These findings are suggestive of age-related, compensatory neural changes that may be driven by a combination of weaker cortical efficiency and increased need for top-down control over attention. Accordingly, enhanced early OFC activity during visual attention may serve as an important indicator of frontal lobe integrity in healthy aging. PMID:27199744

  6. Visual Impairment

    MedlinePlus

    ... or head with a baseball or having an automobile or motorcycle accident. Some babies have congenital blindness , ... how well he or she sees at various distances. Visual field test. Ophthalmologists use this test to ...

  7. Visual cognition

    SciTech Connect

    Pinker, S.

    1985-01-01

    This book consists of essays covering issues in visual cognition, presenting experimental techniques from cognitive psychology, methods of modeling cognitive processes on computers from artificial intelligence, and methods of studying brain organization from neuropsychology. Topics considered include: parts of recognition; visual routines; upward direction; mental rotation and discrimination of left and right turns in maps; individual differences in mental imagery; computational analysis and the neurological basis of mental imagery; and componential analysis.

  8. Is P3 a strategic or a tactical component? Relationships of P3 sub-components to response times in oddball tasks with go, no-go and choice responses.

    PubMed

    Verleger, Rolf; Grauhan, Nils; Śmigasiewicz, Kamila

    2016-12-01

    P3 (viz. P300) is a most prominent component of event-related EEG potentials recorded during task performance. There has been long-standing debate about whether the process reflected by P3 is tactical or strategic, i.e., required for making the present response or constituting some overarching process. Here, we used residue iteration decomposition (RIDE) to delineate P3 subcomponents time-locked to responses and tested for the temporal relations between P3 components and response times (RTs). Data were obtained in oddball tasks (i.e., tasks presenting two stimuli, one rarely and one frequently) with rare and frequent go, no-go, or choice responses (CRs). As usual, rare-go P3s were large at Pz and rare no-go P3s at FCz. Notably, P3s evoked with rare CRs were large at either site, probably comprising both go and no-go P3. Throughout, with frequent and rare responses, P3 latencies coincided with RTs. RIDE decomposed P3 complexes into a large CPz-focused C-P3 and an earlier Pz-focused response-locked R-P3. R-P3 had an additional large fronto-central focus with rare CRs, modeling the no-go-P3 part, suggesting that the process reflected by no-go P3 is tightly time-locked to making the alternative response. R-P3 coincided with the fast RTs to frequent stimuli and C-P3 coincided with the slower RTs to rare stimuli. Thus, the process reflected by C-P3 might be required for responding to rare events, but not to frequent ones. We argue that these results are nevertheless compatible with a tactical role of P3 because responses may not be contingent on stimulus analysis with frequent stimuli.

  9. Visual Prosthesis

    PubMed Central

    Schiller, Peter H.; Tehovnik, Edward J.

    2009-01-01

    There are more than 40 million blind individuals in the world whose plight would be greatly ameliorated by creating a visual prosthetic. We begin by outlining the basic operational characteristics of the visual system as this knowledge is essential for producing a prosthetic device based on electrical stimulation through arrays of implanted electrodes. We then list a series of tenets that we believe need to be followed in this effort. Central among these is our belief that the initial research in this area, which is in its infancy, should first be carried out in animals. We suggest that implantation of area V1 holds high promise as the area is of a large volume and can therefore accommodate extensive electrode arrays. We then proceed to consider coding operations that can effectively convert visual images viewed by a camera to stimulate electrode arrays to yield visual impressions that can provide shape, motion and depth information. We advocate experimental work that mimics electrical stimulation effects non-invasively in sighted human subjects using a camera from which visual images are converted into displays on a monitor akin to those created by electrical stimulation. PMID:19065857

  10. Visual cognition

    SciTech Connect

    Pinker, S.

    1985-01-01

    This collection of research papers on visual cognition first appeared as a special issue of Cognition: International Journal of Cognitive Science. The study of visual cognition has seen enormous progress in the past decade, bringing important advances in our understanding of shape perception, visual imagery, and mental maps. Many of these discoveries are the result of converging investigations in different areas, such as cognitive and perceptual psychology, artificial intelligence, and neuropsychology. This volume is intended to highlight a sample of work at the cutting edge of this research area for the benefit of students and researchers in a variety of disciplines. The tutorial introduction that begins the volume is designed to help the nonspecialist reader bridge the gap between the contemporary research reported here and earlier textbook introductions or literature reviews.

  11. Ignition's glow: Ultra-fast spread of global cortical activity accompanying local "ignitions" in visual cortex during conscious visual perception.

    PubMed

    Noy, N; Bickel, S; Zion-Golumbic, E; Harel, M; Golan, T; Davidesco, I; Schevon, C A; McKhann, G M; Goodman, R R; Schroeder, C E; Mehta, A D; Malach, R

    2015-09-01

    Despite extensive research, the spatiotemporal span of neuronal activations associated with the emergence of a conscious percept is still debated. The debate can be formulated in the context of local vs. global models, emphasizing local activity in visual cortex vs. a global fronto-parietal "workspace" as the key mechanisms of conscious visual perception. These alternative models lead to differential predictions with regard to the precise magnitude, timing and anatomical spread of neuronal activity during conscious perception. Here we aimed to test a specific aspect of these predictions in which local and global models appear to differ - namely the extent to which fronto-parietal regions modulate their activity during task performance under similar perceptual states. So far the main experimental results relevant to this debate have been obtained from non-invasive methods and led to conflicting interpretations. Here we examined these alternative predictions through large-scale intracranial measurements (Electrocorticogram - ECoG) in 43 patients and 4445 recording sites. Both ERP and broadband high frequency (50-150 Hz - BHF) responses were examined through the entire cortex during a simple 1-back visual recognition memory task. Our results reveal short latency intense visual responses, localized first in early visual cortex followed (at ∼200 ms) by higher order visual areas, but failed to show significant delayed (300 ms) visual activations. By contrast, oddball image repeat events, linked to overt motor responses, were associated with a significant increase in a delayed (300 ms) peak of BHF power in fronto-parietal cortex. Comparing BHF responses with ERP revealed an additional peak in the ERP response - having a similar latency to the well-studied P3 scalp EEG response. Posterior and temporal regions demonstrated robust visual category selectivity. An unexpected observation was that high-order visual cortex responses were essentially concurrent (at ∼200 ms

  12. Interactions between P300 and passive probe responses differ in different visual cortical areas.

    PubMed

    Michalski, A

    2001-01-01

    The regulation of firing thresholds of cortical neurons was suggested as one of the mechanisms underlying the generation of the P300 component in the human event-related potential. According to this hypothesis, the detection of an important stimulus produced the widespread inhibition of "irrelevant" networks, interrupting their ongoing activity and facilitating the analysis of selected information. In the present experiment, the responsiveness of visual cortex was evaluated during the P300 potential by using additional, probing stimuli. Large separation of the cortical visual fields permitted separate analysis of the input and more advanced stages of processing. Responses were recorded from Fz, Cz, Pz and Oz scalp sites. P300 waves were evoked by visual, mentally counted stimuli in a standard "odd-ball" procedure. Visual probes were delivered 200, 300, 400, 500, 700 and 1000 ms later. No responses to the probes were required. Significant suppression of responses to the probes delivered less than 400 ms after target stimuli was found in Oz and Pz but not in Cz or Fz. The suppression was not proportional to the voltage levels from which probe responses started. In Fz and Cz, latencies of probe responses were elongated if probes were delivered less than 400 ms after target stimuli. The results suggest that probe responses suppressed by the P300 potential in occipital and parietal cortex may be restored in frontal areas. In these areas the P300 potential could delay probe responses instead of suppressing them.

  13. Visualizing inequality

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2016-07-01

    The study of socioeconomic inequality is of substantial importance, scientific and general alike. The graphic visualization of inequality is commonly conveyed by Lorenz curves. While Lorenz curves are a highly effective statistical tool for quantifying the distribution of wealth in human societies, they are a less effective tool for the visual depiction of socioeconomic inequality. This paper introduces an alternative to Lorenz curves: the hill curves. On the one hand, the hill curves are a potent scientific tool: they provide detailed scans of the rich-poor gaps in human societies under consideration, and are capable of accommodating infinitely many degrees of freedom. On the other hand, the hill curves are a powerful infographic tool: they visualize inequality in a most vivid and tangible way, with no quantitative skills required in order to grasp the visualization. The application of hill curves extends far beyond socioeconomic inequality. Indeed, the hill curves are highly effective 'hyperspectral' measures of statistical variability that are applicable in the context of size distributions at large. This paper establishes the notion of hill curves, analyzes them, and describes their application in the context of general size distributions.
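
    Because the abstract takes the Lorenz curve as its point of comparison, a minimal sketch of the standard Lorenz-curve computation is given below for orientation; the hill curves themselves are not specified in the abstract and are not reproduced here, and the wealth values are toy numbers.

    ```python
    # Minimal sketch of the standard Lorenz-curve computation (toy data).
    import numpy as np

    def lorenz_curve(wealth):
        """Return cumulative population share vs. cumulative wealth share."""
        w = np.sort(np.asarray(wealth, dtype=float))
        cum_wealth = np.cumsum(w)
        x = np.arange(1, len(w) + 1) / len(w)   # cumulative population share
        y = cum_wealth / cum_wealth[-1]         # cumulative wealth share
        return np.insert(x, 0, 0.0), np.insert(y, 0, 0.0)

    # Toy "society" with a highly unequal wealth distribution.
    x, y = lorenz_curve([1, 1, 2, 3, 5, 8, 13, 21, 34, 55])
    print(list(zip(x.round(2), y.round(2))))
    ```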

  14. Visual mismatch negativity to masked stimuli presented at very brief presentation rates.

    PubMed

    Flynn, Maria; Liasis, Alki; Gardner, Mark; Towell, Tony

    2017-02-01

    Mismatch negativity (MMN) has been characterised as a 'pre-attentive' component of an event-related potential (ERP) that is related to discrimination and error prediction processes. The aim of the current experiment was to establish whether visual MMN could be recorded to briefly presented, backward- and forward-masked visual stimuli presented both below and above the level of subjective experience. Elicitation of visual MMN in the absence of the ability to consciously report stimuli would provide strong evidence for the automaticity of the visual MMN mechanism. Using an oddball paradigm, two stimuli that differed in orientation from each other, a + and an ×, were presented on a computer screen. Electroencephalogram (EEG) was recorded from nine participants (six females), mean age 21.4 years. Results showed that for stimuli that were effectively masked at 7 ms presentation, there was little variation in the ERPs evoked by standard and deviant stimuli or in the subtraction waveform employed to delineate the visual MMN. At 14 ms stimulus presentation, when participants were able to report stimulus presence, an enhanced negativity at around 175 and 305 ms was observed in response to the deviant and was evident in the subtraction waveform. However, some of the difference observed in the ERPs can be attributed to stimulus characteristics, as the use of a 'lonely' deviant protocol revealed attenuated visual MMN components at 14 ms stimulus presentation. Overall, results suggest that some degree of conscious attention is required before visual MMN components emerge, suggesting visual MMN is not an entirely pre-attentive process.
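
    The "subtraction waveform" referred to here is the conventional deviant-minus-standard difference wave. The sketch below is a hedged illustration on synthetic data, not the study's recordings; the sampling rate, trial counts, and the injected effect are all assumptions.

    ```python
    # Hedged sketch: deviant-minus-standard difference wave on synthetic epochs.
    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_samples = 100, 400                 # e.g., 400 samples at 500 Hz = 800 ms epochs
    standard_epochs = rng.normal(0.0, 1.0, (n_trials, n_samples))
    deviant_epochs = rng.normal(0.0, 1.0, (n_trials, n_samples))
    deviant_epochs[:, 90:155] -= 1.0               # inject an extra negativity in a mid-latency window

    standard_erp = standard_epochs.mean(axis=0)    # trial-averaged ERPs
    deviant_erp = deviant_epochs.mean(axis=0)
    difference_wave = deviant_erp - standard_erp   # the subtraction waveform (vMMN estimate)
    print("Most negative difference (a.u.):", round(float(difference_wave.min()), 2))
    ```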

  15. Auditory and Visual Electrophysiology of Deaf Children with Cochlear Implants: Implications for Cross-modal Plasticity.

    PubMed

    Corina, David P; Blau, Shane; LaMarr, Todd; Lawyer, Laurel A; Coffey-Corina, Sharon

    2017-01-01

    Deaf children who receive a cochlear implant early in life and engage in intensive oral/aural therapy often make great strides in spoken language acquisition. However, despite clinicians' best efforts, there is a great deal of variability in language outcomes. One concern is that cortical regions which normally support auditory processing may become reorganized for visual function, leaving fewer available resources for auditory language acquisition. The conditions under which these changes occur are not well understood, but we may begin investigating this phenomenon by looking for interactions between auditory and visual evoked cortical potentials in deaf children. If children with abnormal auditory responses show increased sensitivity to visual stimuli, this may indicate the presence of maladaptive cortical plasticity. We recorded evoked potentials, using both auditory and visual paradigms, from 25 typical hearing children and 26 deaf children (ages 2-8 years) with cochlear implants. An auditory oddball paradigm was used (85% /ba/ syllables vs. 15% frequency modulated tone sweeps) to elicit an auditory P1 component. Visual evoked potentials (VEPs) were recorded during presentation of an intermittent peripheral radial checkerboard while children watched a silent cartoon, eliciting a P1-N1 response. We observed reduced auditory P1 amplitudes and a lack of latency shift associated with normative aging in our deaf sample. We also observed shorter latencies in N1 VEPs to visual stimulus offset in deaf participants. While these data demonstrate cortical changes associated with auditory deprivation, we did not find evidence for a relationship between cortical auditory evoked potentials and the VEPs. This is consistent with descriptions of intra-modal plasticity within visual systems of deaf children, but do not provide evidence for cross-modal plasticity. In addition, we note that sign language experience had no effect on deaf children's early auditory and visual ERP

  16. Auditory and Visual Electrophysiology of Deaf Children with Cochlear Implants: Implications for Cross-modal Plasticity

    PubMed Central

    Corina, David P.; Blau, Shane; LaMarr, Todd; Lawyer, Laurel A.; Coffey-Corina, Sharon

    2017-01-01

    Deaf children who receive a cochlear implant early in life and engage in intensive oral/aural therapy often make great strides in spoken language acquisition. However, despite clinicians’ best efforts, there is a great deal of variability in language outcomes. One concern is that cortical regions which normally support auditory processing may become reorganized for visual function, leaving fewer available resources for auditory language acquisition. The conditions under which these changes occur are not well understood, but we may begin investigating this phenomenon by looking for interactions between auditory and visual evoked cortical potentials in deaf children. If children with abnormal auditory responses show increased sensitivity to visual stimuli, this may indicate the presence of maladaptive cortical plasticity. We recorded evoked potentials, using both auditory and visual paradigms, from 25 typical hearing children and 26 deaf children (ages 2–8 years) with cochlear implants. An auditory oddball paradigm was used (85% /ba/ syllables vs. 15% frequency modulated tone sweeps) to elicit an auditory P1 component. Visual evoked potentials (VEPs) were recorded during presentation of an intermittent peripheral radial checkerboard while children watched a silent cartoon, eliciting a P1–N1 response. We observed reduced auditory P1 amplitudes and a lack of latency shift associated with normative aging in our deaf sample. We also observed shorter latencies in N1 VEPs to visual stimulus offset in deaf participants. While these data demonstrate cortical changes associated with auditory deprivation, we did not find evidence for a relationship between cortical auditory evoked potentials and the VEPs. This is consistent with descriptions of intra-modal plasticity within visual systems of deaf children, but do not provide evidence for cross-modal plasticity. In addition, we note that sign language experience had no effect on deaf children’s early auditory and visual

  17. Visual Task Demands and the Auditory Mismatch Negativity: An Empirical Study and a Meta-Analysis

    PubMed Central

    Wiens, Stefan; Szychowska, Malina; Nilsson, Mats E.

    2016-01-01

    Because the auditory system is particularly useful in monitoring the environment, previous research has examined whether task-irrelevant, auditory distracters are processed even if subjects focus their attention on visual stimuli. This research suggests that attentionally demanding visual tasks decrease the auditory mismatch negativity (MMN) to simultaneously presented auditory distractors. Because a recent behavioral study found that high visual perceptual load decreased detection sensitivity of simultaneous tones, we used a similar task (n = 28) to determine if high visual perceptual load would reduce the auditory MMN. Results suggested that perceptual load did not decrease the MMN. At face value, these nonsignificant findings may suggest that effects of perceptual load on the MMN are smaller than those of other demanding visual tasks. If so, effect sizes should differ systematically between the present and previous studies. We conducted a selective meta-analysis of published studies in which the MMN was derived from the EEG, the visual task demands were continuous and varied between high and low within the same task, and the task-irrelevant tones were presented in a typical oddball paradigm simultaneously with the visual stimuli. Because the meta-analysis suggested that the present (null) findings did not differ systematically from previous findings, the available evidence was combined. Results of this meta-analysis confirmed that demanding visual tasks reduce the MMN to auditory distracters. However, because the meta-analysis was based on small studies and because of the risk for publication biases, future studies should be preregistered with large samples (n > 150) to provide confirmatory evidence for the results of the present meta-analysis. These future studies should also use control conditions that reduce confounding effects of neural adaptation, and use load manipulations that are defined independently from their effects on the MMN. PMID:26741815

  18. Visual processing affects the neural basis of auditory discrimination.

    PubMed

    Kislyuk, Daniel S; Möttönen, Riikka; Sams, Mikko

    2008-12-01

    The interaction between auditory and visual speech streams is a seamless and surprisingly effective process. An intriguing example is the "McGurk effect": The acoustic syllable /ba/ presented simultaneously with a mouth articulating /ga/ is typically heard as /da/ [McGurk, H., & MacDonald, J. Hearing lips and seeing voices. Nature, 264, 746-748, 1976]. Previous studies have demonstrated the interaction of auditory and visual streams at the auditory cortex level, but the importance of these interactions for the qualitative perception change remained unclear because the change could result from interactions at higher processing levels as well. In our electroencephalogram experiment, we combined the McGurk effect with mismatch negativity (MMN), a response that is elicited in the auditory cortex at a latency of 100-250 msec by any above-threshold change in a sequence of repetitive sounds. An "odd-ball" sequence of acoustic stimuli consisting of frequent /va/ syllables (standards) and infrequent /ba/ syllables (deviants) was presented to 11 participants. Deviant stimuli in the unisensory acoustic stimulus sequence elicited a typical MMN, reflecting discrimination of acoustic features in the auditory cortex. When the acoustic stimuli were dubbed onto a video of a mouth constantly articulating /va/, the deviant acoustic /ba/ was heard as /va/ due to the McGurk effect and was indistinguishable from the standards. Importantly, such deviants did not elicit MMN, indicating that the auditory cortex failed to discriminate between the acoustic stimuli. Our findings show that visual stream can qualitatively change the auditory percept at the auditory cortex level, profoundly influencing the auditory cortex mechanisms underlying early sound discrimination.

  19. Camouflage Visualization

    DTIC Science & Technology

    1992-04-07

    Results of the test are difficult to quantify and to compare with previous observations. The more recent need to develop camouflage measures to defeat... Final Report: Camouflage Visualization System ...atmospheric attenuation models, such as LOWTRAN. As a reference, the calculated results are compared to LOWTRAN predictions to show the performance of our

  20. Battlefield Visualization

    DTIC Science & Technology

    2007-11-02

    A study analyzing battlefield visualization (BV) as a component of information dominance and superiority. This study outlines basic requirements for effective BV in terms of terrain data, information systems (synthetic environment; COA development and analysis tools) and BV development management, with a focus on technology insertion strategies. This study also reports on existing BV systems and provides 16 recommendations for Army BV support efforts, including interested organization, funding levels and duration of effort for each recommended action.

  1. Visualizing Progress

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Reality Capture Technologies, Inc. is a spinoff company from Ames Research Center. Offering e-business solutions for optimizing management, design and production processes, RCT uses visual collaboration environments (VCEs) such as those used to prepare the Mars Pathfinder mission. The product, 4-D Reality Framework, allows multiple users from different locations to manage and share data. The insurance industry is one targeted commercial application for this technology.

  2. Flow visualization

    NASA Technical Reports Server (NTRS)

    Weinstein, Leonard M.

    1991-01-01

    Flow visualization techniques are reviewed, with particular attention given to those applicable to liquid helium flows. Three techniques capable of obtaining qualitative and quantitative measurements of complex 3D flow fields are discussed, including focusing schlieren, particle image velocimetry, and holocinematography (HCV). It is concluded that HCV appears to be uniquely capable of obtaining full time-varying, 3D velocity field data, but is limited to the low speeds typical of liquid helium facilities.

  3. Flow visualization

    NASA Astrophysics Data System (ADS)

    Weinstein, Leonard M.

    Flow visualization techniques are reviewed, with particular attention given to those applicable to liquid helium flows. Three techniques capable of obtaining qualitative and quantitative measurements of complex 3D flow fields are discussed, including focusing schlieren, particle image velocimetry, and holocinematography (HCV). It is concluded that HCV appears to be uniquely capable of obtaining full time-varying, 3D velocity field data, but is limited to the low speeds typical of liquid helium facilities.

  4. An electrophysiological study of visual processing in spinocerebellar ataxia type 2 (SCA2).

    PubMed

    Kremlacek, Jan; Valis, Martin; Masopust, Jiri; Urban, Ales; Zumrova, Alena; Talab, Radomir; Kuba, Miroslav; Kubova, Zuzana; Langrova, Jana

    2011-03-01

    Visual functional impairment in spinocerebellar ataxia type 2 (SCA2) has been studied previously using pattern-reversal visually evoked potentials (VEPs), with contradictory results. To provide additional evidence in this area, visual functions were studied using VEPs and event-related potentials (ERPs) in a group of ten patients with genetically verified SCA2. The electrophysiological examination included pattern reversal and motion-onset VEPs as well as visually driven oddball ERPs with an evaluation of a target and a pre-attentive response. In six patients, we found abnormal visual/cognitive processing that differed from normal values in latency, but not in the amplitude of the dominant VEP/ERP peaks. Among the VEPs/ERPs used, the motion-onset VEPs exhibited the highest sensitivity and showed a strong Spearman correlation to SCA2 duration (from r = 0.82 to r = 0.90, p < 0.001) and clinical state assessed by the Brief Ataxia Rating Scale (from r = 0.71 (p = 0.022) to r = 0.80 (p < 0.001)). None of the VEP/ERP latencies showed a correlation to the triplet repeats of the SCA2 gene. In three patients, we did not find any visual/cognitive pathology, and one subject showed only a single subtle prolongation of the VEP peak. The observed visual/cognitive deficit was related to the subjects' clinical state and the illness duration, but no relationship to the genetic marker of SCA2 was found. Of the VEP/ERP types used, the motion-onset VEPs seem to be the most promising candidate for clinical state monitoring rather than a tool for early diagnostic use.

  5. When Elderly Outperform Young Adults—Integration in Vision Revealed by the Visual Mismatch Negativity

    PubMed Central

    Gaál, Zsófia Anna; Bodnár, Flóra; Czigler, István

    2017-01-01

    We studied the possibility of age-related differences in visual integration at an automatic and at a task-related level. Data from 15 young (21.9 ± 1.8 years) and 15 older (66.6 ± 3.5 years) women were analyzed in our experiment. Automatic processing was investigated in a passive oddball paradigm, and the visual mismatch negativity (vMMN) of event-related brain potentials was measured. Letters and pseudo-letters were presented either as single characters, or the characters were presented successively in two fragments. When the two fragments were presented simultaneously (whole character), vMMN emerged in both age groups. However, with successive presentation, vMMN was elicited only by the deviant pseudo-letters, and only in the older group. The longest stimulus onset asynchrony (SOA) in this group was 50 ms, indicating longer information persistence in the elderly. In a psychophysical experiment, the task was to indicate which member of a character pair was a legal letter. Again, the letters and pseudo-letters were presented as fragments. We obtained successful integration at 30 ms (0 ms interstimulus interval), but not at longer SOAs, in both age groups, showing that at the task-relevant level of stimulation there was no detectable age-related performance difference. We interpret these results as indicating that the efficiency of local inhibitory circuits is compromised in the elderly, leading to longer stimulus persistence and hence, in this particular case, better visual perception. PMID:28197097

  6. Visual bioethics.

    PubMed

    Lauritzen, Paul

    2008-12-01

    Although images are pervasive in public policy debates in bioethics, few who work in the field attend carefully to the way that images function rhetorically. If the use of images is discussed at all, it is usually to dismiss appeals to images as a form of manipulation. Yet it is possible to speak meaningfully of visual arguments. Examining the appeal to images of the embryo and fetus in debates about abortion and stem cell research, I suggest that bioethicists would be well served by attending much more carefully to how images function in public policy debates.

  7. Visualization rhetoric: framing effects in narrative visualization.

    PubMed

    Hullman, Jessica; Diakopoulos, Nicholas

    2011-12-01

    Narrative visualizations combine conventions of communicative and exploratory information visualization to convey an intended story. We demonstrate visualization rhetoric as an analytical framework for understanding how design techniques that prioritize particular interpretations in visualizations that "tell a story" can significantly affect end-user interpretation. We draw a parallel between narrative visualization interpretation and evidence from framing studies in political messaging, decision-making, and literary studies. Devices for understanding the rhetorical nature of narrative information visualizations are presented, informed by the rigorous application of concepts from critical theory, semiotics, journalism, and political theory. We draw attention to how design tactics represent additions or omissions of information at various levels-the data, visual representation, textual annotations, and interactivity-and how visualizations denote and connote phenomena with reference to unstated viewing conventions and codes. Classes of rhetorical techniques identified via a systematic analysis of recent narrative visualizations are presented, and characterized according to their rhetorical contribution to the visualization. We describe how designers and researchers can benefit from the potentially positive aspects of visualization rhetoric in designing engaging, layered narrative visualizations and how our framework can shed light on how a visualization design prioritizes specific interpretations. We identify areas where future inquiry into visualization rhetoric can improve understanding of visualization interpretation.

  8. Trench Visualization

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image shows oblique views of NASA's Phoenix Mars Lander's trench visualized using the NASA Ames Viz software package that allows interactive movement around terrain and measurement of features. The Surface Stereo Imager images are used to create a digital elevation model of the terrain. The trench is 1.5 inches deep. The top image was taken on the seventh Martian day of the mission, or Sol 7 (June 1, 2008). The bottom image was taken on the ninth Martian day of the mission, or Sol 9 (June 3, 2008).

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  9. Why Teach Visual Culture?

    ERIC Educational Resources Information Center

    Passmore, Kaye

    2007-01-01

    Visual culture is a hot topic in art education right now as some teachers are dedicated to teaching it and others are adamant that it has no place in a traditional art class. Visual culture, the author asserts, can include just about anything that is visually represented. Although people often think of visual culture as contemporary visuals such…

  10. Cortical Visual Impairment

    MedlinePlus

    ... Frequently Asked Questions Español Condiciones Chinese Conditions Cortical Visual Impairment En Español Read in Chinese What is cortical visual impairment? Cortical visual impairment (CVI) is a decreased ...

  11. Learning Visualizations by Analogy: Promoting Visual Literacy through Visualization Morphing.

    PubMed

    Ruchikachorn, Puripant; Mueller, Klaus

    2015-09-01

    We propose the concept of teaching (and learning) unfamiliar visualizations by analogy, that is, demonstrating an unfamiliar visualization method by linking it to another more familiar one, where the in-betweens are designed to bridge the gap between these two visualizations and explain the difference in a gradual manner. As opposed to a textual description, our morphing explains an unfamiliar visualization through purely visual means. We demonstrate our idea by way of four visualization pair examples: data table and parallel coordinates, scatterplot matrix and hyperbox, linear chart and spiral chart, and hierarchical pie chart and treemap. The analogy is commutative, i.e., any member of the pair can be the unfamiliar visualization. A series of studies showed that this new paradigm can be an effective teaching tool. The participants could understand the unfamiliar visualization methods in all of the four pairs either fully or at least significantly better after they observed or interacted with the transitions from the familiar counterpart. The four examples suggest how helpful visualization pairings can be identified, and they will hopefully inspire the identification of other visualization morphings and associated transition strategies.

  12. Beyond Visual Communication Technology.

    ERIC Educational Resources Information Center

    Bell, Thomas P.

    1993-01-01

    Discusses various aspects of visual communication--light, semiotics, codes, photography, typography, and visual literacy--within the context of the communications technology area of technology education. (SK)

  13. Snowflake Visualization

    NASA Astrophysics Data System (ADS)

    Bliven, L. F.; Kucera, P. A.; Rodriguez, P.

    2010-12-01

    NASA Snowflake Video Imagers (SVIs) enable snowflake visualization at diverse field sites. The natural variability of frozen precipitation is a complicating factor for remote sensing retrievals in high latitude regions. Particle classification is important for understanding snow/ice physics, remote sensing polarimetry, bulk radiative properties, surface emissivity, and ultimately, precipitation rates and accumulations. Yet intermittent storms, low temperatures, high winds, remote locations and complex terrain can impede us from observing falling snow in situ. SVI hardware and software have some special features. The standard camera and optics yield 8-bit gray-scale images with resolution of 0.05 x 0.1 mm, at 60 frames per second. Gray-scale images are highly desirable because they display contrast that aids particle classification. Black and white (1-bit) systems display no contrast, so there is less information to recognize particle types, which is particularly burdensome for aggregates. Data are analyzed at one-minute intervals using NASA's Precipitation Link Software that produces (a) Particle Catalogs and (b) Particle Size Distributions (PSDs). SVIs can operate nearly continuously for long periods (e.g., an entire winter season), so natural variability can be documented. Let’s summarize results from field studies this past winter and review some recent SVI enhancements. During the winter of 2009-2010, SVIs were deployed at two sites. One SVI supported weather observations during the 2010 Winter Olympics and Paralympics. It was located close to the summit (Roundhouse) of Whistler Mountain, near the town of Whistler, British Columbia, Canada. In addition, two SVIs were located at the King City Weather Radar Station (WKR) near Toronto, Ontario, Canada. Access was prohibited to the SVI on Whistler Mountain during the Olympics due to security concerns. So to meet the schedule for daily data products, we operated the SVI by remote control. We also upgraded the
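
    To make the one-minute Particle Size Distribution (PSD) product concrete, here is a hedged sketch that bins synthetic particle sizes into a PSD. NASA's Precipitation Link Software is not described in detail in this abstract, so the bin width, sampled volume, and units below are illustrative assumptions.

    ```python
    # Hedged sketch: binning synthetic particle sizes from one minute of video
    # into a simple particle size distribution (all parameters are assumptions).
    import numpy as np

    rng = np.random.default_rng(1)
    sizes_mm = rng.gamma(shape=2.0, scale=0.8, size=3000)      # synthetic maximum dimensions, mm
    bin_edges_mm = np.arange(0.0, 10.5, 0.5)                   # 0.5 mm bins
    counts, _ = np.histogram(sizes_mm, bins=bin_edges_mm)

    sample_volume_m3 = 0.05                                    # hypothetical sampled volume per minute
    psd = counts / (sample_volume_m3 * np.diff(bin_edges_mm))  # concentration per m^3 per mm
    for lo, n in zip(bin_edges_mm[:-1], psd):
        print(f"{lo:4.1f}-{lo + 0.5:4.1f} mm: {n:10.1f} m^-3 mm^-1")
    ```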

  14. Spelling: A Visual Skill.

    ERIC Educational Resources Information Center

    Hendrickson, Homer

    1988-01-01

    Spelling problems arise due to problems with form discrimination and inadequate visualization. A child's sequence of visual development involves learning motor control and coordination, with vision directing and monitoring the movements; learning visual comparison of size, shape, directionality, and solidity; developing visual memory or recall;…

  15. Declarative Visualization Queries

    NASA Astrophysics Data System (ADS)

    Pinheiro da Silva, P.; Del Rio, N.; Leptoukh, G. G.

    2011-12-01

    In an ideal interaction with machines, scientists may prefer to write declarative queries saying "what" they want from a machine rather than to write code stating "how" the machine is going to address the request. For example, in relational databases, users have long relied on specifying queries using Structured Query Language (SQL), a declarative language for requesting data results from a database management system. In the context of visualizations, we see that users are still writing code based on complex visualization toolkit APIs. With the goal of improving the scientists' experience of using visualization technology, we have applied this query-answering pattern to a visualization setting, where scientists specify what visualizations they want generated using a declarative SQL-like notation. A knowledge-enhanced management system ingests the query and knows (1) how to translate the query into visualization pipelines and (2) how to execute the visualization pipelines to generate the requested visualization. We define visualization queries as declarative requests for visualizations specified in an SQL-like language. Visualization queries specify what category of visualization to generate (e.g., volumes, contours, surfaces) as well as associated display attributes (e.g., color and opacity), without any regard for implementation, thus allowing scientists to remain partially unaware of a wide range of toolkit-specific implementation details (e.g., for the Generic Mapping Tools or the Visualization Toolkit). Implementation details are only a concern for our knowledge-based visualization management system, which uses both the information specified in the query and knowledge about visualization toolkit functions to construct visualization pipelines. Knowledge about the use of visualization toolkits includes what data formats a toolkit operates on, what formats it outputs, and what views it can generate. Visualization knowledge, which is not
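
    As a purely hypothetical illustration of what such an SQL-like declarative visualization request might look like: the paper's actual query language is not given in this abstract, so every keyword, field, and dataset name below is invented to convey the flavor of the approach, not its real syntax.

    ```python
    # Hypothetical example only: an invented SQL-like visualization query held as a
    # string; the keywords, dataset, and attributes are illustrative, not the paper's.
    visualization_query = """
    VISUALIZE CONTOURS                -- category of visualization (volumes, contours, surfaces, ...)
    FROM gravity_dataset              -- hypothetical dataset name
    WHERE region = 'Rio Grande'       -- hypothetical selection predicate
    WITH color = 'rainbow',           -- display attributes, as described in the abstract
         opacity = 0.6
    """
    print(visualization_query)
    ```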

  16. The Visual Analysis of Visual Metaphor.

    ERIC Educational Resources Information Center

    Dake, Dennis M.; Roberts, Brian

    This paper presents an approach to understanding visual metaphor which uses metaphoric analysis and comprehension by graphic and pictorial means. The perceptible qualities of shape, line, form, color, and texture, that make up the visual structure characteristic of any particular shape, configuration, or scene, are called physiognomic properties;…

  17. Visualizer cognitive style enhances visual creativity.

    PubMed

    Palmiero, Massimiliano; Nori, Raffaella; Piccardi, Laura

    2016-02-26

    In the last two decades, interest in creativity has increased significantly, since it is recognized as a skill and a cognitive reserve and is now increasingly used in ageing training. Here, the relationships between visual creativity and Visualization-Verbalization cognitive style were investigated. Fifty college students were administered the Creative Synthesis Task, aimed at measuring the ability to construct creative objects, and the Visualization-Verbalization Questionnaire (VVQ), aimed at measuring the tendency to preferentially use either an imagery or a verbal strategy while processing information. Analyses showed that only the originality score of inventions was positively predicted by the VVQ score: a higher VVQ score (indicating a preference for imagery) predicted the originality of inventions. These results showed that the visualization strategy is involved especially in the originality dimension of creative object production. In light of neuroimaging results, the possibility that different strategies, such as those that involve motor processes, affect visual creativity is also discussed.

  18. Visual acuity test

    MedlinePlus

    ... this page: //medlineplus.gov/ency/article/003396.htm Visual acuity test To use the sharing features on this page, please enable JavaScript. The visual acuity test is used to determine the smallest ...

  19. Topological Methods for Visualization

    SciTech Connect

    Berres, Anne Sabine

    2016-04-07

    This slide presentation describes basic topological concepts, including topological spaces, homeomorphisms, homotopy, and Betti numbers. Scalar field topology covers finding topological features and scalar field visualization, and vector field topology covers finding topological features and vector field visualization.

  20. Visualizing Knowledge Domains.

    ERIC Educational Resources Information Center

    Borner, Katy; Chen, Chaomei; Boyack, Kevin W.

    2003-01-01

    Reviews visualization techniques for scientific disciplines and information retrieval and classification. Highlights include historical background of scientometrics, bibliometrics, and citation analysis; map generation; process flow of visualizing knowledge domains; measures and similarity calculations; vector space model; factor analysis;…

  1. Recording visual evoked potentials and auditory evoked P300 at 9.4T static magnetic field.

    PubMed

    Arrubla, Jorge; Neuner, Irene; Hahn, David; Boers, Frank; Shah, N Jon

    2013-01-01

    Simultaneous recording of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) has shown a number of advantages that make this multimodal technique superior to fMRI alone. The feasibility of recording EEG at ultra-high static magnetic field up to 9.4 T was recently demonstrated and promises to be implemented soon in fMRI studies at ultra-high magnetic fields. Recording visual evoked potentials is expected to be amongst the simplest for simultaneous EEG/fMRI at ultra-high magnetic field due to the easy assessment of the visual cortex. Auditory evoked P300 measurements are of interest since it is believed that they represent the earliest stage of cognitive processing. In this study, we investigate the feasibility of recording visual evoked potentials and auditory evoked P300 in a 9.4 T static magnetic field. For this purpose, EEG data were recorded from 26 healthy volunteers inside a 9.4 T MR scanner using a 32-channel MR compatible EEG system. Visual stimulation and an auditory oddball paradigm were presented in order to elicit event-related potentials (ERPs). Recordings made outside the scanner were performed using the same stimuli and EEG system for comparison purposes. We were able to retrieve visual P100 and auditory P300 evoked potentials at 9.4 T static magnetic field after correction of the ballistocardiogram artefact using independent component analysis. The latencies of the ERPs recorded at 9.4 T were not different from those recorded at 0 T. The amplitudes of ERPs were higher at 9.4 T when compared to recordings at 0 T. Nevertheless, it seems that the increased amplitudes of the ERPs are due to the effect of the ultra-high field on the EEG recording system rather than to alterations in the intrinsic processes that generate the electrophysiological responses.
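
    The ballistocardiogram correction step can be sketched generically. The snippet below is a hedged illustration on synthetic multi-channel data using scikit-learn's FastICA rather than the authors' (unspecified) pipeline; the channel count, simulated cardiac artifact, and component-selection rule are all assumptions.

    ```python
    # Hedged sketch: ICA-based removal of a cardiac-like artifact from synthetic EEG.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    n_samples, n_channels = 5000, 32
    t = np.arange(n_samples) / 1000.0                        # 1 kHz sampling, seconds
    neural = rng.normal(0.0, 1.0, (n_samples, n_channels))   # stand-in for brain activity
    bcg = np.sin(2 * np.pi * 1.2 * t)[:, None]               # ~72 bpm cardiac-like artifact
    eeg = neural + bcg * rng.uniform(0.5, 2.0, n_channels)   # artifact mixed into all channels

    ica = FastICA(n_components=n_channels, random_state=0, max_iter=1000)
    sources = ica.fit_transform(eeg)                         # unmix into independent components

    # Pick the component most correlated with a cardiac reference (here the known
    # artifact waveform; in practice an ECG channel or template would be used).
    corrs = [abs(np.corrcoef(sources[:, k], bcg[:, 0])[0, 1]) for k in range(n_channels)]
    artifact_idx = int(np.argmax(corrs))
    sources[:, artifact_idx] = 0.0                           # zero out the artifact component
    eeg_clean = ica.inverse_transform(sources)               # back-project to channel space
    print("Removed component:", artifact_idx, "| cleaned shape:", eeg_clean.shape)
    ```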

  2. The influence of Mozart's sonata K.448 on visual attention: an ERPs study.

    PubMed

    Zhu, Weina; Zhao, Lun; Zhang, Junjun; Ding, Xiaojun; Liu, Haiwei; Ni, Enzhi; Ma, Yuanye; Zhou, Changle

    2008-03-21

    In the present study, the effects of Mozart's sonata K.448 on voluntary and involuntary attention were investigated by recording and analyzing behavioral and event-related potential (ERP) data in a three-stimulus visual oddball task. P3a (related to involuntary attention) and P3b (related to voluntary attention) were analyzed. The "Mozart effect" was shown in the ERP data but not in the behavioral data. This study replicated the previous finding of a Mozart effect on voluntary attention: the P3b latency was influenced by Mozart's sonata K.448, whereas no change in P3a latency was induced by the music. At the same time, decreased P3a and P3b amplitudes were found in the music condition. We interpret this change as a positive "Mozart effect" on involuntary attention (P3a) and a negative "Mozart effect" on voluntary attention (P3b). We conclude that Mozart's sonata K.448 showed effects on both involuntary and voluntary attention in our study, but that these effects operate through different mechanisms.
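
    P3a/P3b latency and amplitude are typically quantified as the peak within a post-stimulus window. The sketch below shows one generic way to do this on a synthetic ERP; the sampling rate and window bounds are assumptions, not the study's parameters:

```python
# Generic ERP peak measurement sketch: find the P3 peak amplitude and latency
# in a 250-500 ms post-stimulus window of a synthetic averaged waveform.
import numpy as np

fs = 500.0                                     # sampling rate in Hz (assumed)
t = np.arange(-0.2, 0.8, 1.0 / fs)             # epoch from -200 ms to 800 ms
erp = 5e-6 * np.exp(-((t - 0.35) ** 2) / (2 * 0.05 ** 2))   # synthetic P3-like wave

window = (t >= 0.25) & (t <= 0.50)             # search window: 250-500 ms (assumed)
idx = np.argmax(erp[window])
peak_amplitude = erp[window][idx]              # in volts
peak_latency = t[window][idx]                  # in seconds

print(f"P3 peak: {peak_amplitude * 1e6:.1f} uV at {peak_latency * 1e3:.0f} ms")
```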

  3. Distraction in a visual multi-deviant paradigm: behavioral and event-related potential effects.

    PubMed

    Grimm, Sabine; Bendixen, Alexandra; Deouell, Leon Y; Schröger, Erich

    2009-06-01

    The present study aimed at investigating visual distraction in a serial, multi-deviant oddball paradigm with deviant stimuli occurring regularly (every third trial), having a larger overall probability (1/3), and low dimension-specific probability (1/9). Participants performed a categorization task (odd/even) on centrally presented digits. Task-irrelevant geometrical forms were presented concurrently in the left and right periphery of the target. These forms were green triangles that, in every third trial, contained a deviancy either in location, color, or shape at the left or right peripheral position. Behavioral performance and event-related potentials (ERPs) were measured during the multi-deviant blocks and during corresponding control blocks to compensate for physical differences. Results revealed prolonged reaction times for the categorization task in trials containing a deviant feature relative to the respective control condition. Furthermore, two negative shifts were observed in the ERPs for deviant compared to control stimuli, the early one at the ascending part of the N1 component, and the later one at the onset latency of the N2 component. Deviant displays violating a sequential regularity on one of the dimensions thus elicit respective posterior ERP components of change detection and a deterioration in task performance even when they occur as frequently as in every third trial of a sequence. In analogy to findings in audition, these results reveal the importance of regularity processing and its immediate consequences for adaptive behavior also in vision.
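
    A deviant schedule with the stated structure (a deviant on every third trial, so p = 1/3 overall and 1/9 per dimension) could be generated along these lines; this is an illustrative sketch, not the authors' stimulus code:

```python
# Illustrative deviant schedule: every third trial carries a deviant (overall p = 1/3),
# and the deviant dimension is drawn at random from {location, color, shape} (p = 1/9 each).
import random

def make_sequence(n_trials, seed=0):
    rng = random.Random(seed)
    dimensions = ["location", "color", "shape"]
    trials = []
    for i in range(1, n_trials + 1):
        if i % 3 == 0:                       # deviant on every third trial
            trials.append(rng.choice(dimensions))
        else:
            trials.append("standard")
    return trials

seq = make_sequence(18)
print(seq)
print("deviant proportion:", sum(t != "standard" for t in seq) / len(seq))  # 1/3
```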

  4. Automatic Change Detection to Facial Expressions in Adolescents: Evidence from Visual Mismatch Negativity Responses

    PubMed Central

    Liu, Tongran; Xiao, Tong; Shi, Jiannong

    2016-01-01

    Adolescence is a critical period for the neurodevelopment of social-emotional processing, wherein the automatic detection of changes in facial expressions is crucial for the development of interpersonal communication. Two groups of participants (an adolescent group and an adult group) were recruited to complete an emotional oddball task featuring one happy and one fearful condition. Event-related potentials were measured via electroencephalography and electrooculography recording to detect visual mismatch negativity (vMMN) with regard to the automatic detection of changes in facial expressions in the two age groups. The current findings demonstrated that the adolescent group showed more negative vMMN amplitudes than the adult group in the fronto-central region during the 120–200 ms interval. During the time window of 370–450 ms, only the adult group showed better automatic processing of fearful faces than of happy faces. The present study indicates that adolescents possess stronger automatic detection of changes in emotional expression relative to adults, and sheds light on the neurodevelopment of automatic processes concerning social-emotional information. PMID:27065927

  5. Auditory processing under cross-modal visual load investigated with simultaneous EEG-fMRI.

    PubMed

    Regenbogen, Christina; De Vos, Maarten; Debener, Stefan; Turetsky, Bruce I; Mössnang, Carolin; Finkelmeyer, Andreas; Habel, Ute; Neuner, Irene; Kellermann, Thilo

    2012-01-01

    Cognitive task demands in one sensory modality (T1) can have beneficial effects on a secondary task (T2) in a different modality, due to the reduced top-down control needed to inhibit the secondary task, as well as crossmodal spreading of attention. This contrasts with findings of cognitive load compromising processing in a secondary modality. We manipulated cognitive load within one modality (visual) and studied the consequences of cognitive demands on secondary (auditory) processing. Fifteen healthy participants underwent a simultaneous EEG-fMRI experiment. Data from eight participants were obtained outside the scanner for validation purposes. The primary task (T1) was a visual working memory (WM) task with four conditions, while the secondary task (T2) consisted of an auditory oddball stream, which participants were asked to ignore. The fMRI results revealed fronto-parietal WM network activations in response to the T1 task manipulation. This was accompanied by significantly higher reaction times and lower hit rates with increasing task difficulty, confirming successful manipulation of WM load. Amplitudes of auditory evoked potentials, representing fundamental auditory processing, showed a continuous augmentation that was systematically related to cross-modal cognitive load. With increasing WM load, primary auditory cortices were increasingly deactivated, while psychophysiological interaction results suggested the emergence of connectivity between auditory cortices and visual WM regions. These results suggest differential effects of crossmodal attention on fundamental auditory processing. We suggest a continuous allocation of resources to brain regions processing the primary task when the central executive is challenged under high cognitive load.

  6. Near-infrared spectroscopy assessment of divided visual attention task-invoked cerebral hemodynamics during prolonged true driving

    NASA Astrophysics Data System (ADS)

    Li, Ting; Zhao, Yue; Sun, Yunlong; Gao, Yuan; Su, Yu; Hetian, Yiyi; Chen, Min

    2015-03-01

    Driver fatigue is one of the leading causes of traffic accidents, so it is imperative to develop a technique to monitor driver fatigue in real driving situations. Functional near-infrared spectroscopy (fNIRS) can now measure brain functional activity noninvasively and sensitively in terms of hemodynamic responses, suggesting that it may be possible to detect a fatigue-specific brain activity signal. We developed a sensitive, portable, absolute-measure fNIRS device and used it to monitor cerebral hemodynamics in car drivers during prolonged real driving. An oddball protocol was employed to engage the drivers' divided visual attention, a critical function in safe driving. We found that oxyhemoglobin concentration and blood volume in the prefrontal lobe increased dramatically with driving duration (taken as a proxy for fatigue; 2-10 hours), while deoxyhemoglobin concentration peaked at 4 hours and then decreased slowly. Behavioral performance showed a clear decrement only after 6 hours. Our study showed the strong potential of fNIRS combined with a divided visual attention protocol for monitoring the degree of driving fatigue. Our findings indicate that the fNIRS-measured hemodynamic parameters were more sensitive than behavioral performance evaluation.
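
    The conversion from optical-density changes to oxy- and deoxyhemoglobin concentration changes is commonly done with the modified Beer-Lambert law. The sketch below shows the two-wavelength version; the extinction coefficients, path length, and differential pathlength factor are placeholder values, not the instrument's calibration:

```python
# Modified Beer-Lambert law sketch: Delta OD(lambda) = d * DPF * sum_i eps_i(lambda) * Delta c_i.
# With two wavelengths this is a 2x2 linear system for [Delta HbO, Delta HbR].
import numpy as np

# Rows: wavelengths; columns: [HbO, HbR] extinction coefficients (placeholder values/units).
E = np.array([[1.5, 3.8],    # "760 nm"  (placeholder)
              [2.5, 1.8]])   # "850 nm"  (placeholder)
d = 3.0                      # source-detector separation in cm (assumed)
dpf = 6.0                    # differential pathlength factor (assumed, same for both wavelengths)

delta_od = np.array([0.012, 0.018])                  # measured Delta OD at the two wavelengths
delta_conc = np.linalg.solve(E * d * dpf, delta_od)  # [Delta HbO, Delta HbR]
print("Delta HbO = %.4f, Delta HbR = %.4f (arbitrary concentration units)" % tuple(delta_conc))
```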

  7. Realistic and Schematic Visuals.

    ERIC Educational Resources Information Center

    Heuvelman, Ard

    1996-01-01

    A study examined three different visual formats (studio presenter only, realistic visuals, or schematic visuals) of an educational television program. Recognition and recall of the abstract subject matter were measured in 101 adult viewers directly after the program and 3 months later. The schematic version yielded better recall of the program,…

  8. ESnet Visualization Widgets

    SciTech Connect

    Gopal Vaswani, Jon Dugan

    2012-07-01

    The ESnet Visualization widgets are data visualization widgets for use in web browsers that aid in the visualization of computer networks. In particular, the widgets are targeted at displaying timeseries and topology data. They were developed for use in the MyESnet portal but are general enough to be used elsewhere. The widgets are built using the d3.js library.

  9. Semantic Visualization Mapping for Illustrative Volume Visualization

    NASA Astrophysics Data System (ADS)

    Rautek, P.; Bruckner, S.; Gröller, M. E.

    2009-04-01

    Measured and simulated data is usually divided into several meaningful intervals that are relevant to the domain expert. Examples from medicine are the specific semantics for different measuring modalities. A PET scan of a brain measures brain activity. It shows regions of homogeneous activity that are labeled by experts with semantic values such as low brain activity or high brain activity. Diffusion MRI data provides information about the healthiness of tissue regions and is classified by experts with semantic values like healthy, diseased, or necrotic. Medical CT data encode the measured density values in Hounsfield units. Specific intervals of the Hounsfield scale refer to different tissue types like air, soft tissue, bone, contrast enhanced vessels, etc. However, the semantic parameters from expert domains are not necessarily used to describe a mapping between the volume attributes and visual appearance. Volume rendering techniques commonly map attributes of the underlying data on visual appearance via a transfer function. Transfer functions are a powerful tool to achieve various visualization mappings. The specification of transfer functions is a complex task. The user has to have expert knowledge about the underlying rendering technique to achieve the desired results. Especially the specification of higher-dimensional transfer functions is challenging. Common user interfaces provide methods to brush in two dimensions. While brushing is an intuitive method to select regions of interest or to specify features, user interfaces for higher-dimensions are more challenging and often non-intuitive. For seismic data the situation is even more difficult since the data typically consists of many more volumetric attributes than for example medical datasets. Scientific illustrators are experts in conveying information by visual means. They also make use of semantics in a natural way describing visual abstractions such as shading, tone, rendering style, saturation
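
    A minimal 1D transfer function in the spirit described above might map approximate Hounsfield-unit intervals to semantic labels and RGBA values; the interval boundaries and colors below are illustrative, not clinical:

```python
# Minimal 1D transfer function sketch: map (approximate) Hounsfield-unit intervals
# to semantic labels and RGBA values, as a volume renderer would.
import numpy as np

# (lower HU bound, upper HU bound, semantic label, (R, G, B, opacity))
TRANSFER_FUNCTION = [
    (-1100,  -200, "air",              (0.0, 0.0, 0.0, 0.0)),
    ( -200,   200, "soft tissue",      (0.8, 0.5, 0.4, 0.2)),
    (  200,   500, "contrast/vessel",  (0.9, 0.1, 0.1, 0.6)),
    (  500,  3000, "bone",             (1.0, 1.0, 0.9, 0.9)),
]

def classify(hu_value):
    for lo, hi, label, rgba in TRANSFER_FUNCTION:
        if lo <= hu_value < hi:
            return label, rgba
    return "unclassified", (0.0, 0.0, 0.0, 0.0)

for hu in (-1000, 40, 350, 1200):
    print(hu, "->", classify(hu))
```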

  10. Visual examination apparatus

    NASA Technical Reports Server (NTRS)

    Haines, R. F.; Fitzgerald, J. W.; Rositano, S. A. (Inventor)

    1973-01-01

    An automated visual examination apparatus for measuring visual sensitivity and mapping blind spot location is described. The apparatus includes a projection system for displaying to a patient a series of visual stimuli, a response switch enabling him to indicate his reaction to the stimuli, and a recording system responsive to both the visual stimuli per se and the patient's response. The recording system provides a correlated permanent record of both stimuli and response from which a substantive and readily apparent visual evaluation can be made.

  11. [Progressive visual agnosia].

    PubMed

    Sugimoto, Azusa; Futamura, Akinori; Kawamura, Mitsuru

    2011-10-01

    Progressive visual agnosia was discovered in the 20th century following the discovery of classical non-progressive visual agnosia. In contrast to the classical type, which is caused by cerebral vascular disease or traumatic injury, progressive visual agnosia is a symptom of neurological degeneration. The condition of progressive visual loss, including visual agnosia, and posterior cerebral atrophy was named posterior cortical atrophy (PCA) by Benson et al. (1988). Progressive visual agnosia is also observed in semantic dementia (SD) and other degenerative diseases, but there is a difference in the subtype of visual agnosia associated with these diseases. Lissauer (1890) classified visual agnosia into apperceptive and associative types, and in most cases PCA is associated with the apperceptive type. However, SD patients exhibit symptoms of associative visual agnosia before changing to those of semantic memory disorder. Insights into progressive visual agnosia have helped us understand the visual system and discover how we "perceive" the outer world neuronally, with regard to consciousness. Although PCA is a type of atypical dementia, its diagnosis is important to enable patients to live better lives with appropriate functional support.

  12. Functional Visual Loss

    PubMed Central

    Bruce, Beau B; Newman, Nancy J

    2010-01-01

    Synopsis Neurologists frequently evaluate patients complaining of vision loss, especially when the patient has been examined by an ophthalmologist who has found no ocular disease. A significant proportion of patients presenting to the neurologist with visual complaints will have non-organic or functional visual loss. While there are examination techniques which can aid in the detection and diagnosis of functional visual loss, the frequency with which functional visual loss occurs concomitantly with organic disease warrants substantial caution on the part of the clinician. Furthermore, purely functional visual loss is never a diagnosis of exclusion, and must be supported by positive findings on examination that demonstrate normal visual function. The relationship of true psychological disease and functional visual loss is unclear and most patients respond well to simple reassurance. PMID:20638000

  13. RAVE: Rapid Visualization Environment

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M.; Anderson, Kevin; Simoudis, Avangelos

    1994-01-01

    Visualization is used in the process of analyzing large, multidimensional data sets. However, the selection and creation of visualizations that are appropriate for the characteristics of a particular data set and the satisfaction of the analyst's goals is difficult. The process consists of three tasks that are performed iteratively: generate, test, and refine. The performance of these tasks requires the utilization of several types of domain knowledge that data analysts do not often have. Existing visualization systems and frameworks do not adequately support the performance of these tasks. In this paper we present the RApid Visualization Environment (RAVE), a knowledge-based system that interfaces with commercial visualization frameworks and assists a data analyst in quickly and easily generating, testing, and refining visualizations. RAVE was used for the visualization of in situ measurement data captured by spacecraft.

  14. Analyzing visual signals as visual scenes.

    PubMed

    Allen, William L; Higham, James P

    2013-07-01

    The study of visual signal design is gaining momentum as techniques for studying signals become more sophisticated and more freely available. In this paper we discuss methods for analyzing the color and form of visual signals, for integrating signal components into visual scenes, and for producing visual signal stimuli for use in psychophysical experiments. Our recommended methods aim to be rigorous, detailed, quantitative, objective, and where possible based on the perceptual representation of the intended signal receiver(s). As methods for analyzing signal color and luminance have been outlined in previous publications we focus on analyzing form information by discussing how statistical shape analysis (SSA) methods can be used to analyze signal shape, and spatial filtering to analyze repetitive patterns. We also suggest the use of vector-based approaches for integrating multiple signal components. In our opinion elliptical Fourier analysis (EFA) is the most promising technique for shape quantification but we await the results of empirical comparison of techniques and the development of new shape analysis methods based on the cognitive and perceptual representations of receivers. Our manuscript should serve as an introductory guide to those interested in measuring visual signals, and while our examples focus on primate signals, the methods are applicable to quantifying visual signals in most taxa.
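
    A simplified relative of the recommended shape analysis is the classical contour Fourier descriptor, which treats the closed outline as a complex signal and takes its FFT; the sketch below uses that simpler technique, not full elliptical Fourier analysis:

```python
# Simplified contour Fourier-descriptor sketch (a simpler relative of EFA).
import numpy as np

def fourier_descriptors(x, y, n_coeffs=10):
    """First n_coeffs Fourier descriptors of a closed contour.
    Dropping the DC term removes translation; taking magnitudes ignores rotation
    and starting point; dividing by |c1| normalizes scale."""
    z = np.asarray(x, dtype=float) + 1j * np.asarray(y, dtype=float)
    coeffs = np.fft.fft(z)
    coeffs = coeffs[1:n_coeffs + 1]
    return np.abs(coeffs) / np.abs(coeffs[0])

# Example: an ellipse-shaped outline sampled at 200 points.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
x, y = 3.0 * np.cos(theta), 1.0 * np.sin(theta)
print(np.round(fourier_descriptors(x, y), 3))
```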

  15. Visual dependence and BPPV.

    PubMed

    Agarwal, K; Bronstein, A M; Faldon, M E; Mandalà, M; Murray, K; Silove, Y

    2012-06-01

    The increased visual dependence noted in some vestibular patients may be secondary to their vertigo. We examine whether a single, brief vertigo attack, such as in benign paroxysmal positional vertigo (BPPV), modifies visual dependency. Visual dependency was measured before and after the Hallpike manoeuvre with (a) the Rod and Frame and the Rod and Disc techniques whilst seated and (b) the postural sway induced by visual roll-motion stimulation. Three subject groups were studied: 20 patients with BPPV (history and positive Hallpike manoeuvre; PosH group), 20 control patients (history of BPPV but negative Hallpike manoeuvre; NegH group) and 20 normal controls. Our findings show that while both patient groups showed enhanced visual dependency, the PosH and the normal control group decreased visual dependency on repetition of the visual tasks after the Hallpike manoeuvre. NegH patients differed from PosH patients in that their high visual dependency did not diminish on repetition of the visual stimuli; they scored higher on the situational characteristic questionnaire ('visual vertigo' symptoms) and showed higher incidence of migraine. We conclude that long term vestibular symptoms increase visual dependence but a single BPPV attack does not increase it further. Repetitive visual motion stimulation induces adaptation in visual dependence in peripheral vestibular disorders such as BPPV. A positional form of vestibular migraine may underlie the symptoms of some patients with a history of BPPV but negative Hallpike manoeuvre. The finding that they have non adaptable increased visual dependency may explain visuo-vestibular symptoms in this group and, perhaps more widely, in patients with migraine.

  16. Touch Accelerates Visual Awareness

    PubMed Central

    Lo Verde, Luca; Alais, David

    2017-01-01

    To efficiently interact with the external environment, our nervous system combines information arising from different sensory modalities. Recent evidence suggests that cross-modal interactions can be automatic and even unconscious, reflecting the ecological relevance of cross-modal processing. Here, we use continuous flash suppression (CFS) to directly investigate whether haptic signals can interact with visual signals outside of visual awareness. We measured suppression durations of visual gratings rendered invisible by CFS either during visual stimulation alone or during visuo-haptic stimulation. We found that active exploration of a haptic grating congruent in orientation with the suppressed visual grating reduced suppression durations both compared with visual-only stimulation and to incongruent visuo-haptic stimulation. We also found that the facilitatory effect of touch on visual suppression disappeared when the visual and haptic gratings were mismatched in either spatial frequency or orientation. Together, these results demonstrate that congruent touch can accelerate the rise to consciousness of a suppressed visual stimulus and that this unconscious cross-modal interaction depends on visuo-haptic congruency. Furthermore, since CFS suppression is thought to occur early in visual cortical processing, our data reinforce the evidence suggesting that visuo-haptic interactions can occur at the earliest stages of cortical processing. PMID:28210486

  17. Attention and visual memory in visualization and computer graphics.

    PubMed

    Healey, Christopher G; Enns, James T

    2012-07-01

    A fundamental goal of visualization is to produce images of data that support visual analysis, exploration, and discovery of novel insights. An important consideration during visualization design is the role of human visual perception. How we "see" details in an image can directly impact a viewer's efficiency and effectiveness. This paper surveys research on attention and visual perception, with a specific focus on results that have direct relevance to visualization and visual analytics. We discuss theories of low-level visual perception, then show how these findings form a foundation for more recent work on visual memory and visual attention. We conclude with a brief overview of how knowledge of visual attention and visual memory is being applied in visualization and graphics. We also discuss how challenges in visualization are motivating research in psychophysics.

  18. Visual field asymmetries in visual evoked responses

    PubMed Central

    Hagler, Donald J.

    2014-01-01

    Behavioral responses to visual stimuli exhibit visual field asymmetries, but cortical folding and the close proximity of visual cortical areas make electrophysiological comparisons between different stimulus locations problematic. Retinotopy-constrained source estimation (RCSE) uses distributed dipole models simultaneously constrained by multiple stimulus locations to provide separation between individual visual areas that is not possible with conventional source estimation methods. Magnetoencephalography and RCSE were used to estimate time courses of activity in V1, V2, V3, and V3A. Responses to left and right hemifield stimuli were not significantly different. Peak latencies for peripheral stimuli were significantly shorter than those for perifoveal stimuli in V1, V2, and V3A, likely related to the greater proportion of magnocellular input to V1 in the periphery. Consistent with previous results, sensor magnitudes for lower field stimuli were about twice as large as for upper field, which is only partially explained by the proximity to sensors for lower field cortical sources in V1, V2, and V3. V3A exhibited both latency and amplitude differences for upper and lower field responses. There were no differences for V3, consistent with previous suggestions that dorsal and ventral V3 are two halves of a single visual area, rather than distinct areas V3 and VP. PMID:25527151

  19. Visual Alert System

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A visual alert system resulted from circuitry developed by Applied Cybernetics Systems for Langley as part of a space related telemetry system. James Campman, Applied Cybernetics president, left the company and founded Grace Industries, Inc. to manufacture security devices based on the Langley technology. His visual alert system combines visual and audible alerts for hearing impaired people. The company also manufactures an arson detection device called the electronic nose, and is currently researching additional applications of the NASA technology.

  20. Delayed visual maturation.

    PubMed Central

    Cole, G F; Hungerford, J; Jones, R B

    1984-01-01

    Sixteen blind babies who were considered to be showing the characteristics of delayed visual maturation were studied prospectively. The diagnosis was made on clinical grounds, and the criteria for this are discussed. All of these infants developed visual responses between 4 and 6 months of age and had normal or near normal visual acuities by 1 year of age. Long term follow up, however, has shown neurological abnormalities in some of these children. PMID:6200080

  1. Progress in Scientific Visualization

    SciTech Connect

    Max, N

    2004-11-15

    Visualization of observed data or simulation output is important to science and engineering. I have been particularly interested in visualizing 3-D structures, and report here my personal impressions on progress in the last 20 years in visualizing molecules, scalar fields, and vector fields and their associated flows. I have tried to keep the survey and list of references manageable, so I apologize to those authors whose techniques I have not mentioned, or have described without a reference citation.

  2. Visual Analytics 101

    SciTech Connect

    Scholtz, Jean; Burtner, Edwin R.; Cook, Kristin A.

    2016-06-13

    This course will introduce the field of Visual Analytics to HCI researchers and practitioners, highlighting the contributions they can make to this field. Topics will include a definition of visual analytics along with examples of current systems, types of tasks and end users, issues in defining user requirements, design of visualizations and interactions, guidelines and heuristics, the current state of user-centered evaluations, and metrics for evaluation. We encourage designers, HCI researchers, and HCI practitioners to attend to learn how their skills can contribute to advancing the state of the art of visual analytics.

  3. Differential effect of visual motion adaption upon visual cortical excitability.

    PubMed

    Lubeck, Astrid J A; Van Ombergen, Angelique; Ahmad, Hena; Bos, Jelte E; Wuyts, Floris L; Bronstein, Adolfo M; Arshad, Qadeer

    2017-03-01

    The objectives of this study were 1) to probe the effects of visual motion adaptation on early visual and V5/MT cortical excitability and 2) to investigate whether changes in cortical excitability following visual motion adaptation are related to the degree of visual dependency, i.e., an overreliance on visual cues compared with vestibular or proprioceptive cues. Participants were exposed to a roll motion visual stimulus before, during, and after visual motion adaptation. At these stages, 20 transcranial magnetic stimulation (TMS) pulses at phosphene threshold values were applied over early visual and V5/MT cortical areas from which the probability of eliciting a phosphene was calculated. Before and after adaptation, participants aligned the subjective visual vertical in front of the roll motion stimulus as a marker of visual dependency. During adaptation, early visual cortex excitability decreased whereas V5/MT excitability increased. After adaptation, both early visual and V5/MT excitability were increased. The roll motion-induced tilt of the subjective visual vertical (visual dependence) was not influenced by visual motion adaptation and did not correlate with phosphene threshold or visual cortex excitability. We conclude that early visual and V5/MT cortical excitability is differentially affected by visual motion adaptation. Furthermore, excitability in the early or late visual cortex is not associated with an increase in visual reliance during spatial orientation. Our findings complement earlier studies that have probed visual cortical excitability following motion adaptation and highlight the differential role of the early visual cortex and V5/MT in visual motion processing.NEW & NOTEWORTHY We examined the influence of visual motion adaptation on visual cortex excitability and found a differential effect in V1/V2 compared with V5/MT. Changes in visual excitability following motion adaptation were not related to the degree of an individual's visual dependency.

  4. Personal Visualization and Personal Visual Analytics.

    PubMed

    Huang, Dandan; Tory, Melanie; Aseniero, Bon Adriel; Bartram, Lyn; Bateman, Scott; Carpendale, Sheelagh; Tang, Anthony; Woodbury, Robert

    2015-03-01

    Data surrounds each and every one of us in our daily lives, ranging from exercise logs, to archives of our interactions with others on social media, to online resources pertaining to our hobbies. There is enormous potential for us to use these data to understand ourselves better and make positive changes in our lives. Visualization (Vis) and visual analytics (VA) offer substantial opportunities to help individuals gain insights about themselves, their communities and their interests; however, designing tools to support data analysis in non-professional life brings a unique set of research and design challenges. We investigate the requirements and research directions required to take full advantage of Vis and VA in a personal context. We develop a taxonomy of design dimensions to provide a coherent vocabulary for discussing personal visualization and personal visual analytics. By identifying and exploring clusters in the design space, we discuss challenges and share perspectives on future research. This work brings together research that was previously scattered across disciplines. Our goal is to call research attention to this space and engage researchers to explore the enabling techniques and technology that will support people to better understand data relevant to their personal lives, interests, and needs.

  5. Creativity, Visualization Abilities, and Visual Cognitive Style

    ERIC Educational Resources Information Center

    Kozhevnikov, Maria; Kozhevnikov, Michael; Yu, Chen Jiao; Blazhenkova, Olesya

    2013-01-01

    Background: Despite the recent evidence for a multi-component nature of both visual imagery and creativity, there have been no systematic studies on how the different dimensions of creativity and imagery might interrelate. Aims: The main goal of this study was to investigate the relationship between different dimensions of creativity (artistic and…

  6. English 3135: Visual Rhetoric

    ERIC Educational Resources Information Center

    Gatta, Oriana

    2013-01-01

    As an advanced rhetoric and composition doctoral student, I taught Engl 3135: Visual Rhetoric, a three-credit upper-level course offered by the Department of English at Georgia State University. Mary E. Hocks originally designed this course in 2000 to, in her words, "introduce visual information design theories and practices for writers [and]…

  7. Mandarin Visual Speech Information

    ERIC Educational Resources Information Center

    Chen, Trevor H.

    2010-01-01

    While the auditory-only aspects of Mandarin speech are heavily-researched and well-known in the field, this dissertation addresses its lesser-known aspects: The visual and audio-visual perception of Mandarin segmental information and lexical-tone information. Chapter II of this dissertation focuses on the audiovisual perception of Mandarin…

  8. Complex Digital Visual Systems

    ERIC Educational Resources Information Center

    Sweeny, Robert W.

    2013-01-01

    This article identifies possibilities for data visualization as art educational research practice. The author presents an analysis of the relationship between works of art and digital visual culture, employing aspects of network analysis drawn from the work of Barabási, Newman, and Watts (2006) and Castells (1994). Describing complex network…

  9. Visualizing Qualitative Information

    ERIC Educational Resources Information Center

    Slone, Debra J.

    2009-01-01

    The abundance of qualitative data in today's society and the need to easily scrutinize, digest, and share this information calls for effective visualization and analysis tools. Yet, no existing qualitative tools have the analytic power, visual effectiveness, and universality of familiar quantitative instruments like bar charts, scatter-plots, and…

  10. Milford Visual Communications Project.

    ERIC Educational Resources Information Center

    Milford Exempted Village Schools, OH.

    This study discusses a visual communications project designed to develop activities to promote visual literacy at the elementary and secondary school levels. The project has four phases: (1) perception of basic forms in the environment, what these forms represent, and how they inter-relate; (2) discovery and communication of more complex…

  11. Interactive Visualization of Dependencies

    ERIC Educational Resources Information Center

    Moreno, Camilo Arango; Bischof, Walter F.; Hoover, H. James

    2012-01-01

    We present an interactive tool for browsing course requisites as a case study of dependency visualization. This tool uses multiple interactive visualizations to allow the user to explore the dependencies between courses. A usability study revealed that the proposed browser provides significant advantages over traditional methods, in terms of…

  12. Visual Factors in Reading

    ERIC Educational Resources Information Center

    Singleton, Chris; Henderson, Lisa-Marie

    2006-01-01

    This article reviews current knowledge about how the visual system recognizes letters and words, and the impact on reading when parts of the visual system malfunction. The physiology of eye and brain places important constraints on how we process text, and the efficient organization of the neurocognitive systems involved is not inherent but…

  13. Visual Arts Research, 1994.

    ERIC Educational Resources Information Center

    Gardner, Nancy C., Ed.; Thompson, Christine, Ed.

    1994-01-01

    This document consists of the two issues of the journal "Visual Arts in Research" published in 1994. This journal focuses on the theory and practice of visual arts education from educational, historical, philosophical, and psychological perspectives. Number 1 of this volume includes the following contributions: (1) "Zooming in on the Qualitative…

  14. Normalized medical information visualization.

    PubMed

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Somolinos, Roberto; Castro, Antonio; Velázquez, Iker; Moreno, Oscar; García-Pacheco, José L; Pascual, Mario; Salvador, Carlos H

    2015-01-01

    A new mark-up programming language is introduced in order to facilitate and improve the visualization of ISO/EN 13606 dual-model-based normalized medical information. This is the first time that visualization of normalized medical information has been addressed, and the language is intended to be used by medical professionals without an IT background.

  15. Complicating Visual Culture

    ERIC Educational Resources Information Center

    Daiello, Vicki; Hathaway, Kevin; Rhoades, Mindi; Walker, Sydney

    2006-01-01

    Arguing for complicating the study of visual culture, as advocated by James Elkins, this article explicates and explores Lacanian psychoanalytic theory and pedagogy in view of its implications for art education practice. Subjectivity, a concept of import for addressing student identity and the visual, steers the discussion informed by pedagogical…

  16. Integrating Visual Literacy

    ERIC Educational Resources Information Center

    McLester, Susan

    2006-01-01

    A major influence this generation of high school digital natives has had on the curriculum is their natural focus on visuals to convey information. They are driving the move toward visual information. In this article, one high school expands the learning environment from handhelds to videoconferencing. Sacred Heart has found that mixing the…

  17. Visual Arts Research, 1995.

    ERIC Educational Resources Information Center

    Gardner, Nancy C., Ed.; Thompson, Christine, Ed.

    1995-01-01

    This document consists of the two issues of the journal "Visual Arts Research" published in 1995. This journal focuses on the theory and practice of visual arts education from educational, historical, philosophical, and psychological perspectives. Number 1 of this volume includes the following contributions: (1) "Children's Sensitivity to…

  18. Problems Confronting Visual Culture

    ERIC Educational Resources Information Center

    Efland, Arthur D.

    2005-01-01

    A new movement has appeared recommending, in part, that the field of art education should lessen its traditional ties to drawing, painting, and the study of masterpieces to become the study of visual culture. Visual cultural study refers to an all-encompassing category of cultural practice that includes the fine arts but also deals with the study…

  19. Program Supports Scientific Visualization

    NASA Technical Reports Server (NTRS)

    Keith, Stephan

    1994-01-01

    Primary purpose of General Visualization System (GVS) computer program is to support scientific visualization of data generated by panel-method computer program PMARC_12 (inventory number ARC-13362) on Silicon Graphics Iris workstation. Enables user to view PMARC geometries and wakes as wire frames or as light shaded objects. GVS is written in C language.

  20. Visual Complexity: A Review

    ERIC Educational Resources Information Center

    Donderi, Don C.

    2006-01-01

    The idea of visual complexity, the history of its measurement, and its implications for behavior are reviewed, starting with structuralism and Gestalt psychology at the beginning of the 20th century and ending with visual complexity theory, perceptual learning theory, and neural circuit theory at the beginning of the 21st. Evidence is drawn from…

  1. Bayesian Visual Odometry

    NASA Astrophysics Data System (ADS)

    Center, Julian L.; Knuth, Kevin H.

    2011-03-01

    Visual odometry refers to tracking the motion of a body using an onboard vision system. Practical visual odometry systems combine the complementary accuracy characteristics of vision and inertial measurement units. The Mars Exploration Rovers, Spirit and Opportunity, used this type of visual odometry. The visual odometry algorithms in Spirit and Opportunity were based on Bayesian methods, but a number of simplifying approximations were needed to deal with onboard computer limitations. Furthermore, the allowable motion of the rover had to be severely limited so that computations could keep up. Recent advances in computer technology make it feasible to implement a fully Bayesian approach to visual odometry. This approach combines dense stereo vision, dense optical flow, and inertial measurements. As with all true Bayesian methods, it also determines error bars for all estimates. This approach also offers the possibility of using Micro-Electro Mechanical Systems (MEMS) inertial components, which are more economical, weigh less, and consume less power than conventional inertial components.
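
    The vision-plus-inertial fusion idea can be illustrated with a toy one-dimensional Kalman filter in which inertial acceleration drives the prediction and visual-odometry velocity drives the update. This is a stand-in for the dense, fully Bayesian approach described above, and every noise value below is an assumption:

```python
# Toy 1D Kalman filter fusing inertial acceleration (prediction) with
# visual-odometry velocity measurements (update), including "error bars" (covariance).
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])        # state: [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])          # control input: measured acceleration
H = np.array([[0.0, 1.0]])                   # visual odometry observes velocity
Q = 1e-3 * np.eye(2)                         # process noise (assumed)
R = np.array([[1e-2]])                       # VO measurement noise (assumed)

x = np.zeros((2, 1))                         # initial state estimate
P = np.eye(2)                                # initial covariance

def step(x, P, accel, vo_velocity):
    # Predict with the IMU acceleration.
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    # Update with the visual-odometry velocity.
    y = np.array([[vo_velocity]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for _ in range(50):
    x, P = step(x, P, accel=0.2, vo_velocity=1.0)   # constant synthetic readings
print("velocity estimate: %.2f +/- %.2f m/s" % (x[1, 0], np.sqrt(P[1, 1])))
```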

  2. Visualization of JPEG Metadata

    NASA Astrophysics Data System (ADS)

    Malik Mohamad, Kamaruddin; Deris, Mustafa Mat

    There is far more information embedded in a JPEG image than just graphics. Visualization of its metadata would benefit digital forensic investigators, letting them view embedded data, including data in corrupted images where no graphics can be displayed, to assist in evidence collection for cases such as child pornography or steganography. Tools such as metadata readers, editors, and extraction tools already exist, but they mostly focus on visualizing the attribute information of JPEG Exif. However, none consolidate the markers summary, header structure, Huffman table, and quantization table in a single program. In this paper, metadata visualization is done by developing a program that is able to summarize all existing markers, the header structure, the Huffman table, and the quantization table in a JPEG. The result shows that visualization of metadata makes the hidden information within a JPEG easier to view.
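
    The "markers summary" part of such a visualization can be sketched as a simple walk over the JPEG byte stream, listing each marker and its segment length up to the start-of-scan marker; the file path in the usage comment is hypothetical:

```python
# Minimal JPEG markers-summary sketch: walk the byte stream and list each marker
# with its segment length, stopping at start-of-scan (0xFFDA), after which
# entropy-coded image data follows.
import struct

STANDALONE = {0xD8, 0xD9}         # SOI, EOI carry no length field
RESTART = set(range(0xD0, 0xD8))  # RST0-RST7 also carry no length field

def list_markers(path):
    with open(path, "rb") as f:
        data = f.read()
    i = 0
    while i + 1 < len(data):
        if data[i] != 0xFF:
            i += 1
            continue
        marker = data[i + 1]
        if marker in (0x00, 0xFF):          # byte stuffing / fill bytes
            i += 1
            continue
        if marker in STANDALONE or marker in RESTART:
            print(f"0xFF{marker:02X}  (no payload)")
            i += 2
            continue
        (length,) = struct.unpack(">H", data[i + 2:i + 4])  # length includes its own 2 bytes
        print(f"0xFF{marker:02X}  segment length {length}")
        if marker == 0xDA:                  # SOS: compressed scan data follows
            break
        i += 2 + length

# Usage (hypothetical path): list_markers("photo.jpg")
```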

  3. Flow visualization in fluid mechanics

    NASA Astrophysics Data System (ADS)

    Freymuth, Peter

    1993-01-01

    The history of flow visualization is reviewed and basic methods are examined. A classification of the field of physical flow visualization is presented. The introduction of major methods is discussed and discoveries made using flow visualization are reviewed. Attention is given to limitations and problem areas in the visual evaluation of velocity and vorticity fields and future applications for flow visualization are suggested.

  4. Architecture for Teraflop Visualization

    SciTech Connect

    Breckenridge, A.R.; Haynes, R.A.

    1999-04-09

    Sandia Laboratories' computational scientists are addressing a very important question: How do we get insight from the human combined with the computer-generated information? The answer inevitably leads to using scientific visualization. Going one technology leap further is teraflop visualization, where the computing model and interactive graphics are an integral whole to provide computing for insight. In order to implement our teraflop visualization architecture, all hardware installed or software coded will be based on open modules and dynamic extensibility principles. We will illustrate these concepts with examples in our three main research areas: (1) authoring content (the computer), (2) enhancing precision and resolution (the human), and (3) adding behaviors (the physics).

  5. Visualization of electronic density

    DOE PAGES

    Grosso, Bastien; Cooper, Valentino R.; Pine, Polina; ...

    2015-04-22

    An atom’s volume depends on its electronic density. Although this density can only be evaluated exactly for hydrogen-like atoms, there are many excellent numerical algorithms and packages to calculate it for other materials. 3D visualization of charge density is challenging, especially when several molecular/atomic levels are intertwined in space. We explore several approaches to 3D charge density visualization, including the extension of an anaglyphic stereo visualization application based on the AViz package to larger structures such as nanotubes. We will describe motivations and potential applications of these tools for answering interesting questions about nanotube properties.
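
    For reference, the exactly solvable case mentioned above is the hydrogen-like atom; a standard textbook expression for its ground-state density (included here as an illustration, not drawn from the paper) is:

```latex
% Ground-state (1s) electron density of a hydrogen-like atom with nuclear charge Z,
% where a_0 is the Bohr radius; this is the exactly solvable case referred to above.
\rho_{1s}(r) = \left|\psi_{100}(r)\right|^2
             = \frac{Z^3}{\pi a_0^3}\, e^{-2 Z r / a_0}
```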

  6. Visual complexity: a review.

    PubMed

    Donderi, Don C

    2006-01-01

    The idea of visual complexity, the history of its measurement, and its implications for behavior are reviewed, starting with structuralism and Gestalt psychology at the beginning of the 20th century and ending with visual complexity theory, perceptual learning theory, and neural circuit theory at the beginning of the 21st. Evidence is drawn from research on single forms, form and texture arrays and visual displays. Form complexity and form probability are shown to be linked through their reciprocal relationship in complexity theory, which is in turn shown to be consistent with recent developments in perceptual learning and neural circuit theory. Directions for further research are suggested.

  7. Network Visualization Design Using Prefuse Visualization Toolkit

    DTIC Science & Technology

    2008-03-01

    [Fragmentary text extracted from the report's references and figure list.] The report surveys visualization toolkits, including the JULIUS extendable software framework for surgical planning (Caesar, Berlin, Germany, 2001; http://www.caesar.de/) and the Piccolo toolkit with its monolithic PNode class hierarchy, and notes that all frameworks studied, except one, selected OpenGL as their graphical visualization toolkit.

  8. Visual imagery without visual perception: lessons from blind subjects

    NASA Astrophysics Data System (ADS)

    Bértolo, Helder

    2014-08-01

    The question regarding the relationship between visual imagery and visual perception remains open. Many studies have tried to determine whether the two processes share the same mechanisms or are independent, using different neural substrates. Most research has been directed towards whether activation of primary visual areas is needed during imagery. Here we review some of the work providing evidence for both claims. Studying visual imagery in blind subjects can be used as a way of answering some of those questions, namely whether it is possible to have visual imagery without visual perception. We present results from the work of our group using visual activation in dreams and its relation with the EEG's spectral components, showing that congenitally blind subjects have visual content in their dreams and are able to draw it; furthermore, their Visual Activation Index is negatively correlated with EEG alpha power. This study supports the hypothesis that it is possible to have visual imagery without visual experience.

  9. Speaking the Visual Language.

    ERIC Educational Resources Information Center

    Helmken, Charles M.

    1979-01-01

    The development of fine design in alumni publications is traced and the roles of typography, photography, illustration, paper, printing, and color in designing a magazine are discussed. The nature and importance of visual communication are considered. (JMF)

  10. Sequential document visualization.

    PubMed

    Mao, Yi; Dillon, Joshua; Lebanon, Guy

    2007-01-01

    Documents and other categorical valued time series are often characterized by the frequencies of short range sequential patterns such as n-grams. This representation converts sequential data of varying lengths to high dimensional histogram vectors which are easily modeled by standard statistical models. Unfortunately, the histogram representation ignores most of the medium and long range sequential dependencies making it unsuitable for visualizing sequential data. We present a novel framework for sequential visualization of discrete categorical time series based on the idea of local statistical modeling. The framework embeds categorical time series as smooth curves in the multinomial simplex summarizing the progression of sequential trends. We discuss several visualization techniques based on the above framework and demonstrate their usefulness for document visualization.
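
    The local statistical modeling idea can be sketched by sliding a window over a token sequence and recording the local category distribution at each position, which traces a curve on the probability simplex; the toy data below are illustrative:

```python
# Sliding-window local distribution sketch: each position yields a point on the
# probability simplex, and the sequence of points summarizes sequential trends.
from collections import Counter

def local_distributions(tokens, vocabulary, window=5):
    curve = []
    for i in range(len(tokens) - window + 1):
        counts = Counter(tokens[i:i + window])
        total = sum(counts[w] for w in vocabulary)
        curve.append(tuple(counts[w] / total for w in vocabulary))
    return curve

tokens = "a b a a c b a c c c b a".split()
vocab = ["a", "b", "c"]
for point in local_distributions(tokens, vocab):
    print(point)   # each point lies on the 2-simplex: its coordinates sum to 1
```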

  11. Visualization Design Environment

    SciTech Connect

    Pomplun, A.R.; Templet, G.J.; Jortner, J.N.; Friesen, J.A.; Schwegel, J.; Hughes, K.R.

    1999-02-01

    Improvements in the performance and capabilities of computer software and hardware systems, combined with advances in Internet technologies, have spurred innovative developments in the areas of modeling, simulation, and visualization. These developments combine to make it possible to create an environment where engineers can design, prototype, analyze, and visualize components in virtual space, saving the time and expense incurred during numerous design and prototyping iterations. The Visualization Design Centers located at Sandia National Laboratories are facilities built specifically to promote the "design by team" concept. This report focuses on designing, developing, and deploying this environment by detailing the design of the facility, software infrastructure, and hardware systems that comprise this new visualization design environment, and describes case studies that document successful application of this environment.

  12. Case study: Wildfire visualization

    SciTech Connect

    Ahrens, J.; McCormick, P.; Bossert, J.; Reisner, J.; Winterkamp, J.

    1997-11-01

    The ability to forecast the progress of crisis events would significantly reduce human suffering and loss of life, the destruction of property, and expenditures for assessment and recovery. Los Alamos National Laboratory has established a scientific thrust in crisis forecasting to address this national challenge. In the initial phase of this project, scientists at Los Alamos are developing computer models to predict the spread of a wildfire. Visualization of the results of the wildfire simulation will be used by scientists to assess the quality of the simulation and eventually by fire personnel as a visual forecast of the wildfire's evolution. The fire personnel and scientists want the visualization to look as realistic as possible without compromising scientific accuracy. This paper describes how the visualization was created, analyzes the tools and approach that were used, and suggests directions for future work and research.

  13. The Visual System

    MedlinePlus

    NEI for Kids > The Visual System: page covering All About Vision, About the Eye, the Ask a Scientist video series, eye health and safety, first aid tips, healthy vision tips, protective eyewear, and sports and your eyes.

  14. Training Visual Attention

    ERIC Educational Resources Information Center

    Mulholland, Thomas B.

    1974-01-01

    The effects of brain waves and alpha rhythms on attentiveness to visual stimuli are discussed, and preliminary research findings and research needs are considered in connection with measuring and training for attention. (LH)

  15. Toward Geometric Visual Servoing

    DTIC Science & Technology

    2002-09-26

    [Fragmentary text extracted from the report's reference list.] Cited works include papers on image-based visual servoing of an under-actuated dynamic rigid-body system (IEEE Transactions on Robotics and Automation, 18(2):187-198, April 2002) and the Hutchinson, Hager, and Corke tutorial on visual servo control (IEEE Transactions on Robotics and Automation).

  16. Extended Visual Servoing

    DTIC Science & Technology

    2003-04-01

    [Fragmentary text extracted from the report's reference list.] Cited works include the Hutchinson, Hager, and Corke tutorial on visual servo control (IEEE Transactions on Robotics and Automation, 1996) and a paper on 2 1/2 D visual servoing by S. Boudet and co-authors (IEEE Transactions on Robotics and Automation, 15(2):238-250, April 1999).

  17. Helicopter Visual Aid System

    NASA Technical Reports Server (NTRS)

    Baisley, R. L.

    1973-01-01

    The results of an evaluation of police helicopter effectiveness revealed a need for improved visual capability. A JPL program developed a method that would enhance visual observation capability for both day and night usage and demonstrated the feasibility of the adopted approach. This approach made use of remote pointable optics, a display screen, a slaved covert searchlight, and a coupled camera. The approach was proved feasible through field testing and by judgement against evaluation criteria.

  18. Visual illusions and hallucinations.

    PubMed

    Kölmel, H W

    1993-08-01

    Visual illusions and hallucinations may accompany a wide variety of disorders with many different aetiologies; therefore, they are non-specific phenomena. Lesions in the visual pathway may be associated with visual misperceptions. In these cases more exact information about the misperceptions--whether they are monocular or binocular, present in the whole visual field or a hemifield--may contribute to diagnostic accuracy and to a more comprehensive understanding of the patient and his state of mind. Illusions such as perseveration, monocular diplopia and polyopia, and dysmorphopsia may also occur in healthy individuals, but they are found most often in patients with epilepsy, migraine and stroke. These phenomena do not permit exact localization and definition of an aetiology, but lesions in the occipital and occipitotemporal regions near the visual pathway are involved in most cases. Hallucinations always represent a pathological form of perception. They are classified as unformed (photopsias) or formed (complex). Photopsias may be described in terms of colour, shape and brightness. Their wide variety makes it difficult, if not impossible, to arrive at an exact description of their aetiology, but it is possible to define their anatomical origin in some cases. Complex hallucinations suggest an occipitotemporal locus. Whether they appear in the whole visual field or in the hemifield may prove decisive in determining pathogenesis. A number of characteristics permit a rough classification of these phenomena. Complex hallucinations accompany physical illness and are susceptible to psychodynamic interpretation.

  19. The Levels of Visual Framing

    ERIC Educational Resources Information Center

    Rodriguez, Lulu; Dimitrova, Daniela V.

    2011-01-01

    While framing research has centered mostly on the evaluations of media texts, visual news discourse has remained relatively unexamined. This study surveys the visual framing techniques and methods employed in previous studies and proposes a four-tiered model of identifying and analyzing visual frames: (1) visuals as denotative systems, (2) visuals…

  20. Visual Literacy: A Bibliographic Survey.

    ERIC Educational Resources Information Center

    Jonassen, David; Fork, Donald J.

    Learning through the use of symbols presupposes the ability to think visually, including the perception, structuring, processing, and transformation of visual images. Since children process high volumes of visual messages, especially via television, schools should restructure curricula to include objectives in visual literacy, specifically the…

  1. The Elephants of Visual Literacy.

    ERIC Educational Resources Information Center

    Barley, Steven D., Ed.; Ball, Richard R., Ed.

    Visual literacy, as used here, refers to the skills which let a person understand and use visuals to communicate his messages and interpret the messages of others. Visual literacy should be important in the curriculum because: 1) children pay more attention to movies and television than they do to teachers; 2) the plethora of visual information…

  2. Perception and Attention for Visualization

    ERIC Educational Resources Information Center

    Haroz, Steve

    2013-01-01

    This work examines how a better understanding of visual perception and attention can impact visualization design. In a collection of studies, I explore how different levels of the visual system can measurably affect a variety of visualization metrics. The results show that expert preference, user performance, and even computational performance are…

  3. Visual Tools for Constructing Knowledge.

    ERIC Educational Resources Information Center

    Hyerle, David

    The brain works by making patterns, and this process can be visualized through a medium called visual tools. The book discusses three types of visual tools: brainstorming webs, task-specific organizers, and thinking-process maps. Sample lessons, assessments, and descriptions of visual tools in action are included. Emphasis is placed on the…

  4. Frameless Volume Visualization.

    PubMed

    Petkov, Kaloian; Kaufman, Arie E

    2016-02-01

    We have developed a novel visualization system based on the reconstruction of high resolution and high frame rate images from a multi-tiered stream of samples that are rendered framelessly. This decoupling of the rendering system from the display system is particularly suitable when dealing with very high resolution displays or expensive rendering algorithms, where the latency of generating complete frames may be prohibitively high for interactive applications. In contrast to the traditional frameless rendering technique, we generate the lowest latency samples on the optimal sampling lattice in the 3D domain. This approach avoids many of the artifacts associated with existing sample caching and reprojection methods during interaction that may not be acceptable in many visualization applications. Advanced visualization effects are generated remotely and streamed into the reconstruction system using tiered samples with varying latencies and quality levels. We demonstrate the use of our visualization system for the exploration of volumetric data at stable guaranteed frame rates on high resolution displays, including a 470 megapixel tiled display as part of the Reality Deck immersive visualization facility.
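
    The decoupling of rendering from display can be illustrated with a toy reconstruction from timestamped point samples, in which each pixel keeps its newest sample and fades as it ages. This is only a sketch of the general idea, not the paper's multi-tiered, lattice-based algorithm:

```python
# Toy frameless reconstruction: ingest asynchronously rendered point samples and
# rebuild an image on demand, down-weighting samples as they age.
import numpy as np

H, W = 4, 6
values = np.zeros((H, W))
timestamps = np.full((H, W), -np.inf)

def ingest(sample_stream):
    """sample_stream: iterable of (row, col, value, t) produced by the renderer."""
    for r, c, v, t in sample_stream:
        if t > timestamps[r, c]:             # keep only the newest sample per pixel
            values[r, c] = v
            timestamps[r, c] = t

def reconstruct(now, half_life=0.5):
    """Blend each pixel toward zero as its sample ages (exponential decay)."""
    age = np.clip(now - timestamps, 0.0, None)
    weight = np.where(np.isfinite(age), 0.5 ** (age / half_life), 0.0)
    return weight * values

ingest([(0, 0, 1.0, 0.0), (1, 2, 0.8, 0.3), (0, 0, 0.6, 0.4)])
print(reconstruct(now=0.5))
```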

  5. Learning Science Through Visualization

    NASA Technical Reports Server (NTRS)

    Chaudhury, S. Raj

    2005-01-01

    In the context of an introductory physical science course for non-science majors, I have been trying to understand how scientific visualizations of natural phenomena can constructively impact student learning. I have also necessarily been concerned with the instructional and assessment approaches that need to be considered when focusing on learning science through visually rich information sources. The overall project can be broken down into three distinct segments : (i) comparing students' abilities to demonstrate proportional reasoning competency on visual and verbal tasks (ii) decoding and deconstructing visualizations of an object falling under gravity (iii) the role of directed instruction to elicit alternate, valid scientific visualizations of the structure of the solar system. Evidence of student learning was collected in multiple forms for this project - quantitative analysis of student performance on written, graded assessments (tests and quizzes); qualitative analysis of videos of student 'think aloud' sessions. The results indicate that there are significant barriers for non-science majors to succeed in mastering the content of science courses, but with informed approaches to instruction and assessment, these barriers can be overcome.

  6. Camouflage and visual perception

    PubMed Central

    Troscianko, Tom; Benton, Christopher P.; Lovell, P. George; Tolhurst, David J.; Pizlo, Zygmunt

    2008-01-01

    How does an animal conceal itself from visual detection by other animals? This review paper seeks to identify general principles that may apply in this broad area. It considers mechanisms of visual encoding, of grouping and object encoding, and of search. In most cases, the evidence base comes from studies of humans or species whose vision approximates to that of humans. The effort is hampered by a relatively sparse literature on visual function in natural environments and with complex foraging tasks. However, some general constraints emerge as being potentially powerful principles in understanding concealment—a ‘constraint’ here means a set of simplifying assumptions. Strategies that disrupt the unambiguous encoding of discontinuities of intensity (edges), and of other key visual attributes, such as motion, are key here. Similar strategies may also defeat grouping and object-encoding mechanisms. Finally, the paper considers how we may understand the processes of search for complex targets in complex scenes. The aim is to provide a number of pointers towards issues, which may be of assistance in understanding camouflage and concealment, particularly with reference to how visual systems can detect the shape of complex, concealed objects. PMID:18990671

  7. Appraising the role of visual threat in speeded detection and classification tasks

    PubMed Central

    Yue, Yue; Quinlan, Philip T.

    2015-01-01

    This research examines the speeded detection and, separately, classification of photographic images of animals. In the initial experiments each display contained various images of animals and, in the detection task, participants responded whether a display contained only images of birds or also included an oddball target image of a cat or dog. In the classification search task, a target was always present and participants classified this as an image of a cat or a dog. Half of the target images depicted the animal in a non-threatening state and the remaining half depicted the animal in a threatening state. A complex pattern of effects emerged showing some evidence of more efficient detection of a threatening than non-threatening target. No corresponding pattern emerged in the data for the classification task. Next, the tasks were repeated with the stimuli more carefully matched in terms of general pose and salience of facial features. Now the effects in the detection task were reduced but more consistent than before. Threatening targets were more readily detected than non-threatening targets. In addition, non-threatening targets were more readily classified than threatening targets. The nature of these effects appears to reflect decisional/response mechanisms and not search processes. The performance benefit for the non-threatening images was replicated in a final classification task in which, on each trial, only a single peripheral image was presented. The results demonstrate that a number of different affective and perceptual factors can influence performance in speeded search tasks, and these may well be confounded with the variation in threat content of the experimental stimuli. The evidence for the automatic detection of visual threat remains elusive. PMID:26136696

  8. Deficits in Letter-Speech Sound Associations but Intact Visual Conflict Processing in Dyslexia: Results from a Novel ERP-Paradigm

    PubMed Central

    Bakos, Sarolta; Landerl, Karin; Bartling, Jürgen; Schulte-Körne, Gerd; Moll, Kristina

    2017-01-01

    The reading and spelling deficits characteristic of developmental dyslexia (dyslexia) have been related to problems in phonological processing and in learning associations between letters and speech-sounds. Even when children with dyslexia have learned the letters and their corresponding speech sounds, letter-speech sound associations might still be less automatized compared to children with age-adequate literacy skills. In order to examine automaticity in letter-speech sound associations and to overcome some of the disadvantages associated with the frequently used visual-auditory oddball paradigm, we developed a novel electrophysiological letter-speech sound interference paradigm. This letter-speech sound interference paradigm was applied in a group of 9-year-old children with dyslexia (n = 36) and a group of typically developing (TD) children of similar age (n = 37). Participants had to indicate whether two letters look visually the same. In the incongruent condition (e.g., the letter pair A-a) there was a conflict between the visual information and the automatically activated phonological information; although the visual appearance of the two letters is different, they are both associated with the same speech sound. This conflict resulted in slower response times (RTs) in the incongruent than in the congruent (e.g., the letter pair A-e) condition. Furthermore, in the TD control group, the conflict resulted in fast and strong event-related potential (ERP) effects reflected in less negative N1 amplitudes and more positive conflict slow potentials (cSP) in the incongruent than in the congruent condition. However, the dyslexic group did not show any conflict-related ERP effects, implying that letter-speech sound associations are less automatized in this group. Furthermore, we examined general visual conflict processing in a control visual interference task, using false fonts. The conflict in this experiment was based purely on the visual similarity of the presented

  9. Deficits in Letter-Speech Sound Associations but Intact Visual Conflict Processing in Dyslexia: Results from a Novel ERP-Paradigm.

    PubMed

    Bakos, Sarolta; Landerl, Karin; Bartling, Jürgen; Schulte-Körne, Gerd; Moll, Kristina

    2017-01-01

    The reading and spelling deficits characteristic of developmental dyslexia (dyslexia) have been related to problems in phonological processing and in learning associations between letters and speech-sounds. Even when children with dyslexia have learned the letters and their corresponding speech sounds, letter-speech sound associations might still be less automatized compared to children with age-adequate literacy skills. In order to examine automaticity in letter-speech sound associations and to overcome some of the disadvantages associated with the frequently used visual-auditory oddball paradigm, we developed a novel electrophysiological letter-speech sound interference paradigm. This letter-speech sound interference paradigm was applied in a group of 9-year-old children with dyslexia (n = 36) and a group of typically developing (TD) children of similar age (n = 37). Participants had to indicate whether two letters look visually the same. In the incongruent condition (e.g., the letter pair A-a) there was a conflict between the visual information and the automatically activated phonological information; although the visual appearance of the two letters is different, they are both associated with the same speech sound. This conflict resulted in slower response times (RTs) in the incongruent than in the congruent (e.g., the letter pair A-e) condition. Furthermore, in the TD control group, the conflict resulted in fast and strong event-related potential (ERP) effects reflected in less negative N1 amplitudes and more positive conflict slow potentials (cSP) in the incongruent than in the congruent condition. However, the dyslexic group did not show any conflict-related ERP effects, implying that letter-speech sound associations are less automatized in this group. Furthermore, we examined general visual conflict processing in a control visual interference task, using false fonts. The conflict in this experiment was based purely on the visual similarity of the presented

  10. Neural Mechanisms of Bottom-Up Selection During Visual Search

    DTIC Science & Technology

    2007-11-02

    ...the response to the distractor was suppressed, and the response to the target grew. ... On the remaining trials the target swapped positions with a distractor after a short delay, called the target step delay, and monkeys were rewarded for ... in the FEF of monkeys trained to shift gaze to an oddball target stimulus among identical distractor stimuli (e.g. a red target among green distractors).

  11. Data Visualization in Sociology

    PubMed Central

    Healy, Kieran; Moody, James

    2014-01-01

    Visualizing data is central to social scientific work. Despite a promising early beginning, sociology has lagged in the use of visual tools. We review the history and current state of visualization in sociology. Using examples throughout, we discuss recent developments in ways of seeing raw data and presenting the results of statistical modeling. We make a general distinction between those methods and tools designed to help explore datasets, and those designed to help present results to others. We argue that recent advances should be seen as part of a broader shift towards easier sharing of the code and data both between researchers and with wider publics, and encourage practitioners and publishers to work toward a higher and more consistent standard for the graphical display of sociological insights. PMID:25342872

  12. A feast of visualization

    NASA Astrophysics Data System (ADS)

    2008-12-01

    Strength through structure The visualization and assessment of inner human bone structures can provide better predictions of fracture risk due to osteoporosis. Using micro-computed tomography (µCT), Christoph Räth from the Max Planck Institute for Extraterrestrial Physics and colleagues based in Munich, Vienna and Salzburg have shown how complex lattice-shaped bone structures can be visualized. The structures were quantified by calculating certain "texture measures" that yield new information about the stability of the bone. A 3D visualization showing the variation with orientation of one of the texture measures for four different bone specimens (from left to right) is shown above. Such analyses may help us to improve our understanding of disease and drug-induced changes in bone structure (C Räth et al. 2008 New J. Phys. 10 125010).

  13. Data Visualization in Sociology.

    PubMed

    Healy, Kieran; Moody, James

    2014-07-01

    Visualizing data is central to social scientific work. Despite a promising early beginning, sociology has lagged in the use of visual tools. We review the history and current state of visualization in sociology. Using examples throughout, we discuss recent developments in ways of seeing raw data and presenting the results of statistical modeling. We make a general distinction between those methods and tools designed to help explore datasets, and those designed to help present results to others. We argue that recent advances should be seen as part of a broader shift towards easier sharing of the code and data both between researchers and with wider publics, and encourage practitioners and publishers to work toward a higher and more consistent standard for the graphical display of sociological insights.

  14. Multimodal brain visualization

    NASA Astrophysics Data System (ADS)

    Nadeem, Saad; Kaufman, Arie

    2016-03-01

    Current connectivity diagrams of human brain image data are either overly complex or overly simplistic. In this work we introduce simple yet accurate interactive visual representations of multiple brain image structures and the connectivity among them. We map cortical surfaces extracted from human brain magnetic resonance imaging (MRI) data onto 2D surfaces that preserve shape (angle), extent (area), and spatial (neighborhood) information for 2D (circular disk) and 3D (spherical) mapping, split these surfaces into separate patches, and cluster functional and diffusion tractography MRI connections between pairs of these patches. The resulting visualizations are easier to compute on and more visually intuitive to interact with than the original data, and facilitate simultaneous exploration of multiple data sets, modalities, and statistical maps.

  15. Visual Inspection of Surfaces

    NASA Technical Reports Server (NTRS)

    Hughes, David; Perez, Xavier

    2007-01-01

    This presentation evaluates the parameters that affect visual inspection of cleanliness. Factors tested include surface reflectance, surface roughness, size of the largest particle, exposure time, inspector, and distance from the sample surface. It is concluded that distance predictions were not highly accurate, in part because the distance at which contamination is seen may depend on more variables than those tested. Most parameter estimates had confidence of 95% or better, except for exposure and reflectance. Additionally, the distance at which a surface is visibly contaminated decreases with increasing reflectance, roughness, and exposure, and increases with the size of the largest particle. Results were only slightly affected by the observer.

  16. Visual exploration of images

    NASA Astrophysics Data System (ADS)

    Suaste-Gomez, Ernesto; Leybon, Jaime I.; Rodriguez, D.

    1998-07-01

    The visual scanpath has been an important tool in neuro-ophthalmic and psychological studies, because it can be used to assess pathologies affecting visual perception of color or black-and-white images, color blindness, and similar conditions. The tool has also found a wide range of applications, such as marketing. The scanpath over a specific picture shows the observer's interest in color, shapes, letter size, and so on; even when the picture appears among a group of images, the technique has proven helpful for gauging people's interest in a specific advertisement.

  17. Reduplication of visual stimuli.

    PubMed

    Young, A W; Hellawell, D J; Wright, S; Ellis, H D

    1994-01-01

    Investigation of P.T., a man who experienced reduplicative delusions, revealed significant impairments on tests of recognition memory for faces and understanding of emotional facial expressions. On formal tests of his recognition abilities, P.T. showed reduplication to familiar faces, buildings, and written names, but not to familiar voices. Reduplication may therefore have been a genuinely visual problem in P.T.'s case, since it was not found to auditory stimuli. This is consistent with hypotheses which propose that the basis of reduplication can lie in part in malfunction of the visual system.

  18. USGS Scientific Visualization Laboratory

    USGS Publications Warehouse

    1995-01-01

    The U.S. Geological Survey's (USGS) Scientific Visualization Laboratory at the National Center in Reston, Va., provides a central facility where USGS employees can use state-of-the-art equipment for projects ranging from presentation graphics preparation to complex visual representations of scientific data. Equipment including color printers, black-and-white and color scanners, film recorders, video equipment, and DOS, Apple Macintosh, and UNIX platforms with software are available for both technical and nontechnical users. The laboratory staff provides assistance and demonstrations in the use of the hardware and software products.

  19. Visual Inference Programming

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin; Timucin, Dogan; Rabbette, Maura; Curry, Charles; Allan, Mark; Lvov, Nikolay; Clanton, Sam; Pilewskie, Peter

    2002-01-01

    The goal of visual inference programming is to develop a software framework for data analysis and to provide machine learning algorithms for interactive data exploration and visualization. The topics include: 1) Intelligent Data Understanding (IDU) framework; 2) Challenge problems; 3) What's new here; 4) Framework features; 5) Wiring diagram; 6) Generated script; 7) Results of script; 8) Initial algorithms; 9) Independent Component Analysis for instrument diagnosis; 10) Output sensory mapping virtual joystick; 11) Output sensory mapping typing; 12) Closed-loop feedback mu-rhythm control; 13) Closed-loop training; 14) Data sources; and 15) Algorithms. This paper is in viewgraph form.

  20. CMS tracker visualization tools

    NASA Astrophysics Data System (ADS)

    Mennea, M. S.; Osborne, I.; Regano, A.; Zito, G.

    2005-08-01

    This document will review the design considerations, implementations, and performance of the CMS Tracker Visualization tools. In view of the great complexity of this sub-detector (more than 50 million channels organized in 16,540 modules, each of which is a complete detector), the standard CMS visualization tools (IGUANA and IGUANACMS), which provide basic 3D capabilities and integration within the CMS framework, respectively, have been complemented with additional 2D graphics objects. Based on the experience acquired using this software to debug and understand both hardware and software during the construction phase, we propose possible future improvements to cope with online monitoring and event analysis during data taking.

  1. Impaired visual competition in patients with homonymous visual field defects.

    PubMed

    Geuzebroek, A C; van den Berg, A V

    2017-03-01

    Intense visual training can lead to partial recovery of visual field defects caused by lesions of the primary visual cortex. However, the standard visual detection and discrimination tasks used to assess this recovery process tend to ignore the complexity of the natural visual environment, where multiple stimuli continuously interact. Visual competition is an essential component of natural search tasks and of detecting unexpected events. Our study focused on visual decision-making and on the extent to which the recovered visual field can compete for attention with the 'intact' visual field. Nine patients with visual field defects who had previously received visual discrimination training were compared to healthy age-matched controls using a saccade target-selection paradigm, in which participants actively make a saccade towards the brighter of two flashed targets. To further investigate the nature of competition (feed-forward or feedback inhibition), we presented two flashes that reversed their intensity difference during the flash. Both competition between the recovered and intact visual fields and competition within the intact visual field were assessed. Healthy controls showed the expected primacy effect; they preferred the initially brighter target. Surprisingly, choice behaviour, even in the patients' supposedly 'intact' visual field, was significantly different from that of the control group for all but one patient. In that patient, competition was comparable to the controls. All other patients showed a significantly reduced preference for the brighter target, but still showed a small hint of primacy in the reversal conditions. The present results indicate that patients and controls have similar decision-making mechanisms but patients' choices are affected by a strong tendency to guess, even in the intact visual field. This tendency likely reveals slower integration of information, paired with a lower threshold. Current rehabilitation should therefore also

  2. Oddball Cases of Fluid Mechanics: Cobwebs and Pharaohs

    ERIC Educational Resources Information Center

    Lafrance, Pierre

    1975-01-01

    Explains the macroscopic properties of a number of systems as the averaged-out behavior of large numbers of particles. The approach is applied to a model of nuclear fission, rotational velocity in a galaxy, the nature of the rings of Saturn, oscillations of the earth, drops on a spider web, and the shape of the ruined Meidum pyramid. (GH)

  3. Oddball Earths in Systems with Super-Earths

    NASA Astrophysics Data System (ADS)

    Asphaug, Erik

    2009-09-01

    During terrestrial planet formation most of the colliding matter comes late, in the form of similar-sized planetary bodies colliding at velocities ranging from 1 to a few times their mutual escape velocity (e.g. Wetherill 1985). I consider an edge effect under these conditions, where the next-largest bodies in a hierarchically accreting population (e.g. Earths) grow increasingly exotic as they survive non-accretionary collisions onto the largest bodies (e.g. super-Earths). Collisions between bodies within a factor of several in size, at around v_esc, are extended-source phenomena where the contact timescale equals the gravity timescale, and where for most geometries most of the colliding matter does not intersect. This sets similar-sized collisions, of which the giant impact formation of the Moon may be an example, far apart from point-source cratering impacts. It has been found and confirmed that in similar-sized collisions, hit-and-run is a more common outcome than efficient accretion. If the largest terrestrial planets grow by feeding on the next-largest planets, and if they eventually accrete most of these next-largest planets, then the surviving (unaccreted) next-largest planets are each likely to be survivors of one or more hit-and-run collisions. These events can cause the loss of atmospheres, oceans, crusts and outer mantles, and lead to exotic pressure-release petrology and degassing. This conclusion appears to be relatively scale invariant, provided the random velocities scale to v_esc, supporting the following corollary for solar systems with super-Earths: if Earth-mass planets roamed among super-Earths and survived as bounced-off, unaccreted remnants from late-stage accretion, then Earth-mass worlds will be as stunningly diverse in these solar systems as the asteroids and smallest planets (Moon, Mercury, Mars) are in our own system, and may be devoid of atmospheres and oceans. Supported by the NASA Planetary Geology and Geophysics Program and the NASA Origin of the Solar System Program

  4. The Visualization Management System Approach To Visualization In Scientific Computing

    NASA Astrophysics Data System (ADS)

    Butler, David M.; Pendley, Michael H.

    1989-09-01

    We introduce the visualization management system (ViMS), a new approach to the development of software for visualization in scientific computing (ViSC). The conceptual foundation for a ViMS is an abstract visualization model which specifies a class of geometric objects, the graphic representations of the objects and the operations on both. A ViMS provides a modular implementation of its visualization model. We describe ViMS requirements and a model-independent ViMS architecture. We briefly describe the vector bundle visualization model and the visualization taxonomy it generates. We conclude by summarizing the benefits of the ViMS approach.

  5. Visualization of Traffic Accidents

    NASA Technical Reports Server (NTRS)

    Wang, Jie; Shen, Yuzhong; Khattak, Asad

    2010-01-01

    Traffic accidents have tremendous impact on society. Annually approximately 6.4 million vehicle accidents are reported by police in the US and nearly half of them result in catastrophic injuries. Visualizations of traffic accidents using geographic information systems (GIS) greatly facilitate handling and analysis of traffic accidents in many aspects. Environmental Systems Research Institute (ESRI), Inc. is the world leader in GIS research and development. ArcGIS, a software package developed by ESRI, has the capabilities to display events associated with a road network, such as accident locations, and pavement quality. But when event locations related to a road network are processed, the existing algorithm used by ArcGIS does not utilize all the information related to the routes of the road network and produces erroneous visualization results of event locations. This software bug causes serious problems for applications in which accurate location information is critical for emergency responses, such as traffic accidents. This paper aims to address this problem and proposes an improved method that utilizes all relevant information of traffic accidents, namely, route number, direction, and mile post, and extracts correct event locations for accurate traffic accident visualization and analysis. The proposed method generates a new shape file for traffic accidents and displays them on top of the existing road network in ArcGIS. Visualization of traffic accidents along Hampton Roads Bridge Tunnel is included to demonstrate the effectiveness of the proposed method.
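
    What the abstract describes amounts to linear referencing: turning a recorded route measure (route number, direction, mile post) into a map coordinate. A minimal sketch of one such interpolation step is shown below; the array layout, the locate_on_route helper, and the example route are assumptions for illustration and are not ESRI's ArcGIS API or the authors' implementation, which also uses route number and direction to select the correct route first.

      import numpy as np

      def locate_on_route(route_xy, mileposts, accident_milepost):
          """Interpolate an event's (x, y) from its milepost along one route polyline.

          route_xy          : (N, 2) polyline vertices, in route order
          mileposts         : (N,) milepost measure at each vertex (increasing)
          accident_milepost : milepost recorded for the accident
          """
          mp = np.clip(accident_milepost, mileposts[0], mileposts[-1])
          x = np.interp(mp, mileposts, route_xy[:, 0])
          y = np.interp(mp, mileposts, route_xy[:, 1])
          return x, y

      # Usage with a made-up three-vertex route measured from milepost 10.0 to 11.5.
      route = np.array([[0.0, 0.0], [1000.0, 0.0], [1000.0, 1500.0]])
      measures = np.array([10.0, 10.6, 11.5])
      print(locate_on_route(route, measures, accident_milepost=11.0))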

  6. Aging and Visual Impairment.

    ERIC Educational Resources Information Center

    Morse, A. R.; And Others

    1987-01-01

    Eye diseases of the aged include diabetic retinopathy, senile cataracts, senile macular degeneration, and glaucoma. Environmental modifications such as better levels of illumination and reduction of glare can enhance an individual's ability to function. Programs to screen and treat visual problems in elderly persons are called for. (Author/JDD)

  7. Visualizing Dispersion Interactions

    ERIC Educational Resources Information Center

    Gottschalk, Elinor; Venkataraman, Bhawani

    2014-01-01

    An animation and an accompanying activity have been developed to help students visualize how dispersion interactions arise. The animation uses the gecko's ability to walk on vertical surfaces to illustrate how dispersion interactions play a role in macroscale outcomes. Assessment of student learning reveals that students were able to develop…

  8. The Visual Identity Project

    ERIC Educational Resources Information Center

    Tennant-Gadd, Laurie; Sansone, Kristina Lamour

    2008-01-01

    Identity is the focus of the middle-school visual arts program at Cambridge Friends School (CFS) in Cambridge, Massachusetts. Sixth graders enter the middle school and design a personal logo as their first major project in the art studio. The logo becomes a way for students to introduce themselves to their teachers and to represent who they are…

  9. Extreme Scale Visual Analytics

    SciTech Connect

    Wong, Pak C.; Shen, Han-Wei; Pascucci, Valerio

    2012-05-08

    Extreme-scale visual analytics (VA) is about applying VA to extreme-scale data. The articles in this special issue examine advances related to extreme-scale VA problems, their analytical and computational challenges, and their real-world applications.

  10. Creating a Visualization Powerwall

    NASA Technical Reports Server (NTRS)

    Miller, B. H.; Lambert, J.; Zamora, K.

    1996-01-01

    From Introduction: This paper presents the issues of constructing a Visualization Powerwall. For each hardware component, the requirements, options, and our solution are presented. This is followed by a short description of each pilot project. In the summary, current obstacles and options discovered along the way are presented.

  11. Activities: Visualization, Estimation, Computation.

    ERIC Educational Resources Information Center

    Maletsky, Evan M.

    1982-01-01

    The material is designed to help students build a cone model, visualize how its dimensions change as its shape changes, estimate maximum volume position, and develop problem-solving skills. Worksheets designed for duplication for classroom use are included. Part of the activity involves student analysis of a BASIC program. (MP)

  12. [Blindness and visual rehabilitation].

    PubMed

    Matonti, F; Roux, S; Denis, D; Picaud, S; Chavane, F

    2015-02-01

    Blindness and visual impairment are a major public health problem all over the world and in all societies. A large amount of basic science and clinical research aims to rehabilitate patients and help them become more independent. Various methods are explored from cell and molecular therapy to prosthetic interfaces. We review the various treatment alternatives, describing their results and their limitations.

  13. Visualizing Humans by Computer.

    ERIC Educational Resources Information Center

    Magnenat-Thalmann, Nadia

    1992-01-01

    Presents an overview of the problems and techniques involved in visualizing humans in a three-dimensional scene. Topics discussed include human shape modeling, including shape creation and deformation; human motion control, including facial animation and interaction with synthetic actors; and human rendering and clothing, including textures and…

  14. Visualization of Metadata.

    ERIC Educational Resources Information Center

    Beagle, Donald

    1999-01-01

    Discusses the potential for visualization of metadata and metadata-based surrogates, including a command interface for metadata viewing, site mapping and data aggregation tools, dynamic derivation of surrogates, and a reintroduction of transient hypergraphs from the tradition of cocitation networking. Addresses digital library research into…

  15. Visual Memory at Birth.

    ERIC Educational Resources Information Center

    Slater, Alan; And Others

    1982-01-01

    Explored newborn babies' capacity for forming visual memories. Used a habituation procedure that accommodated individual differences by allowing each infant to control the time course of habituation trials. Found significant novelty preference, providing strong evidence that recognition memory can be reliably demonstrated from birth. (Author/JAC)

  16. Curriculum: Managed Visual Reality.

    ERIC Educational Resources Information Center

    Gueulette, David G.

    This paper explores the association between the symbolized and the actualized, beginning with the prehistoric notion of a "reality double," in which no practical difference exists between pictorial representations, visual symbols, and real-life events and situations. Alchemists of the Middle Ages, with their paradoxical vision of the…

  17. Perception, Cognition, and Visualization.

    ERIC Educational Resources Information Center

    Arnheim, Rudolf

    1991-01-01

    Describes how pictures can combine aspects of naturalistic representation with more formal shapes to enhance cognitive understanding. These "diagrammatic" shapes derive from elementary geometrical forms and thereby bestow visual concreteness on the concepts conveyed by the pictures. Leonardo da Vinci's anatomical drawings are used as examples…

  18. Robotic Intelligence Kernel: Visualization

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel-Visualization is the software that supports the user interface. It uses the RIK-C software to communicate information to and from the robot. The RIK-V illustrates the data in a 3D display and provides an operating picture wherein the user can task the robot.

  19. Visual Screening: A Procedure.

    ERIC Educational Resources Information Center

    Williams, Robert T.

    Vision is a complex process involving three phases: physical (acuity), physiological (integrative), and psychological (perceptual). Although these phases cannot be considered discrete, they provide the basis for the visual screening procedure used by the Reading Services of Colorado State University and described in this document. Ten tests are…

  20. Sounds Exaggerate Visual Shape

    ERIC Educational Resources Information Center

    Sweeny, Timothy D.; Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes…

  1. Solar System Visualizations

    NASA Technical Reports Server (NTRS)

    Brown, Alison M.

    2005-01-01

    Solar System Visualization products enable scientists to compare models and measurements in new ways that enhance the scientific discovery process, enhance the information content and understanding of the science results for both science colleagues and the public, and create visually appealing and intellectually stimulating visualization products. Missions supported include MER, MRO, and Cassini. Image products produced include pan and zoom animations of large mosaics to reveal the details of surface features and topography, animations into registered multi-resolution mosaics to provide context for microscopic images, 3D anaglyphs from left and right stereo pairs, and screen captures from video footage. Specific products include a three-part context animation of the Cassini Enceladus encounter highlighting images from 350 down to 4 meters per pixel resolution; Mars Reconnaissance Orbiter screen captures illustrating various instruments during assembly and testing at the Payload Hazardous Servicing Facility at Kennedy Space Center; and an animation of Mars Exploration Rover Opportunity's 'Rub al Khali' panorama where the rover was stuck in the deep fine sand for more than a month. This task creates new visualization products that enable new science results and enhance the public's understanding of the Solar System and NASA's missions of exploration.
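
    One of the listed product types, a 3D anaglyph built from a left/right stereo pair, comes down to a simple channel recombination: the red channel from the left-eye image and the green and blue channels from the right-eye image. The sketch below is that generic recipe under stated assumptions (same-sized RGB arrays in [0, 1], synthetic stand-in images); it is not the pipeline used for the mission products.

      import numpy as np

      def red_cyan_anaglyph(left_rgb, right_rgb):
          """Red-cyan anaglyph: red from the left eye, green and blue from the right eye."""
          out = np.empty_like(left_rgb)
          out[..., 0] = left_rgb[..., 0]     # red   <- left image
          out[..., 1] = right_rgb[..., 1]    # green <- right image
          out[..., 2] = right_rgb[..., 2]    # blue  <- right image
          return out

      # Usage with two synthetic images standing in for a stereo pair.
      rng = np.random.default_rng(0)
      left = rng.random((128, 128, 3))
      right = np.roll(left, 3, axis=1)       # fake horizontal parallax
      anaglyph = red_cyan_anaglyph(left, right)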

  2. NCI Visuals Online

    Cancer.gov

    NCI Visuals Online contains images from the collections of the National Cancer Institute's Office of Communications and Public Liaison, including general biomedical and science-related images, cancer-specific scientific and patient care-related images, and portraits of directors and staff of the National Cancer Institute.

  3. Visual Environments for CFD Research

    NASA Technical Reports Server (NTRS)

    Watson, Val; George, Michael W. (Technical Monitor)

    1994-01-01

    This viewgraph presentation gives an overview of the visual environments for computational fluid dynamics (CFD) research. It includes details on critical needs from the future computer environment, features needed to attain this environment, prospects for changes in and the impact of the visualization revolution on the human-computer interface, human processing capabilities, limits of personal environment and the extension of that environment with computers. Information is given on the need for more 'visual' thinking (including instances of visual thinking), an evaluation of the alternate approaches for and levels of interactive computer graphics, a visual analysis of computational fluid dynamics, and an analysis of visualization software.

  4. Visual deficits in anisometropia

    PubMed Central

    Levi, Dennis M.; McKee, Suzanne P.; Movshon, J. Anthony

    2010-01-01

    Amblyopia is usually associated with the presence of anisometropia, strabismus or both early in life. We set out to explore quantitative relationships between the degree of anisometropia and the loss of visual function, and to examine how the presence of strabismus affects visual function in observers with anisometropia. We measured optotype acuity, Pelli-Robson contrast sensitivity and stereoacuity in 84 persons with anisometropia and compared their results with those of 27 persons with high bilateral refractive error (isoametropia) and 101 persons with both strabismus and anisometropia. All subjects participated in a large-scale study of amblyopia (McKee, Levi & Movshon, 2003). We found no consistent visual abnormalities in the strong eye, and therefore report only on vision in the weaker eye, defined as the eye with lower acuity. LogMAR acuity falls off markedly with increasing anisometropia in non-strabismic anisometropes, while contrast sensitivity is much less affected. Acuity degrades rapidly with increases in both hyperopic and myopic anisometropia, but the risk of amblyopia is about twice as great in hyperopic as in myopic anisometropes of comparable refractive imbalance. For a given degree of refractive imbalance, strabismic anisometropes perform considerably worse than anisometropes without strabismus – visual acuity for strabismics was on average 2.5 times worse than for non-strabismics with similar anisometropia. For observers with equal refractive error in the two eyes there is very little change in acuity or sensitivity with increasing (bilateral) refractive error, except for one extreme individual (bilateral refractive error of -15 D). Most pure anisometropes with interocular differences less than 4 D retain some stereopsis, and the degree is correlated with the acuity of the weak eye. We conclude that even modest interocular differences in refractive error can influence visual function. PMID:20932989

  5. Visualization by Demonstration: An Interaction Paradigm for Visual Data Exploration.

    PubMed

    Saket, Bahador; Kim, Hannah; Brown, Eli T; Endert, Alex

    2017-01-01

    Although data visualization tools continue to improve, during the data exploration process many of them require users to manually specify visualization techniques, mappings, and parameters. In response, we present the Visualization by Demonstration paradigm, a novel interaction method for visual data exploration. A system which adopts this paradigm allows users to provide visual demonstrations of incremental changes to the visual representation. The system then recommends potential transformations (Visual Representation, Data Mapping, Axes, and View Specification transformations) from the given demonstrations. The user and the system continue to collaborate, incrementally producing more demonstrations and refining the transformations, until the most effective possible visualization is created. As a proof of concept, we present VisExemplar, a mixed-initiative prototype that allows users to explore their data by recommending appropriate transformations in response to the given demonstrations.

  6. Infants' Visual Localization of Visual and Auditory Targets.

    ERIC Educational Resources Information Center

    Bechtold, A. Gordon; And Others

    This study is an investigation of 2-month-old infants' abilities to visually localize visual and auditory peripheral stimuli. Each subject (N=40) was presented with 50 trials; 25 of these visual and 25 auditory. The infant was placed in a semi-upright infant seat positioned 122 cm from the center speaker of an arc formed by five loudspeakers. At…

  7. Electrocorticography Reveals Enhanced Visual Cortex Responses to Visual Speech.

    PubMed

    Schepers, Inga M; Yoshor, Daniel; Beauchamp, Michael S

    2015-11-01

    Human speech contains both auditory and visual components, processed by their respective sensory cortices. We test a simple model in which task-relevant speech information is enhanced during cortical processing. Visual speech is most important when the auditory component is uninformative. Therefore, the model predicts that visual cortex responses should be enhanced to visual-only (V) speech compared with audiovisual (AV) speech. We recorded neuronal activity as patients perceived auditory-only (A), V, and AV speech. Visual cortex showed strong increases in high-gamma band power and strong decreases in alpha-band power to V and AV speech. Consistent with the model prediction, gamma-band increases and alpha-band decreases were stronger for V speech. The model predicts that the uninformative nature of the auditory component (not simply its absence) is the critical factor, a prediction we tested in a second experiment in which visual speech was paired with auditory white noise. As predicted, visual speech with auditory noise showed enhanced visual cortex responses relative to AV speech. An examination of the anatomical locus of the effects showed that all visual areas, including primary visual cortex, showed enhanced responses. Visual cortex responses to speech are enhanced under circumstances when visual information is most important for comprehension.

  8. Electrocorticography Reveals Enhanced Visual Cortex Responses to Visual Speech

    PubMed Central

    Schepers, Inga M.; Yoshor, Daniel; Beauchamp, Michael S.

    2015-01-01

    Human speech contains both auditory and visual components, processed by their respective sensory cortices. We test a simple model in which task-relevant speech information is enhanced during cortical processing. Visual speech is most important when the auditory component is uninformative. Therefore, the model predicts that visual cortex responses should be enhanced to visual-only (V) speech compared with audiovisual (AV) speech. We recorded neuronal activity as patients perceived auditory-only (A), V, and AV speech. Visual cortex showed strong increases in high-gamma band power and strong decreases in alpha-band power to V and AV speech. Consistent with the model prediction, gamma-band increases and alpha-band decreases were stronger for V speech. The model predicts that the uninformative nature of the auditory component (not simply its absence) is the critical factor, a prediction we tested in a second experiment in which visual speech was paired with auditory white noise. As predicted, visual speech with auditory noise showed enhanced visual cortex responses relative to AV speech. An examination of the anatomical locus of the effects showed that all visual areas, including primary visual cortex, showed enhanced responses. Visual cortex responses to speech are enhanced under circumstances when visual information is most important for comprehension. PMID:24904069

  9. Human factors in visualization research.

    PubMed

    Tory, Melanie; Möller, Torsten

    2004-01-01

    Visualization can provide valuable assistance for data analysis and decision making tasks. However, how people perceive and interact with a visualization tool can strongly influence their understanding of the data as well as the system's usefulness. Human factors therefore contribute significantly to the visualization process and should play an important role in the design and evaluation of visualization tools. Several research initiatives have begun to explore human factors in visualization, particularly in perception-based design. Nonetheless, visualization work involving human factors is in its infancy, and many potentially promising areas have yet to be explored. Therefore, this paper aims to 1) review known methodology for doing human factors research, with specific emphasis on visualization, 2) review current human factors research in visualization to provide a basis for future investigation, and 3) identify promising areas for future research.

  10. Visual hallucinations: Charles Bonnet syndrome.

    PubMed

    Jan, Tiffany; Del Castillo, Jorge

    2012-12-01

    The following is a case of Charles Bonnet syndrome in an 86-year-old woman who presented with visual hallucinations. The differential diagnosis of visual hallucinations is broad and emergency physicians should be knowledgeable of the possible etiologies.

  11. Comparison of tactile, auditory, and visual modality for brain-computer interface use: a case study with a patient in the locked-in state.

    PubMed

    Kaufmann, Tobias; Holz, Elisa M; Kübler, Andrea

    2013-01-01

    This paper describes a case study with a patient in the classic locked-in state, who currently has no means of independent communication. Following a user-centered approach, we investigated event-related potentials (ERP) elicited in different modalities for use in brain-computer interface (BCI) systems. Such systems could provide her with an alternative communication channel. To investigate the most viable modality for achieving BCI based communication, classic oddball paradigms (1 rare and 1 frequent stimulus, ratio 1:5) in the visual, auditory and tactile modality were conducted (2 runs per modality). Classifiers were built on one run and tested offline on another run (and vice versa). In these paradigms, the tactile modality was clearly superior to other modalities, displaying high offline accuracy even when classification was performed on single trials only. Consequently, we tested the tactile paradigm online and the patient successfully selected targets without any error. Furthermore, we investigated use of the visual or tactile modality for different BCI systems with more than two selection options. In the visual modality, several BCI paradigms were tested offline. Neither matrix-based nor so-called gaze-independent paradigms constituted a means of control. These results may thus question the gaze-independence of current gaze-independent approaches to BCI. A tactile four-choice BCI resulted in high offline classification accuracies. Yet, online use raised various issues. Although performance was clearly above chance, practical daily life use appeared unlikely when compared to other communication approaches (e.g., partner scanning). Our results emphasize the need for user-centered design in BCI development including identification of the best stimulus modality for a particular user. Finally, the paper discusses feasibility of EEG-based BCI systems for patients in classic locked-in state and compares BCI to other AT solutions that we also tested during the
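
    The cross-run offline evaluation described here (build a classifier on one run, test it on the other, and vice versa) can be sketched as follows with synthetic oddball epochs and scikit-learn's shrinkage LDA. The epoch shapes, the Gaussian "P300-like" bump, and the windowed-mean features are assumptions made for illustration; they are not the authors' recording setup or classification pipeline.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(1)

      def synth_run(n_trials=120, n_channels=8, n_samples=100, oddball_ratio=1 / 6):
          """Synthetic oddball run: rare target trials get a small 'P300-like' bump added."""
          y = (rng.random(n_trials) < oddball_ratio).astype(int)
          X = rng.normal(0.0, 1.0, (n_trials, n_channels, n_samples))
          bump = np.exp(-0.5 * ((np.arange(n_samples) - 60) / 8.0) ** 2)   # peak late in the epoch
          X[y == 1] += 1.5 * bump
          return X, y

      def features(X):
          """Crude ERP features: mean amplitude in consecutive time windows, per channel."""
          n_trials, n_channels, n_samples = X.shape
          windows = X.reshape(n_trials, n_channels, 10, n_samples // 10).mean(axis=-1)
          return windows.reshape(n_trials, -1)

      # Train on run 1, test offline on run 2, and vice versa.
      run1, run2 = synth_run(), synth_run()
      for (Xtr, ytr), (Xte, yte) in [(run1, run2), (run2, run1)]:
          clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
          clf.fit(features(Xtr), ytr)
          print("cross-run accuracy:", clf.score(features(Xte), yte))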

  12. Processing Visual Images

    SciTech Connect

    Litke, Alan

    2006-03-27

    The back of the eye is lined by an extraordinary biological pixel detector, the retina. This neural network is able to extract vital information about the external visual world, and transmit this information in a timely manner to the brain. In this talk, Professor Litke will describe a system that has been implemented to study how the retina processes and encodes dynamic visual images. Based on techniques and expertise acquired in the development of silicon microstrip detectors for high energy physics experiments, this system can simultaneously record the extracellular electrical activity of hundreds of retinal output neurons. After presenting first results obtained with this system, Professor Litke will describe additional applications of this incredible technology.

  13. Hypermedia and visual technology

    NASA Technical Reports Server (NTRS)

    Walker, Lloyd

    1990-01-01

    Applications of a codified professional practice that uses visual representations of the thoughts and ideas of a working group are reported in order to improve productivity, problem solving, and innovation. This visual technology process was developed under the auspices of General Foods as part of a multi-year study. The study resulted in the validation of this professional service as a way to use art and design to facilitate productivity and innovation and to define new opportunities. It was also used by NASA for planning Lunar/Mars exploration and by other companies for general business and advanced strategic planning, developing new product concepts, and litigation support. General Foods has continued to use the service for packaging innovation studies.

  14. VisualPhysics.org

    NASA Astrophysics Data System (ADS)

    Sweetser, Douglas

    2009-05-01

    This new web site is designed to facilitate forming a community whose animations are every bit as sophisticated as the math encountered in any realm of physics, even quantum mechanics. All animations involve events embedded in spacetime. Events are treated mathematically as quaternions, with time as a scalar, and space as the 3-vector. A collection of quaternions can be sorted by time, put in the right frame for a finite animation, then with ray tracing software, drawn in space with shadows. The groups U(1), SU(2), U(1)xSU(2), and SU(3) all have new visual representations. In the talk I will present a visual explanation of the delayed-choice experiment of quantum mechanics.
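
    A small sketch of the data handling described above: each event is stored quaternion-style, with time as the scalar part and space as the 3-vector part, and a collection is sorted by time and binned into frames for an animation. The EventQuaternion class, the frame rate, and the binning scheme are illustrative assumptions, not the site's actual tooling.

      from dataclasses import dataclass

      @dataclass
      class EventQuaternion:
          """A spacetime event stored quaternion-style: scalar time, 3-vector space."""
          t: float
          x: float
          y: float
          z: float

      def frames_by_time(events, fps=24.0):
          """Sort events by their time component and bin them into animation frames."""
          frames = {}
          for e in sorted(events, key=lambda e: e.t):
              frames.setdefault(int(e.t * fps), []).append(e)
          return frames          # frame index -> events to draw (e.g. hand off to a ray tracer)

      events = [EventQuaternion(0.30, 1, 0, 0),
                EventQuaternion(0.05, 0, 1, 0),
                EventQuaternion(0.31, 0, 0, 1)]
      print(frames_by_time(events))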

  15. Exploring ensemble visualization

    NASA Astrophysics Data System (ADS)

    Phadke, Madhura N.; Pinto, Lifford; Alabi, Oluwafemi; Harter, Jonathan; Taylor, Russell M., II; Wu, Xunlei; Petersen, Hannah; Bass, Steffen A.; Healey, Christopher G.

    2012-01-01

    An ensemble is a collection of related datasets. Each dataset, or member, of an ensemble is normally large, multidimensional, and spatio-temporal. Ensembles are used extensively by scientists and mathematicians, for example, by executing a simulation repeatedly with slightly different input parameters and saving the results in an ensemble to see how parameter choices affect the simulation. To draw inferences from an ensemble, scientists need to compare data both within and between ensemble members. We propose two techniques to support ensemble exploration and comparison: a pairwise sequential animation method that visualizes locally neighboring members simultaneously, and a screen door tinting method that visualizes subsets of members using screen space subdivision. We demonstrate the capabilities of both techniques, first using synthetic data, then with simulation data of heavy ion collisions in high-energy physics. Results show that both techniques are capable of supporting meaningful comparisons of ensemble data.
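
    A toy take on the screen-space subdivision behind the screen door tinting method: the image plane is tiled into small blocks and each block is drawn from one ensemble member, so several members can be compared within a single view. The block size, the round-robin assignment, and the synthetic scalar fields below are assumptions for illustration, not the authors' implementation.

      import numpy as np

      def screen_door_composite(members, block=4):
          """Tile the screen with small blocks, assigning each block to one ensemble member
          in round-robin order (a crude stand-in for screen-space subdivision)."""
          members = np.asarray(members)            # (n_members, H, W) scalar fields
          n, H, W = members.shape
          by, bx = np.meshgrid(np.arange(H) // block, np.arange(W) // block, indexing="ij")
          owner = (by + bx) % n                    # which member owns each pixel's block
          out = np.empty((H, W))
          for m in range(n):
              out[owner == m] = members[m][owner == m]
          return out

      # Usage: three synthetic 2D fields standing in for ensemble members.
      xs = np.linspace(0.0, 1.0, 64)
      X, Y = np.meshgrid(xs, xs)
      ensemble = [np.sin(6 * X + k) * np.cos(6 * Y) for k in range(3)]
      image = screen_door_composite(ensemble, block=4)   # view with e.g. matplotlib's imshow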

  16. Direct interval volume visualization.

    PubMed

    Ament, Marco; Weiskopf, Daniel; Carr, Hamish

    2010-01-01

    We extend direct volume rendering with a unified model for generalized isosurfaces, also called interval volumes, allowing a wider spectrum of visual classification. We generalize the concept of scale-invariant opacity—typical for isosurface rendering—to semi-transparent interval volumes. Scale-invariant rendering is independent of physical space dimensions and therefore directly facilitates the analysis of data characteristics. Our model represents sharp isosurfaces as limits of interval volumes and combines them with features of direct volume rendering. Our objective is accurate rendering, guaranteeing that all isosurfaces and interval volumes are visualized in a crack-free way with correct spatial ordering. We achieve simultaneous direct and interval volume rendering by extending preintegration and explicit peak finding with data-driven splitting of ray integration and hybrid computation in physical and data domains. Our algorithm is suitable for efficient parallel processing for interactive applications as demonstrated by our CUDA implementation.
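
    The full method (preintegration, explicit peak finding, hybrid physical/data-domain splitting, CUDA) is beyond a short snippet, but the basic notion of rendering an interval volume can be made concrete with a single-ray, front-to-back compositing sketch in which only samples whose scalar value falls inside [a, b] contribute opacity. The constant per-step opacity and the synthetic field are assumptions; in particular, this does not implement the paper's scale-invariant opacity.

      import numpy as np

      def render_interval_ray(scalars, a, b, color=(1.0, 0.6, 0.2), alpha_per_step=0.08):
          """Front-to-back compositing along one ray: only samples whose scalar lies in
          the interval [a, b] contribute (a crude stand-in for interval-volume rendering)."""
          acc_color = np.zeros(3)
          acc_alpha = 0.0
          for s in scalars:                        # scalar samples ordered front to back
              if a <= s <= b:
                  acc_color += (1.0 - acc_alpha) * alpha_per_step * np.asarray(color)
                  acc_alpha += (1.0 - acc_alpha) * alpha_per_step
              if acc_alpha > 0.99:                 # early ray termination
                  break
          return acc_color, acc_alpha

      # Usage: one ray through a synthetic field that rises and falls through [0.4, 0.6].
      t = np.linspace(0.0, 1.0, 200)
      print(render_interval_ray(np.sin(np.pi * t), 0.4, 0.6))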

  17. Exploring Ensemble Visualization

    PubMed Central

    Phadke, Madhura N.; Pinto, Lifford; Alabi, Femi; Harter, Jonathan; Taylor, Russell M.; Wu, Xunlei; Petersen, Hannah; Bass, Steffen A.; Healey, Christopher G.

    2012-01-01

    An ensemble is a collection of related datasets. Each dataset, or member, of an ensemble is normally large, multidimensional, and spatio-temporal. Ensembles are used extensively by scientists and mathematicians, for example, by executing a simulation repeatedly with slightly different input parameters and saving the results in an ensemble to see how parameter choices affect the simulation. To draw inferences from an ensemble, scientists need to compare data both within and between ensemble members. We propose two techniques to support ensemble exploration and comparison: a pairwise sequential animation method that visualizes locally neighboring members simultaneously, and a screen door tinting method that visualizes subsets of members using screen space subdivision. We demonstrate the capabilities of both techniques, first using synthetic data, then with simulation data of heavy ion collisions in high-energy physics. Results show that both techniques are capable of supporting meaningful comparisons of ensemble data. PMID:22347540

  18. Selective deficit of mental visual imagery with intact primary visual cortex and visual perception.

    PubMed

    Moro, Valentina; Berlucchi, Giovanni; Lerch, Jason; Tomaiuolo, Francesco; Aglioti, Salvatore M

    2008-02-01

    There is a vigorous debate as to whether visual perception and imagery share the same neuronal networks, whether the primary visual cortex is necessarily involved in visual imagery, and whether visual imagery functions are lateralized in the brain. Two patients with brain damage from closed head injury were submitted to tests of mental imagery in the visual, tactile, auditory, gustatory, olfactory and motor domains, as well as to an extensive testing of cognitive functions. A computerized mapping procedure was used to localize the site and to assess the extent of the lesions. One patient showed pure visual mental imagery deficits in the absence of imagery deficits in other sensory domains as well as in the motor domain, while the other patient showed both visual and tactile imagery deficits. Perceptual, language, and memory deficits were conspicuously absent. Computerized analysis of the lesions showed a massive involvement of the left temporal lobe in both patients and a bilateral parietal lesion in one patient. In both patients the calcarine cortex with the primary visual area was bilaterally intact. Our study indicates that: (i) visual imagery deficits can occur independently from deficits of visual perception; (ii) visual imagery deficits can occur when the primary visual cortex is intact and (iii) the left temporal lobe plays an important role in visual mental imagery.

  19. Uncalibrated Dynamic Visual Servoing

    DTIC Science & Technology

    2003-05-01

    Uncalibrated Dynamic Visual Servoing, J. A. Piepmeier, G. V. ...; prepared for IEEE Transactions on Robotics and Automation, May 29, 2003.

  20. Science information systems: Visualization

    NASA Technical Reports Server (NTRS)

    Wall, Ray J.

    1991-01-01

    Future programs in earth science, planetary science, and astrophysics will involve complex instruments that produce data at unprecedented rates and volumes. Current methods for data display, exploration, and discovery are inadequate. Visualization technology offers a means for the user to comprehend, explore, and examine complex data sets. The goal of this program is to increase the effectiveness and efficiency of scientists in extracting scientific information from large volumes of instrument data.

  1. Visual Perceptual Learning

    PubMed Central

    Lu, Zhong-Lin; Hua, Tianmiao; Huang, Chang-Bing; Zhou, Yifeng; Dosher, Barbara Anne

    2010-01-01

    Perceptual learning refers to the phenomenon that practice or training in perceptual tasks often substantially improves perceptual performance. Often exhibiting stimulus or task specificities, perceptual learning differs from learning in the cognitive or motor domains. Research on perceptual learning reveals important plasticity in adult perceptual systems, as well as the limitations in the information processing of the human observer. In this article, we review the behavioral results, mechanisms, physiological basis, computational models, and applications of visual perceptual learning. PMID:20870024

  2. Warfighter Visualizations Compilations

    DTIC Science & Technology

    2013-05-01

    ...acknowledgments to Dr. Steven Rogers at the Air Force Research Laboratory, Adam Rogers at Wright State University, and Eric Geiselman for their many helpful and insightful discussions... Entries include "Broad-Scale Exploratory Analysis and Tracing" and the Challenge 1 Honorable Mention, "Comprehensive Visualization Suite" (Laberge, L.; Kaul, S.; Anderson, N.).

  3. Advancement on Visualization Techniques

    DTIC Science & Technology

    1980-10-01

    ...Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. This AGARDograph was prepared at the request of the... the fields of science and technology relating to aerospace for the following purposes: exchanging of scientific and technical information... Techniques for providing the pilot visualization have grown rapidly. Technology has developed from mechanical gauges through electro-mechanical...

  4. Visual-vestibular interaction

    NASA Technical Reports Server (NTRS)

    Young, Laurence R.; Merfeld, D.

    1994-01-01

    Significant progress was achieved during the period of this grant on a number of different fronts. A list of publications, abstracts, and theses supported by this grant is provided at the end of this document. The completed studies focused on three general areas: eye movements induced by dynamic linear acceleration, eye movements and vection reports induced by visual roll stimulation, and the separation of gravito-inertial force into central estimates of gravity and linear acceleration.

  5. Visual Elements in Flight Simulation

    DTIC Science & Technology

    1975-07-01

    ...control. In consequence, current efforts to create appropriate visual simulations run the gamut from efforts toward almost complete replication of the...

  6. Visual Navigation in Nocturnal Insects.

    PubMed

    Warrant, Eric; Dacke, Marie

    2016-05-01

    Despite their tiny eyes and brains, nocturnal insects have evolved a remarkable capacity to visually navigate at night. Whereas some use moonlight or the stars as celestial compass cues to maintain a straight-line course, others use visual landmarks to navigate to and from their nest. These impressive abilities rely on highly sensitive compound eyes and specialized visual processing strategies in the brain.

  7. Visual Learning in Field Biology.

    ERIC Educational Resources Information Center

    Stanley, Ethel D.

    Visual learning has long been recognized as an integral process in educating biology undergraduates, who must develop specific visual skills and knowledge in order to communicate and work in the extensive visual culture shared by practicing biologists. Student experiences with field observations and the development and application of verbal/visual…

  8. Windows Memory Forensic Data Visualization

    DTIC Science & Technology

    2014-06-12

    WINDOWS MEMORY FORENSIC DATA VISUALIZATION. Thesis by J. Brendan Baum, Civilian, USAF (AFIT). Presented to the Faculty, Department of Electrical and Computer Engineering, Graduate School of...APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED. AFIT-ENG-T-14-J-1.

  9. What makes a visualization memorable?

    PubMed

    Borkin, Michelle A; Vo, Azalea A; Bylinskii, Zoya; Isola, Phillip; Sunkavalli, Shashank; Oliva, Aude; Pfister, Hanspeter

    2013-12-01

    An ongoing debate in the Visualization community concerns the role that visualization types play in data understanding. In human cognition, understanding and memorability are intertwined. As a first step towards being able to ask questions about impact and effectiveness, here we ask: 'What makes a visualization memorable?' We ran the largest scale visualization study to date using 2,070 single-panel visualizations, categorized with visualization type (e.g., bar chart, line graph, etc.), collected from news media sites, government reports, scientific journals, and infographic sources. Each visualization was annotated with additional attributes, including ratings for data-ink ratios and visual densities. Using Amazon's Mechanical Turk, we collected memorability scores for hundreds of these visualizations, and discovered that observers are consistent in which visualizations they find memorable and forgettable. We find intuitive results (e.g., attributes like color and the inclusion of a human recognizable object enhance memorability) and less intuitive results (e.g., common graphs are less memorable than unique visualization types). Altogether our findings suggest that quantifying memorability is a general metric of the utility of information, an essential step towards determining how to design effective visualizations.

  10. A Primer of Visual Literacy.

    ERIC Educational Resources Information Center

    Dondis, Donis A.

    An investigation designed to increase the understanding and utilization of visual expression is undertaken. The book examines the basic visual elements, the strategies and options of the visual techniques, the psychological and physiological implications of creative composition, and the range of media and formats included among the arts and…

  11. Scouting for the Visually Handicapped.

    ERIC Educational Resources Information Center

    McMullen, A. Robert, Ed.

    Intended for parents of visually handicapped boys, the booklet describes advantages and opportunities of boy scouting for the visually handicapped. It is stressed that boys with visual handicaps are more like other boys than unlike them. Noted are practical ways to compensate for the boy's lack of sight such as Braille versions of the Scout…

  12. Visualization of Term Discrimination Analysis.

    ERIC Educational Resources Information Center

    Zhang, Jin; Wolfram, Dietmar

    2001-01-01

    Discusses information visualization techniques and introduces a visual term discrimination value analysis method using a document density space within a distance-angle-based visual information retrieval environment. Explains that applications of these methods facilitate more effective assignment of term weights to index terms within documents and…

  13. Personalized visual aesthetics

    NASA Astrophysics Data System (ADS)

    Vessel, Edward A.; Stahl, Jonathan; Maurer, Natalia; Denker, Alexander; Starr, G. G.

    2014-02-01

    How is visual information linked to aesthetic experience, and what factors determine whether an individual finds a particular visual experience pleasing? We have previously shown that individuals' aesthetic responses are not determined by objective image features but are instead a function of internal, subjective factors that are shaped by a viewer's personal experience. Yet for many classes of stimuli, culturally shared semantic associations give rise to similar aesthetic taste across people. In this paper, we investigated factors that govern whether a set of observers will agree on which images are preferred, or will instead exhibit more "personalized" aesthetic preferences. In a series of experiments, observers were asked to make aesthetic judgments for different categories of visual stimuli that are commonly evaluated in an aesthetic manner (faces, natural landscapes, architecture or artwork). By measuring agreement across observers, this method was able to reveal instances of highly individualistic preferences. We found that observers showed high agreement on their preferences for images of faces and landscapes, but much lower agreement for images of artwork and architecture. In addition, we found higher agreement for heterosexual males making judgments of beautiful female faces than of beautiful male faces. These results suggest that preferences for stimulus categories that carry evolutionary significance (landscapes and faces) come to rely on similar information across individuals, whereas preferences for artifacts of human culture such as architecture and artwork, which have fewer basic-level category distinctions and reduced behavioral relevance, rely on a more personalized set of attributes.

  14. Visualizing motion in video

    NASA Astrophysics Data System (ADS)

    Brown, Lisa M.; Crayne, Susan

    2000-05-01

    In this paper, we present a visualization system and method for measuring, inspecting and analyzing motion in video. Starting from a simple motion video, the system creates a still image representation which we call a digital strobe photograph. Similar to visualization techniques used in conventional film photography to capture high-speed motion using strobe lamps or very fast shutters, and to capture time-lapse motion where the shutter is left open, this methodology creates a single image showing the motion of one or a small number of objects over time. Based on digital background subtraction, we assume that the background is stationary or at most slowly changing and that the camera position is fixed. The method is capable of displaying the motion based on a parameter indicating the time step between successive movements. It can also overcome problems of visualizing movement that is obscured by previous movements. The method is used in an educational software tool for children to measure and analyze various motions. Examples are given using simple physical objects such as balls and pendulums, astronomical events such as the path of the stars around the north pole at night, or the different types of locomotion used by snakes.
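
    The abstract does not give the exact algorithm, but its description (fixed camera, background subtraction, compositing successive movements into one still) suggests something like the minimal Python sketch below; the function name, median background estimate, and threshold are illustrative assumptions, not the authors' implementation.

      import numpy as np

      def digital_strobe(frames, step=1, threshold=30):
          """Compose a strobe-style still from grayscale frames (2-D uint8 arrays),
          assuming a fixed camera and an (almost) stationary background."""
          frames = [np.asarray(f, dtype=np.float64) for f in frames[::step]]
          background = np.median(np.stack(frames), axis=0)      # static background estimate
          strobe = background.copy()
          for frame in frames:
              moving = np.abs(frame - background) > threshold   # foreground (moving object) mask
              strobe[moving] = frame[moving]                    # paste the object at this time step
          return strobe.astype(np.uint8)

      # Toy example: a bright square moving left to right across a dark scene.
      frames = [np.zeros((64, 80), dtype=np.uint8) for _ in range(4)]
      for f, x in zip(frames, (5, 20, 35, 50)):
          f[28:36, x:x + 8] = 255
      strobe = digital_strobe(frames)
      print(strobe.shape, int((strobe == 255).sum()))           # four copies of the square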

  15. Visually induced reorientation illusions

    NASA Technical Reports Server (NTRS)

    Howard, I. P.; Hu, G.; Oman, C. M. (Principal Investigator)

    2001-01-01

    It is known that rotation of a furnished room around the roll axis of erect subjects produces an illusion of 360 degrees self-rotation in many subjects. Exposure of erect subjects to stationary tilted visual frames or rooms produces only up to 20 degrees of illusory tilt. But, in studies using static tilted rooms, subjects remained erect and the body axis was not aligned with the room. We have revealed a new class of disorientation illusions that occur in many subjects when placed in a 90 degrees or 180 degrees tilted room containing polarised objects (familiar objects with tops and bottoms). For example, supine subjects looking up at a wall of the room feel upright in an upright room and their arms feel weightless when held out from the body. We call this the levitation illusion. We measured the incidence of 90 degrees or 180 degrees reorientation illusions in erect, supine, recumbent, and inverted subjects in a room tilted 90 degrees or 180 degrees. We report that reorientation illusions depend on the displacement of the visual scene rather than of the body. However, illusions are most likely to occur when the visual and body axes are congruent. When the axes are congruent, illusions are least likely to occur when subjects are prone rather than supine, recumbent, or inverted.

  16. Mars Museum Visualization Alliance

    NASA Astrophysics Data System (ADS)

    Sohus, A. M.; Viotti, M. A.; de Jong, E. M.

    2004-11-01

    The Mars Museum Visualization Alliance is a collaborative effort funded by the Mars Public Engagement Office and supported by JPL's Informal Education staff and the Solar System Visualization Project to share the adventure of exploration and make Mars a real place. The effort started in 2002 with a small working group of museum professionals to learn how best to serve museum audiences through informal science educators. By the time the Mars Exploration Rovers landed on Mars in January 2004, over 100 organizations were partners in the Alliance, which has become a focused community of Mars educators. The Alliance provides guaranteed access to images, information, news, and resources for use by the informal science educators with their students, educators, and public audiences. Thousands of people have shared the adventure of exploring Mars and now see it as a real place through the efforts of the Mars Museum Visualization Alliance partners. The Alliance has been lauded for "providing just the right inside track for museums to do what they do best," be that webcasts, live presentations with the latest images and information, high-definition productions, planetarium shows, or hands-on educational activities. The Alliance is extending its mission component with Cassini, Genesis, Deep Impact, and Stardust. The Mars Exploration and Cassini Programs, as well as the Genesis, Deep Impact, and Stardust Projects, are managed for NASA by the Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California.

  17. Visualizing GO Annotations.

    PubMed

    Supek, Fran; Škunca, Nives

    2017-01-01

    Contemporary techniques in biology produce readouts for large numbers of genes simultaneously, the typical example being differential gene expression measurements. Moreover, those genes are often richly annotated using GO terms that describe gene function and that can be used to summarize the results of the genome-scale experiments. However, making sense of such GO enrichment analyses may be challenging. For instance, overrepresented GO functions in a set of differentially expressed genes are typically output as a flat list, a format not adequate to capture the complexities of the hierarchical structure of the GO annotation labels. In this chapter, we survey various methods to visualize large, difficult-to-interpret lists of GO terms. We catalog their availability (Web-based or standalone), the main principles they employ in summarizing large lists of GO terms, and the visualization styles they support. These brief commentaries on each software are intended as a helpful inventory, rather than comprehensive descriptions of the underlying algorithms. Instead, we show examples of their use and suggest that the choice of an appropriate visualization tool may be crucial to the utility of GO in biological discovery.

  18. DSN Data Visualization Suite

    NASA Technical Reports Server (NTRS)

    Bui, Bach X.; Malhotra, Mark R.; Kim, Richard M.

    2009-01-01

    The DSN Data Visualization Suite is a set of computer programs and reusable Application Programming Interfaces (APIs) that assist in the visualization and analysis of Deep Space Network (DSN) spacecraft-tracking data, which can include predicted and actual values of downlink frequencies, uplink frequencies, and antenna-pointing angles in various formats that can include tables of values and polynomial coefficients. The data can also include lists of antenna-pointing events, lists of antenna-limit events, and schedules of tracking activities. To date, analysis and correlation of these intricately related data before and after tracking have been difficult and time-consuming. The DSN Data Visualization Suite enables operators to quickly diagnose tracking-data problems before, during, and after tracking. The Suite provides interpolation on demand and plotting of DSN tracking data, correlation of all data on a given temporal point, and display of data with color coding configurable by users. The suite thereby enables rapid analysis of the data prior to transmission of the data to DSN control centers. At the control centers, the same suite enables operators to validate the data before committing the data to DSN subsystems. This software is also Web-enabled to afford its capabilities to international space agencies.

  19. Visual learning in multisensory environments.

    PubMed

    Jacobs, Robert A; Shams, Ladan

    2010-04-01

    We study the claim that multisensory environments are useful for visual learning because nonvisual percepts can be processed to produce error signals that people can use to adapt their visual systems. This hypothesis is motivated by a Bayesian network framework. The framework is useful because it ties together three observations that have appeared in the literature: (a) signals from nonvisual modalities can "teach" the visual system; (b) signals from nonvisual modalities can facilitate learning in the visual system; and (c) visual signals can become associated with (or be predicted by) signals from nonvisual modalities. Experimental data consistent with each of these observations are reviewed.

  20. Neuron analysis of visual perception

    NASA Technical Reports Server (NTRS)

    Chow, K. L.

    1980-01-01

    The receptive fields of single cells in the visual system of the cat and squirrel monkey were studied, investigating the vestibular input affecting the cells and the cells' responses during the visual discrimination learning process. The receptive field characteristics of the rabbit visual system, its normal development, its abnormal development following visual deprivation, and the structural and functional re-organization of the visual system following neo-natal and prenatal surgery were also studied. The results of each individual part of each investigation are detailed.

  1. Visual feedback in stuttering therapy

    NASA Astrophysics Data System (ADS)

    Smolka, Elzbieta

    1997-02-01

    The aim of this paper is to present the results concerning the influence of visual echo and reverberation on the speech process of stutterers. Visual stimuli along with the influence of acoustic and visual-acoustic stimuli have been compared. Following this the methods of implementing visual feedback with the aid of electroluminescent diodes directed by speech signals have been presented. The concept of a computerized visual echo based on the acoustic recognition of Polish syllabic vowels has been also presented. All the research and trials carried out at our center, aside from cognitive aims, generally aim at the development of new speech correctors to be utilized in stuttering therapy.

  2. Do Visual Illusions Probe the Visual Brain?: Illusions in Action without a Dorsal Visual Stream

    ERIC Educational Resources Information Center

    Coello, Yann; Danckert, James; Blangero, Annabelle; Rossetti, Yves

    2007-01-01

    Visual illusions have been shown to affect perceptual judgements more so than motor behaviour, which was interpreted as evidence for a functional division of labour within the visual system. The dominant perception-action theory argues that perception involves a holistic processing of visual objects or scenes, performed within the ventral,…

  3. What May Visualization Processes Optimize?

    PubMed

    Chen, Min; Golan, Amos

    2016-12-01

    In this paper, we present an abstract model of visualization and inference processes, and describe an information-theoretic measure for optimizing such processes. In order to obtain such an abstraction, we first examined six classes of workflows in data analysis and visualization, and identified four levels of typical visualization components, namely disseminative, observational, analytical and model-developmental visualization. We noticed a common phenomenon at different levels of visualization, that is, the transformation of data spaces (referred to as alphabets) usually corresponds to the reduction of maximal entropy along a workflow. Based on this observation, we establish an information-theoretic measure of cost-benefit ratio that may be used as a cost function for optimizing a data visualization process. To demonstrate the validity of this measure, we examined a number of successful visualization processes in the literature, and showed that the information-theoretic measure can mathematically explain the advantages of such processes over possible alternatives.
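
    As a purely illustrative aside (not taken from the paper), the observation that a workflow step reduces the maximal entropy of the data "alphabet" can be made concrete in a few lines of Python; the binning step and sample data below are assumptions chosen only for the sake of the example.

      import numpy as np
      from collections import Counter

      def shannon_entropy(values):
          """Shannon entropy (in bits) of the empirical distribution of `values`."""
          counts = np.array(list(Counter(values).values()), dtype=float)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      # A toy workflow step: raw 8-bit measurements are binned into 8 visual
      # classes (e.g. a small colour map), shrinking the alphabet's entropy.
      rng = np.random.default_rng(0)
      raw = rng.integers(0, 256, size=10_000)
      binned = raw // 32

      h_raw, h_binned = shannon_entropy(raw), shannon_entropy(binned)
      print(f"entropy before: {h_raw:.2f} bits, after: {h_binned:.2f} bits")
      print(f"alphabet compression: {h_raw - h_binned:.2f} bits per datum")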

  4. Visual adaptation dominates bimodal visual-motor action adaptation

    PubMed Central

    de la Rosa, Stephan; Ferstl, Ylva; Bülthoff, Heinrich H.

    2016-01-01

    A long standing debate revolves around the question whether visual action recognition primarily relies on visual or motor action information. Previous studies mainly examined the contribution of either visual or motor information to action recognition. Yet, the interaction of visual and motor action information is particularly important for understanding action recognition in social interactions, where humans often observe and execute actions at the same time. Here, we behaviourally examined the interaction of visual and motor action recognition processes when participants simultaneously observe and execute actions. We took advantage of behavioural action adaptation effects to investigate behavioural correlates of neural action recognition mechanisms. In line with previous results, we find that prolonged visual exposure (visual adaptation) and prolonged execution of the same action with closed eyes (non-visual motor adaptation) influence action recognition. However, when participants simultaneously adapted visually and motorically – akin to simultaneous execution and observation of actions in social interactions - adaptation effects were only modulated by visual but not motor adaptation. Action recognition, therefore, relies primarily on vision-based action recognition mechanisms in situations that require simultaneous action observation and execution, such as social interactions. The results suggest caution when associating social behaviour in social interactions with motor based information. PMID:27029781

  5. Audio-visual integration through the parallel visual pathways.

    PubMed

    Kaposvári, Péter; Csete, Gergő; Bognár, Anna; Csibri, Péter; Tóth, Eszter; Szabó, Nikoletta; Vécsei, László; Sáry, Gyula; Kincses, Zsigmond Tamás

    2015-10-22

    Audio-visual integration has been shown to be present in a wide range of different conditions, some of which are processed through the dorsal, and others through the ventral visual pathway. Whereas neuroimaging studies have revealed integration-related activity in the brain, there has been no imaging study of the possible role of segregated visual streams in audio-visual integration. We set out to determine how the different visual pathways participate in this communication. We investigated how audio-visual integration can be supported through the dorsal and ventral visual pathways during the double flash illusion. Low-contrast and chromatic isoluminant stimuli were used to preferentially drive the dorsal and ventral pathways, respectively. In order to identify the anatomical substrates of the audio-visual interaction in the two conditions, the psychophysical results were correlated with the white matter integrity as measured by diffusion tensor imaging. The psychophysiological data revealed a robust double flash illusion in both conditions. A correlation between the psychophysical results and local fractional anisotropy was found in the occipito-parietal white matter in the low-contrast condition, while a similar correlation was found in the infero-temporal white matter in the chromatic isoluminant condition. Our results indicate that both of the parallel visual pathways may play a role in the audio-visual interaction.

  6. Visual Computing Environment

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles; Putt, Charles W.

    1997-01-01

    The Visual Computing Environment (VCE) is a NASA Lewis Research Center project to develop a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis. The objectives of VCE are to (1) develop a visual computing environment for controlling the execution of individual simulation codes that are running in parallel and are distributed on heterogeneous host machines in a networked environment, (2) develop numerical coupling algorithms for interchanging boundary conditions between codes with arbitrary grid matching and different levels of dimensionality, (3) provide a graphical interface for simulation setup and control, and (4) provide tools for online visualization and plotting. VCE was designed to provide a distributed, object-oriented environment. Mechanisms are provided for creating and manipulating objects, such as grids, boundary conditions, and solution data. This environment includes parallel virtual machine (PVM) for distributed processing. Users can interactively select and couple any set of codes that have been modified to run in a parallel distributed fashion on

  7. Visually Exploring Transportation Schedules.

    PubMed

    Palomo, Cesar; Guo, Zhan; Silva, Cláudio T; Freire, Juliana

    2016-01-01

    Public transportation schedules are designed by agencies to optimize service quality under multiple constraints. However, real service usually deviates from the plan. Therefore, transportation analysts need to identify, compare and explain both eventual and systemic performance issues that must be addressed so that better timetables can be created. The purely statistical tools commonly used by analysts pose many difficulties due to the large number of attributes at trip- and station-level for planned and real service. Also challenging is the need for models at multiple scales to search for patterns at different times and stations, since analysts do not know exactly where or when relevant patterns might emerge and need to compute statistical summaries for multiple attributes at different granularities. To aid in this analysis, we worked in close collaboration with a transportation expert to design TR-EX, a visual exploration tool developed to identify, inspect and compare spatio-temporal patterns for planned and real transportation service. TR-EX combines two new visual encodings inspired by Marey's Train Schedule: Trips Explorer for trip-level analysis of frequency, deviation and speed; and Stops Explorer for station-level study of delay, wait time, reliability and performance deficiencies such as bunching. To tackle overplotting and to provide a robust representation for large numbers of trips and stops at multiple scales, the system supports variable kernel bandwidths to achieve the level of detail required by users for different tasks. We justify our design decisions based on specific analysis needs of transportation analysts. We provide anecdotal evidence of the efficacy of TR-EX through a series of case studies that explore NYC subway service, which illustrate how TR-EX can be used to confirm hypotheses and derive new insights through visual exploration.

  8. The effect of visualization on visual search performance: Does visualization trump vision?

    PubMed

    F Clarke, Alasdair D; Barr, Courtney; Hunt, Amelia R

    2016-11-01

    Striking results recently demonstrated that visualizing search for a target can facilitate visual search for that target on subsequent trials (Reinhart et al. 2015). This visualization benefit was even greater than the benefit of actually repeating search for the target. We registered a close replication and generalization of the original experiment. Our results show clear benefits of repeatedly searching for the same target, but we found no benefit associated with visualization. The difficulty of the search task and the ability to monitor compliance with instructions to visualize are both possible explanations for the failure to replicate, and both should be carefully considered in future research exploring this interesting phenomenon.

  9. GVS - GENERAL VISUALIZATION SYSTEM

    NASA Technical Reports Server (NTRS)

    Keith, S. R.

    1994-01-01

    The primary purpose of GVS (General Visualization System) is to support scientific visualization of data output by the panel method PMARC_12 (inventory number ARC-13362) on the Silicon Graphics Iris computer. GVS allows the user to view PMARC geometries and wakes as wire frames or as light shaded objects. Additionally, geometries can be color shaded according to phenomena such as pressure coefficient or velocity. Screen objects can be interactively translated and/or rotated to permit easy viewing. Keyframe animation is also available for studying unsteady cases. The purpose of scientific visualization is to allow the investigator to gain insight into the phenomena they are examining, therefore GVS emphasizes analysis, not artistic quality. GVS uses existing IRIX 4.0 image processing tools to allow for conversion of SGI RGB files to other formats. GVS is a self-contained program which contains all the necessary interfaces to control interaction with PMARC data. This includes 1) the GVS Tool Box, which supports color histogram analysis, lighting control, rendering control, animation, and positioning, 2) GVS on-line help, which allows the user to access control elements and get information about each control simultaneously, and 3) a limited set of basic GVS data conversion filters, which allows for the display of data requiring simpler data formats. Specialized controls for handling PMARC data include animation and wakes, and visualization of off-body scan volumes. GVS is written in C-language for use on SGI Iris series computers running IRIX. It requires 28Mb of RAM for execution. Two separate hardcopy documents are available for GVS. The basic document price for ARC-13361 includes only the GVS User's Manual, which outlines major features of the program and provides a tutorial on using GVS with PMARC_12 data. Programmers interested in modifying GVS for use with data in formats other than PMARC_12 format may purchase a copy of the draft GVS 3.1 Software Maintenance

  10. Measurement of visual motion

    SciTech Connect

    Hildreth, E.C.

    1984-01-01

    This book examines the measurement of visual motion and the use of relative movement to locate the boundaries of physical objects in the environment. It investigates the nature of the computations that are necessary to perform this analysis by any vision system, biological or artificial. Contents: Introduction. Background. Computation of the Velocity Field. An Algorithm to Compute the Velocity Field. The Computation of Motion Discontinuities. Perceptual Studies of Motion Measurement. The Psychophysics of Discontinuity Detection. Neurophysiological Studies of Motion. Summary and Conclusions. References. Author and Subject Indexes.

  11. Methods of visualizing graphs

    DOEpatents

    Wong, Pak C.; Mackey, Patrick S.; Perrine, Kenneth A.; Foote, Harlan P.; Thomas, James J.

    2008-12-23

    Methods for visualizing a graph by automatically drawing elements of the graph as labels are disclosed. In one embodiment, the method comprises receiving node information and edge information from an input device and/or communication interface, constructing a graph layout based at least in part on that information, wherein the edges are automatically drawn as labels, and displaying the graph on a display device according to the graph layout. In some embodiments, the nodes are automatically drawn as labels instead of, or in addition to, the label-edges.

  12. Participatory visualization with Wordle.

    PubMed

    Viégas, Fernanda B; Wattenberg, Martin; Feinberg, Jonathan

    2009-01-01

    We discuss the design and usage of "Wordle," a web-based tool for visualizing text. Wordle creates tag-cloud-like displays that give careful attention to typography, color, and composition. We describe the algorithms used to balance various aesthetic criteria and create the distinctive Wordle layouts. We then present the results of a study of Wordle usage, based both on spontaneous behaviour observed in the wild, and on a large-scale survey of Wordle users. The results suggest that Wordles have become a kind of medium of expression, and that a "participatory culture" has arisen around them.
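
    The paper describes Wordle's layout algorithm only at a high level; the short Python sketch below shows just the word-frequency-to-font-size weighting underlying any tag-cloud-like display, and deliberately omits Wordle's spiral placement and collision testing. The point-size limits and sample text are arbitrary choices for illustration.

      from collections import Counter

      def word_sizes(text, min_pt=10, max_pt=72):
          """Map word frequencies to font sizes (points) for a tag-cloud display."""
          words = [w.strip('.,;:!?"()').lower() for w in text.split()]
          counts = Counter(w for w in words if w)
          lo, hi = min(counts.values()), max(counts.values())
          span = max(hi - lo, 1)
          return {w: min_pt + (c - lo) * (max_pt - min_pt) / span
                  for w, c in counts.items()}

      sizes = word_sizes("visualizing text with text clouds makes text patterns visible")
      for word, pt in sorted(sizes.items(), key=lambda kv: -kv[1]):
          print(f"{word:>12s}  {pt:5.1f} pt")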

  13. Visualizing molecular unidirectional rotation

    NASA Astrophysics Data System (ADS)

    Lin, Kang; Song, Qiying; Gong, Xiaochun; Ji, Qinying; Pan, Haifeng; Ding, Jingxin; Zeng, Heping; Wu, Jian

    2015-07-01

    We directly visualize the spatiotemporal evolution of a unidirectionally rotating molecular rotational wave packet. Excited by two time-delayed polarization-skewed ultrashort laser pulses, the cigar- or disk-shaped rotational wave packet is impulsively kicked to unidirectionally rotate as a quantum rotor which afterwards disperses and exhibits field-free revivals. The rich dynamics can be coherently controlled by varying the timing or polarization of the excitation laser pulses. The numerical simulations reproduce the experimental observations very well and intuitively reveal the complete evolution of the unidirectionally spinning molecular rotational wave packet.

  14. NASA Enterprise Visual Analysis

    NASA Technical Reports Server (NTRS)

    Lopez-Tellado, Maria; DiSanto, Brenda; Humeniuk, Robert; Bard, Richard, Jr.; Little, Mia; Edwards, Robert; Ma, Tien-Chi; Hollifield, Kenneith; White, Chuck

    2007-01-01

    NASA Enterprise Visual Analysis (NEVA) is a computer program undergoing development as a successor to Launch Services Analysis Tool (LSAT), formerly known as Payload Carrier Analysis Tool (PCAT). NEVA facilitates analyses of proposed configurations of payloads and packing fixtures (e.g. pallets) in a space shuttle payload bay for transport to the International Space Station. NEVA reduces the need to use physical models, mockups, and full-scale ground support equipment in performing such analyses. Using NEVA, one can take account of such diverse considerations as those of weight distribution, geometry, collision avoidance, power requirements, thermal loads, and mechanical loads.

  15. Creating visual explanations improves learning.

    PubMed

    Bobek, Eliza; Tversky, Barbara

    2016-01-01

    Many topics in science are notoriously difficult for students to learn. Mechanisms and processes outside student experience present particular challenges. While instruction typically involves visualizations, students usually explain in words. Because visual explanations can show parts and processes of complex systems directly, creating them should have benefits beyond creating verbal explanations. We compared learning from creating visual or verbal explanations for two STEM domains, a mechanical system (bicycle pump) and a chemical system (bonding). Both kinds of explanations were analyzed for content, and learning was assessed with a post-test. For the mechanical system, creating a visual explanation increased understanding particularly for participants of low spatial ability. For the chemical system, creating both visual and verbal explanations improved learning without new teaching. Creating a visual explanation was superior and benefitted participants of both high and low spatial ability. Visual explanations often included crucial yet invisible features. The greater effectiveness of visual explanations appears attributable to the checks they provide for completeness and coherence as well as to their roles as platforms for inference. The benefits should generalize to other domains like the social sciences, history, and archeology where important information can be visualized. Together, the findings provide support for the use of learner-generated visual explanations as a powerful learning tool.

  16. Is Visual Imagery Really Visual? Overlooked Evidence from Neuropsychology.

    DTIC Science & Technology

    1987-08-07

    partially similar to visual representations in the way they code spatial relations but are not visual representations. This article reviews previously overlooked neuropsychological...

  17. Reconsidering Visual Search

    PubMed Central

    2015-01-01

    The visual search paradigm has had an enormous impact in many fields. A theme running through this literature has been the distinction between preattentive and attentive processing, which I refer to as the two-stage assumption. Under this assumption, slopes of set-size and response time are used to determine whether attention is needed for a given task or not. Even though a lot of findings question this two-stage assumption, it still has enormous influence, determining decisions on whether papers are published or research funded. The results described here show that the two-stage assumption leads to very different conclusions about the operation of attention for identical search tasks based only on changes in response (presence/absence versus Go/No-go responses). Slopes are therefore an ambiguous measure of attentional involvement. Overall, the results suggest that the two-stage model cannot explain all findings on visual search, and they highlight how slopes of response time and set-size should only be used with caution. PMID:27551357

  18. Grid Visualization Tool

    NASA Technical Reports Server (NTRS)

    Chouinard, Caroline; Fisher, Forest; Estlin, Tara; Gaines, Daniel; Schaffer, Steven

    2005-01-01

    The Grid Visualization Tool (GVT) is a computer program for displaying the path of a mobile robotic explorer (rover) on a terrain map. The GVT reads a map-data file in either portable graymap (PGM) or portable pixmap (PPM) format, representing a gray-scale or color map image, respectively. The GVT also accepts input from path-planning and activity-planning software. From these inputs, the GVT generates a map overlaid with one or more rover path(s), waypoints, locations of targets to be explored, and/or target-status information (indicating success or failure in exploring each target). The display can also indicate different types of paths or path segments, such as the path actually traveled versus a planned path or the path traveled to the present position versus planned future movement along a path. The program provides for updating of the display in real time to facilitate visualization of progress. The size of the display and the map scale can be changed as desired by the user. The GVT was written in the C++ language using the Open Graphics Library (OpenGL) software. It has been compiled for both Sun Solaris and Linux operating systems.
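
    GVT itself is written in C++ with OpenGL, but the core idea of overlaying a rover path on a gray-scale terrain map is easy to sketch; the Python below is an illustrative stand-in, not GVT code, and the terrain array, waypoints, and marker value are made up for the example.

      import numpy as np

      def overlay_path(gray_map, waypoints, value=255):
          """Mark a rover path (a list of (row, col) waypoints) on a grayscale map
          by drawing interpolated points between successive waypoints."""
          img = np.array(gray_map, dtype=np.uint8, copy=True)
          for (r0, c0), (r1, c1) in zip(waypoints, waypoints[1:]):
              n = max(abs(r1 - r0), abs(c1 - c0), 1)
              rows = np.linspace(r0, r1, n + 1).round().astype(int)
              cols = np.linspace(c0, c1, n + 1).round().astype(int)
              img[rows, cols] = value                    # path pixels
          return img

      terrain = np.full((40, 60), 90, dtype=np.uint8)    # flat gray terrain map
      planned = [(5, 5), (10, 30), (35, 50)]             # planned waypoints
      overlaid = overlay_path(terrain, planned)
      print(int((overlaid == 255).sum()), "path pixels drawn")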

  19. The visually impaired patient.

    PubMed

    Rosenberg, Eric A; Sperazza, Laura C

    2008-05-15

    Blindness or low vision affects more than 3 million Americans 40 years and older, and this number is projected to reach 5.5 million by 2020. In addition to treating a patient's vision loss and comorbid medical issues, physicians must be aware of the physical limitations and social issues associated with vision loss to optimize health and independent living for the visually impaired patient. In the United States, the four most prevalent etiologies of vision loss in persons 40 years and older are age-related macular degeneration, cataracts, glaucoma, and diabetic retinopathy. Exudative macular degeneration is treated with laser therapy, and progression of nonexudative macular degeneration in its advanced stages may be slowed with high-dose antioxidant and zinc regimens. The value of screening for glaucoma is uncertain; management of this condition relies on topical ocular medications. Cataract symptoms include decreased visual acuity, decreased color perception, decreased contrast sensitivity, and glare disability. Lifestyle and environmental interventions can improve function in patients with cataracts, but surgery is commonly performed if the condition worsens. Diabetic retinopathy responds to tight glucose control, and severe cases marked by macular edema are treated with laser photocoagulation. Vision-enhancing devices can help magnify objects, and nonoptical interventions include special filters and enhanced lighting.

  20. Interactive Terascale Particle Visualization

    NASA Technical Reports Server (NTRS)

    Ellsworth, David; Green, Bryan; Moran, Patrick

    2004-01-01

    This paper describes the methods used to produce an interactive visualization of a 2 TB computational fluid dynamics (CFD) data set using particle tracing (streaklines). We use the method introduced by Bruckschen et al. [2001] that pre-computes a large number of particles, stores them on disk using a space-filling curve ordering that minimizes seeks, and then retrieves and displays the particles according to the user's command. We describe how the particle computation can be performed using a PC cluster, how the algorithm can be adapted to work with a multi-block curvilinear mesh, and how the out-of-core visualization can be scaled to 296 billion particles while still achieving interactive performance on PC hardware. Compared to the earlier work, our data set size and total number of particles are an order of magnitude larger. We also describe a new compression technique that allows the lossless compression of the particles by 41% and speeds the particle retrieval by about 30%.
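
    The abstract does not name the particular space-filling curve, so the Python sketch below uses a Morton (Z-order) key, one common choice, to show how interleaving cell coordinates yields a disk ordering in which spatially nearby particles tend to be stored close together and can be fetched with few seeks.

      def morton3(x, y, z, bits=10):
          """Interleave the low `bits` of three integer cell coordinates into a
          single Morton (Z-order) key."""
          key = 0
          for i in range(bits):
              key |= ((x >> i) & 1) << (3 * i)
              key |= ((y >> i) & 1) << (3 * i + 1)
              key |= ((z >> i) & 1) << (3 * i + 2)
          return key

      # Order a handful of particle cells by their space-filling-curve key.
      cells = [(1, 2, 3), (1, 2, 4), (900, 12, 7), (2, 2, 3)]
      for c in sorted(cells, key=lambda c: morton3(*c)):
          print(c, morton3(*c))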

  1. Visualizing cellulase activity.

    PubMed

    Bubner, Patricia; Plank, Harald; Nidetzky, Bernd

    2013-06-01

    Commercial exploitation of lignocellulose for biotechnological production of fuels and commodity chemicals requires efficient (usually enzymatic) saccharification of the highly recalcitrant insoluble substrate. A key characteristic of cellulose conversion is that the actual hydrolysis of the polysaccharide chains is intrinsically entangled with physical disruption of substrate morphology and structure. This "substrate deconstruction" by cellulase activity is a slow, yet markedly dynamic process that occurs at different length scales from and above the nanometer range. Little is currently known about the role of progressive substrate deconstruction on hydrolysis efficiency. Application of advanced visualization techniques to the characterization of enzymatic degradation of different celluloses has provided important new insights, at the requisite nano-scale resolution and down to the level of single enzyme molecules, into cellulase activity on the cellulose surface. Using true in situ imaging, dynamic features of enzyme action and substrate deconstruction were portrayed at different morphological levels of the cellulose, thus providing new suggestions and interpretations of rate-determining factors. Here, we review the milestones achieved through visualization, the methods which significantly promoted the field, compare suitable (model) substrates, and identify limiting factors, challenges and future tasks.

  2. Visualizing dipole radiation

    NASA Astrophysics Data System (ADS)

    Girwidz, Raimund V.

    2016-11-01

    The Hertzian dipole is fundamental to the understanding of dipole radiation. It provides basic insights into the genesis of electromagnetic waves and lays the groundwork for an understanding of half-wave antennae and other types. Equations for the electric and magnetic fields of such a dipole can be derived mathematically. However, these are very abstract descriptions, and on their own they offer only limited help in interpreting the fields and understanding travelling electromagnetic waves. Visualizations can be a valuable supplement that vividly present properties of electromagnetic fields and their propagation. The computer simulation presented below provides additional instructive illustrations for university lectures on electrodynamics, broadening the experience well beyond what is possible with abstract equations. This paper refers to a multimedia program for PCs, tablets and smartphones, and introduces and discusses several animated illustrations. Special features of multiple representations and combined illustrations will be used to provide insight into spatial and temporal characteristics of field distributions—which also draw attention to the flow of energy. These visualizations offer additional information, including the relationships between different representations that promote deeper understanding. Finally, some aspects are also illustrated that often remain unclear in lectures.
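
    For instance, the radiation (far-zone) part of the Hertzian dipole field can be sampled on a grid and fed to any plotting routine. The following Python sketch is a generic textbook calculation, not the multimedia program described in the paper, and the dipole moment and frequency values are arbitrary.

      import numpy as np

      # Far-zone fields of a Hertzian dipole p(t) = p0*cos(w*t) along z.
      # Only the 1/r radiation terms are kept; the 1/r^2 and 1/r^3 near-field
      # terms of the full solution are omitted here.
      eps0 = 8.8541878128e-12
      mu0 = 4e-7 * np.pi
      c = 1.0 / np.sqrt(mu0 * eps0)

      def dipole_farfield(r, theta, t, p0=1e-9, freq=1e9):
          w = 2 * np.pi * freq
          phase = w * (t - r / c)                       # retarded time
          E_theta = -(mu0 * p0 * w**2 / (4 * np.pi)) * np.sin(theta) / r * np.cos(phase)
          B_phi = E_theta / c
          return E_theta, B_phi

      # Sample on a small (r, theta) grid at t = 0, e.g. for a colour map of
      # the sin(theta) radiation pattern.
      r = np.linspace(1.0, 5.0, 5)[:, None]
      theta = np.linspace(0.01, np.pi - 0.01, 7)[None, :]
      E, B = dipole_farfield(r, theta, t=0.0)
      print(E.shape, f"max |E| = {np.abs(E).max():.3e} V/m")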

  3. Tactical visualization module

    NASA Astrophysics Data System (ADS)

    Kachejian, Kerry C.; Vujcic, Doug

    1999-07-01

    The Tactical Visualization Module (TVM) research effort will develop and demonstrate a portable, tactical information system to enhance the situational awareness of individual warfighters and small military units by providing real-time access to manned and unmanned aircraft, tactically mobile robots, and unattended sensors. TVM consists of a family of portable and hand-held devices being advanced into a next-generation, embedded capability. It enables warfighters to visualize the tactical situation by providing real-time video, imagery, maps, floor plans, and 'fly-through' video on demand. When combined with unattended ground sensors, such as Combat-Q, TVM permits warfighters to validate and verify tactical targets. The use of TVM results in faster target engagement times, increased survivability, and reduction of the potential for fratricide. TVM technology can support both mounted and dismounted tactical forces involved in land, sea, and air warfighting operations. As a PCMCIA card, TVM can be embedded in portable, hand-held, and wearable PCs. Thus, it leverages emerging tactical displays including flat-panel, head-mounted displays. The end result of the program will be the demonstration of the system with U.S. Army and USMC personnel in an operational environment. Raytheon Systems Company, the U.S. Army Soldier Systems Command -- Natick RDE Center (SSCOM-NRDEC) and the Defense Advanced Research Projects Agency (DARPA) are partners in developing and demonstrating the TVM technology.

  4. LONI visualization environment.

    PubMed

    Dinov, Ivo D; Valentino, Daniel; Shin, Bae Cheol; Konstantinidis, Fotios; Hu, Guogang; MacKenzie-Graham, Allan; Lee, Erh-Fang; Shattuck, David; Ma, Jeff; Schwartz, Craig; Toga, Arthur W

    2006-06-01

    Over the past decade, the use of informatics to solve complex neuroscientific problems has increased dramatically. Many of these research endeavors involve examining large amounts of imaging, behavioral, genetic, neurobiological, and neuropsychiatric data. Superimposing, processing, visualizing, or interpreting such a complex cohort of datasets frequently becomes a challenge. We developed a new software environment that allows investigators to integrate multimodal imaging data, hierarchical brain ontology systems, on-line genetic and phylogenic databases, and 3D virtual data reconstruction models. The Laboratory of Neuro Imaging visualization environment (LONI Viz) consists of the following components: a sectional viewer for imaging data, an interactive 3D display for surface and volume rendering of imaging data, a brain ontology viewer, and an external database query system. The synchronization of all components according to stereotaxic coordinates, region name, hierarchical ontology, and genetic labels is achieved via a comprehensive BrainMapper functionality, which directly maps between position, structure name, database, and functional connectivity information. This environment is freely available, portable, and extensible, and may prove very useful for neurobiologists, neurogeneticists, brain mappers, and for other clinical, pedagogical, and research endeavors.

  5. Visualizing Interstellar's Wormhole

    NASA Astrophysics Data System (ADS)

    James, Oliver; von Tunzelmann, Eugénie; Franklin, Paul; Thorne, Kip S.

    2015-06-01

    Christopher Nolan's science fiction movie Interstellar offers a variety of opportunities for students in elementary courses on general relativity theory. This paper describes such opportunities, including: (i) At the motivational level, the manner in which elementary relativity concepts underlie the wormhole visualizations seen in the movie; (ii) At the briefest computational level, instructive calculations with simple but intriguing wormhole metrics, including, e.g., constructing embedding diagrams for the three-parameter wormhole that was used by our visual effects team and Christopher Nolan in scoping out possible wormhole geometries for the movie; (iii) Combining the proper reference frame of a camera with solutions of the geodesic equation, to construct a light-ray-tracing map backward in time from a camera's local sky to a wormhole's two celestial spheres; (iv) Implementing this map, for example, in Mathematica, Maple or Matlab, and using that implementation to construct images of what a camera sees when near or inside a wormhole; (v) With the student's implementation, exploring how the wormhole's three parameters influence what the camera sees—which is precisely how Christopher Nolan, using our implementation, chose the parameters for Interstellar's wormhole; (vi) Using the student's implementation, exploring the wormhole's Einstein ring and particularly the peculiar motions of star images near the ring, and exploring what it looks like to travel through a wormhole.
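
    As a small worked example in the spirit of item (ii), the embedding diagram of the simple Ellis wormhole (a one-parameter special case, not the three-parameter wormhole used for the film) can be computed in a few lines; revolving the (r, z) profile about the z-axis gives the familiar two-mouthed funnel. The throat radius and sampling range below are arbitrary.

      import numpy as np

      def ellis_embedding(b=1.0, ell_max=4.0, n=41):
          """Embedding profile (l, r(l), z(l)) for the Ellis wormhole
          ds^2 = -dt^2 + dl^2 + (l^2 + b^2) dOmega^2 with throat radius b.
          The embedding condition dz^2 + dr^2 = dl^2 gives dz/dl = b / sqrt(l^2 + b^2),
          hence z = b * arcsinh(l / b)."""
          ell = np.linspace(-ell_max, ell_max, n)   # proper radial distance
          r = np.sqrt(ell**2 + b**2)                # circumferential radius
          z = b * np.arcsinh(ell / b)
          return ell, r, z

      ell, r, z = ellis_embedding()
      print(f"throat radius r(0) = {r[len(r)//2]:.3f}, z in [{z[0]:.2f}, {z[-1]:.2f}]")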

  6. Expanding the Frontiers of Visual Analytics and Visualization

    SciTech Connect

    Dill, John; Earnshaw, Rae; Kasik, David; Vince, John; Wong, Pak C.

    2012-05-31

    Expanding the Frontiers of Visual Analytics and Visualization contains international contributions by leading researchers from within the field. Dedicated to the memory of Jim Thomas, the book begins with the dynamics of evolving a vision based on some of the principles that Jim and colleagues established and in which Jim’s leadership was evident. This is followed by chapters in the areas of visual analytics, visualization, interaction, modelling, architecture, and virtual reality, before concluding with the key area of technology transfer to industry.

  7. An Assessment of Visual Testing

    SciTech Connect

    Cumblidge, Stephen E.; Anderson, Michael T.; Doctor, Steven R.

    2004-11-01

    In response to increasing interest from nuclear utilities in replacing some volumetric examinations of nuclear reactor components with remote visual testing, the Pacific Northwest National Laboratory has examined the capabilities of remote visual testing for the Nuclear Regulatory Commission. This report describes visual testing and explores the visual acuities of the camera systems used to examine nuclear reactor components. The types and sizes of cracks typically found in nuclear reactor components are reviewed. The current standards in visual testing are examined critically, and several suggestions for improving these standards are proposed. Also proposed for future work is a round robin test to determine the effectiveness of visual tests and experimental studies to determine the values for magnification and resolution needed to reliably image very tight cracks.

  8. Are Deaf Students Visual Learners?

    PubMed Central

    Marschark, Marc; Morrison, Carolyn; Lukomski, Jennifer; Borgna, Georgianna; Convertino, Carol

    2013-01-01

    It is frequently assumed that by virtue of their hearing losses, deaf students are visual learners. Deaf individuals have some visual-spatial advantages relative to hearing individuals, but most have been linked to the use of sign language rather than to auditory deprivation. How such cognitive differences might affect academic performance has been investigated only rarely. This study examined relations among deaf college students' language and visual-spatial abilities, mathematics problem solving, and hearing thresholds. Results extended some previous findings and clarified others. Contrary to what might be expected, hearing students exhibited visual-spatial skills equal to or better than deaf students. Scores on a Spatial Relations task were associated with better mathematics problem solving. Relations among the several variables, however, suggested that deaf students are no more likely to be visual learners than hearing students and that their visual-spatial skill may be related more to their hearing than to sign language skills. PMID:23750095

  9. Visualization of Magnetically Confined Plasmas

    SciTech Connect

    J.L.V. Lewandowski

    1999-12-10

    With the rapid developments in experimental and theoretical fusion energy research towards more geometric details, visualization plays an increasingly important role. In this paper we will give an overview of how visualization can be used to compare and contrast some different configurations for future fusion reactors. Specifically we will focus on the stellarator and tokamak concepts. In order to gain understanding of the underlying fundamental differences and similarities these two competing concepts are compared and contrasted by visualizing some key attributes.

  10. Information efficiency in visual communication

    NASA Technical Reports Server (NTRS)

    Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1993-01-01

    This paper evaluates the quantization process in the context of the end-to-end performance of the visual-communication channel. Results show that the trade-off between data transmission and visual quality revolves around the information in the acquired signal, not around its energy. Improved information efficiency is gained by frequency dependent quantization that maintains the information capacity of the channel and reduces the entropy of the encoded signal. Restorations with energy bit-allocation lose both in sharpness and clarity relative to restorations with information bit-allocation. Thus, quantization with information bit-allocation is preferred for high information efficiency and visual quality in optimized visual communication.

  11. Visual field defects in onchocerciasis.

    PubMed Central

    Thylefors, B; Tønjum, A M

    1978-01-01

    Lesions in the posterior segment of the eye in onchocerciasis may give visual field defects, but so far no detailed investigation has been done to determine the functional visual loss. Examination of the visual fields in 18 selected cases of onchocerciasis by means of a tangent screen test revealed important visual field defects associated with lesions in the posterior segment of the eye. Involvement of the optic nerve seemed to be important, giving rise to severely constricted visual fields. Cases of postneuritic optic atrophy showed a very uniform pattern of almost completely constricted visual fields, with only a 5 to 10 degree central rest spared. Papillitis gave a similar severe constriction of the visual fields. The pattern of visual fields associated with optic neuropathy in onchocerciasis indicates that a progressive lesion of the optic nerve from the periphery may be responsible for the loss of vision. The visual field defects in onchocerciasis constitute a serious handicap, which must be taken into consideration when estimating the socioeconomic importance of the disease. PMID:678499

  12. Scientific Visualization, Seeing the Unseeable

    ScienceCinema

    LBNL

    2016-07-12

    June 24, 2008 Berkeley Lab lecture: Scientific visualization transforms abstract data into readily comprehensible images, provides a vehicle for "seeing the unseeable," and plays a central role in both experimental and computational sciences. Wes Bethel, who heads the Scientific Visualization Group in the Computational Research Division, presents an overview of visualization and computer graphics, current research challenges, and future directions for the field.

  13. The Statistics of Visual Representation

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.; Rahman, Zia-Ur; Woodell, Glenn A.

    2002-01-01

    The experience of retinex image processing has prompted us to reconsider fundamental aspects of imaging and image processing. Foremost is the idea that a good visual representation requires a non-linear transformation of the recorded (approximately linear) image data. Further, this transformation appears to converge on a specific distribution. Here we investigate the connection between numerical and visual phenomena. Specifically the questions explored are: (1) Is there a well-defined consistent statistical character associated with good visual representations? (2) Does there exist an ideal visual image? And (3) what are its statistical properties?

  14. Visual Computing Environment Workshop

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles (Compiler)

    1998-01-01

    The Visual Computing Environment (VCE) is a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis.

  15. Distributed Visualization Project

    NASA Technical Reports Server (NTRS)

    Craig, Douglas; Conroy, Michael; Kickbusch, Tracey; Mazone, Rebecca

    2016-01-01

    Distributed Visualization allows anyone, anywhere to see any simulation at any time. Development focuses on algorithms, software, data formats, data systems and processes to enable sharing simulation-based information across temporal and spatial boundaries without requiring stakeholders to possess highly-specialized and very expensive display systems. It also introduces abstraction between the native and shared data, which allows teams to share results without giving away proprietary or sensitive data. The initial implementation of this capability is the Distributed Observer Network (DON) version 3.1. DON 3.1 is available for public release in the NASA Software Store (https://software.nasa.gov/software/KSC-13775) and works with version 3.0 of the Model Process Control specification (an XML Simulation Data Representation and Communication Language) to display complex graphical information and associated Meta-Data.

  16. Visual observations from space

    NASA Technical Reports Server (NTRS)

    Garriott, O. K.

    1979-01-01

    Visual observations from space reveal a number of fascinating natural phenomena of interest to meteorologists and aeronomists, such as aurorae, airglow, aerosol layers, lightning, and atmospheric refraction effects. Man-made radiation sources, including city lights and laser beacons, are also of considerable interest. Of course, the most widely used space observations are of the large-scale weather systems viewed each day by millions of people on their local television. From lower altitudes than the geostationary meteorological satellite orbits, oblique and overlapping stereo views are possible, allowing height information to be obtained directly, which is often a key element in the use of the photographs for research purposes. However, this note will discuss only the less common observations mentioned at the beginning of this paragraph.

  17. [Visual synthesis of speech].

    PubMed

    Blanco, Y; Villanueva, A; Cabeza, R

    2000-01-01

    The eyes may become the sole means of communication for severely disabled patients. With the appropriate technology it is possible to interpret eye movements successfully, increasing the possibilities for patient communication through speech synthesisers. A system of these characteristics has to include a speech synthesiser, an interface for the user to construct the text, and a method of gaze interpretation. In this way the user can manage the system solely with his or her eyes. This review sets out the state of the art of the three modules that make up a system of this type, and finally it introduces the speech synthesis system (Síntesis Visual del Habla [SiVHa]), which is being developed at the Public University of Navarra.

  18. Visual observations over oceans

    NASA Technical Reports Server (NTRS)

    Terry, R. D.

    1979-01-01

    Important factors in locating, identifying, describing, and photographing ocean features from space are presented. On the basis of crew comments and other findings, the following recommendations can be made for Earth observations on Space Shuttle missions: (1) flyover exercises must include observations and photography of both temperate and tropical/subtropical waters; (2) sunglint must be included during some observations of ocean features; (3) imaging remote sensors should be used together with conventional photographic systems to document visual observations; (4) greater consideration must be given to scheduling earth observation targets likely to be obscured by clouds; and (5) an annotated photographic compilation of ocean features can be used as a training aid before the mission and as a reference book during space flight.

  19. Mismatch Receptive Fields in Mouse Visual Cortex.

    PubMed

    Zmarz, Pawel; Keller, Georg B

    2016-11-23

    In primary visual cortex, a subset of neurons responds when a particular stimulus is encountered in a certain location in visual space. This activity can be modeled using a visual receptive field. In addition to visually driven activity, there are neurons in visual cortex that integrate visual and motor-related input to signal a mismatch between actual and predicted visual flow. Here we show that these mismatch neurons have receptive fields and signal a local mismatch between actual and predicted visual flow in restricted regions of visual space. These mismatch receptive fields are aligned to the retinotopic map of visual cortex and are similar in size to visual receptive fields. Thus, neurons with mismatch receptive fields signal local deviations of actual visual flow from visual flow predicted based on self-motion and could therefore underlie the detection of objects moving relative to the visual flow caused by self-motion. VIDEO ABSTRACT.

  20. Visual PEF Reader - VIPER

    NASA Technical Reports Server (NTRS)

    Luo, Victor; Khanampornpan, Teerapat; Boehmer, Rudy A.; Kim, Rachel Y.

    2011-01-01

    This software graphically displays all pertinent information from a Predicted Events File (PEF) using the Java Swing framework, which allows for multi-platform support. The PEF is hard to weed through when looking for specific information, and the MRO (Mars Reconnaissance Orbiter) Mission Planning & Sequencing Team (MPST) wanted a different way to visualize the data. This tool provides the team with a visual way of reviewing and error-checking the sequence product. The front end of the tool contains much of the aesthetically appealing material for viewing. The time stamp is displayed in the top left corner, and highlighted details are displayed in the bottom left corner. The time bar stretches along the top of the window, and the rest of the space is allotted for blocks and step functions. A preferences window is used to control the layout of the sections along with the ability to choose the color and size of the blocks. Double-clicking on a block will show the information contained within the block. Zooming to a certain level will graphically display that information as an overlay on the block itself. Other functions include using hotkeys to navigate, an option to jump to a specific time, enabling a vertical line, and double-clicking to zoom in/out. The back end involves a configuration file that allows a more experienced user to pre-define the structure of a block, a single event, or a step function. The individual will have to determine what information is important within each block and what actually defines the beginning and end of a block. This gives the user much more flexibility in terms of what the tool is searching for. In addition to the configurability, all the settings in the preferences window are saved in the configuration file as well.

  1. Moon Color Visualizations

    NASA Technical Reports Server (NTRS)

    1990-01-01

    These color visualizations of the Moon were obtained by the Galileo spacecraft as it left the Earth after completing its first Earth Gravity Assist. The image on the right was acquired at 6:47 p.m. PST Dec. 8, 1990, from a distance of almost 220,000 miles, while that on the left was obtained at 9:35 a.m. PST Dec. 9, at a range of more than 350,000 miles. On the right, the nearside of the Moon and about 30 degrees of the far side (left edge) are visible. In the full disk on the left, a little less than half the nearside and more than half the far side (to the right) are visible. The color composites used images taken through the violet and two near infrared filters. The visualizations depict spectral properties of the lunar surface known from analysis of returned samples to be related to composition or weathering of surface materials. The greenish-blue region at the upper right in the full disk and the upper part of the right hand picture is Oceanus Procellarum. The deeper blue mare regions here and elsewhere are relatively rich in titanium, while the greens, yellows and light oranges indicate basalts low in titanium but rich in iron and magnesium. The reds (deep orange in the right hand picture) are typically cratered highlands relatively poor in titanium, iron and magnesium. In the full disk picture on the left, the yellowish area to the south is part of the newly confirmed South Pole Aitken basin, a large circular depression some 1,200 miles across, perhaps rich in iron and magnesium. Analysis of Apollo lunar samples provided the basis for calibration of this spectral map; Galileo data, in turn, permit broad extrapolation of the Apollo based composition information, reaching ultimately to the far side of the Moon.

  2. VMD: visual molecular dynamics.

    PubMed

    Humphrey, W; Dalke, A; Schulten, K

    1996-02-01

    VMD is a molecular graphics program designed for the display and analysis of molecular assemblies, in particular biopolymers such as proteins and nucleic acids. VMD can simultaneously display any number of structures using a wide variety of rendering styles and coloring methods. Molecules are displayed as one or more "representations," in which each representation embodies a particular rendering method and coloring scheme for a selected subset of atoms. The atoms displayed in each representation are chosen using an extensive atom selection syntax, which includes Boolean operators and regular expressions. VMD provides a complete graphical user interface for program control, as well as a text interface using the Tcl embeddable parser to allow for complex scripts with variable substitution, control loops, and function calls. Full session logging is supported, which produces a VMD command script for later playback. High-resolution raster images of displayed molecules may be produced by generating input scripts for use by a number of photorealistic image-rendering applications. VMD has also been expressly designed with the ability to animate molecular dynamics (MD) simulation trajectories, imported either from files or from a direct connection to a running MD simulation. VMD is the visualization component of MDScope, a set of tools for interactive problem solving in structural biology, which also includes the parallel MD program NAMD, and the MDCOMM software used to connect the visualization and simulation programs. VMD is written in C++, using an object-oriented design; the program, including source code and extensive documentation, is freely available via anonymous ftp and through the World Wide Web.

  3. ERP Evidence of Visualization at Early Stages of Visual Processing

    ERIC Educational Resources Information Center

    Page, Jonathan W.; Duhamel, Paul; Crognale, Michael A.

    2011-01-01

    Recent neuroimaging research suggests that early visual processing circuits are activated similarly during visualization and perception but have not demonstrated that the cortical activity is similar in character. We found functional equivalency in cortical activity by recording evoked potentials while color and luminance patterns were viewed and…

  4. Strengthening the Visual Element in Visual Media Materials.

    ERIC Educational Resources Information Center

    Wilhelm, R. Dwight

    1996-01-01

    Describes how to more effectively communicate the visual element in video and audiovisual materials. Discusses identifying a central topic, developing the visual content without words, preparing a storyboard, testing its effectiveness on people who are unacquainted with the production, and writing the script with as few words as possible. (AEF)

  5. Sounds activate visual cortex and improve visual discrimination.

    PubMed

    Feng, Wenfeng; Störmer, Viola S; Martinez, Antigona; McDonald, John J; Hillyard, Steven A

    2014-07-16

    A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound.

  6. Sounds Activate Visual Cortex and Improve Visual Discrimination

    PubMed Central

    Störmer, Viola S.; Martinez, Antigona; McDonald, John J.; Hillyard, Steven A.

    2014-01-01

    A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. PMID:25031419

  7. Data visualization methods, data visualization devices, data visualization apparatuses, and articles of manufacture

    DOEpatents

    Turner, Alan E.; Crow, Vernon L.; Payne, Deborah A.; Hetzler, Elizabeth G.; Cook, Kristin A.; Cowley, Wendy E.

    2015-06-30

    Data visualization methods, data visualization devices, data visualization apparatuses, and articles of manufacture are described according to some aspects. In one aspect, a data visualization method includes accessing a plurality of initial documents at a first moment in time, first processing the initial documents providing processed initial documents, first identifying a plurality of first associations of the initial documents using the processed initial documents, generating a first visualization depicting the first associations, accessing a plurality of additional documents at a second moment in time after the first moment in time, second processing the additional documents providing processed additional documents, second identifying a plurality of second associations of the additional documents and at least some of the initial documents, wherein the second identifying comprises identifying using the processed initial documents and the processed additional documents, and generating a second visualization depicting the second associations.
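    The patent claim leaves the "processing" and "associations" steps abstract. Purely as a hedged illustration of the two-moment workflow (and not the patented method itself), document associations could be approximated by TF-IDF cosine similarity and recomputed when additional documents arrive; the example documents and threshold below are invented for the sketch.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        def associate(documents, threshold=0.3):
            # Pairs of documents whose TF-IDF cosine similarity exceeds a threshold --
            # one simple stand-in for the "associations" described above.
            tfidf = TfidfVectorizer(stop_words="english").fit_transform(documents)
            sims = cosine_similarity(tfidf)
            n = sims.shape[0]
            return [(i, j, sims[i, j]) for i in range(n)
                    for j in range(i + 1, n) if sims[i, j] > threshold]

        initial_docs = ["solar wind drives magnetospheric storms",
                        "magnetospheric storm models and coupling"]
        first_view = associate(initial_docs)                  # first moment in time
        later_docs = initial_docs + ["new solar wind observations from orbit"]
        second_view = associate(later_docs)                   # after additional documents arrive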

  8. Visual feature extraction and establishment of visual tags in the intelligent visual internet of things

    NASA Astrophysics Data System (ADS)

    Zhao, Yiqun; Wang, Zhihui

    2015-12-01

    The Internet of things (IOT) is a kind of intelligent network that can be used to locate, track, identify, and supervise people and objects. One of the important core technologies of the intelligent visual internet of things (IVIOT) is the intelligent visual tag system. In this paper, we investigate visual feature extraction and the establishment of visual tags for human faces based on the ORL face database. First, we use the principal component analysis (PCA) algorithm for face feature extraction; we then adopt a support vector machine (SVM) for classification and face recognition, and finally establish a visual tag for each classified face. An experiment on a group of face images shows that the proposed algorithm performs well and can display the visual tags of objects conveniently.
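    A minimal sketch of the PCA-plus-SVM pipeline described above, using scikit-learn and its bundled copy of the ORL (Olivetti) faces. The component count and kernel are illustrative assumptions rather than the paper's settings, and the predicted identity simply stands in for the content of a visual tag.

        from sklearn.datasets import fetch_olivetti_faces
        from sklearn.decomposition import PCA
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        faces = fetch_olivetti_faces()                        # the ORL face database
        X_train, X_test, y_train, y_test = train_test_split(
            faces.data, faces.target, test_size=0.25,
            stratify=faces.target, random_state=0)

        # PCA ("eigenface") features, then an SVM classifier; the predicted class
        # label is what would be written into the visual tag for a recognized face.
        tagger = make_pipeline(PCA(n_components=60, whiten=True), SVC(kernel="rbf"))
        tagger.fit(X_train, y_train)
        print("tagging accuracy:", tagger.score(X_test, y_test))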

  9. Neural Anatomy of Primary Visual Cortex Limits Visual Working Memory.

    PubMed

    Bergmann, Johanna; Genç, Erhan; Kohler, Axel; Singer, Wolf; Pearson, Joel

    2016-01-01

    Despite the immense processing power of the human brain, working memory storage is severely limited, and the neuroanatomical basis of these limitations has remained elusive. Here, we show that the stable storage limits of visual working memory for over 9 s are bound by the precise gray matter volume of primary visual cortex (V1), defined by fMRI retinotopic mapping. Individuals with a bigger V1 tended to have greater visual working memory storage. This relationship was present independently for both surface size and thickness of V1 but absent in V2, V3 and for non-visual working memory measures. Additional whole-brain analyses confirmed the specificity of the relationship to V1. Our findings indicate that the size of primary visual cortex plays a critical role in limiting what we can hold in mind, acting like a gatekeeper in constraining the richness of working mental function.

  10. Relating Standardized Visual Perception Measures to Simulator Visual System Performance

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K.; Sweet, Barbara T.

    2013-01-01

    Human vision is quantified through the use of standardized clinical vision measurements. These measurements typically include visual acuity (near and far), contrast sensitivity, color vision, stereopsis (a.k.a. stereo acuity), and visual field periphery. Simulator visual system performance is specified in terms such as brightness, contrast, color depth, color gamut, gamma, resolution, and field-of-view. How do these simulator performance characteristics relate to the perceptual experience of the pilot in the simulator? In this paper, visual acuity and contrast sensitivity will be related to simulator visual system resolution, contrast, and dynamic range; similarly, color vision will be related to color depth/color gamut. Finally, we will consider how some characteristics of human vision not typically included in current clinical assessments could be used to better inform simulator requirements (e.g., relating dynamic characteristics of human vision to update rate and other temporal display characteristics).
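    As a back-of-the-envelope illustration of relating acuity to display resolution (the display numbers below are assumptions for the example, not values from the paper), 20/20 visual acuity is conventionally associated with resolving detail of about one arcminute, which can be compared against a display channel's pixels per degree:

        def pixels_per_degree(resolution_px, field_of_view_deg):
            return resolution_px / field_of_view_deg

        def arcmin_per_pixel(resolution_px, field_of_view_deg):
            return 60.0 * field_of_view_deg / resolution_px

        # Hypothetical display channel: 3840 pixels spanning a 60-degree field of view.
        print(pixels_per_degree(3840, 60))   # 64 pixels per degree
        print(arcmin_per_pixel(3840, 60))    # ~0.94 arcmin per pixel, close to the
                                             # 1-arcminute detail size associated with
                                             # 20/20 acuity (a Nyquist argument would
                                             # ask for roughly half that)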

  11. From Quantification to Visualization: A Taxonomy of Uncertainty Visualization Approaches

    PubMed Central

    Potter, Kristin; Rosen, Paul; Johnson, Chris R.

    2014-01-01

    Quantifying uncertainty is an increasingly important topic across many domains. The uncertainties present in data come with many diverse representations having originated from a wide variety of disciplines. Communicating these uncertainties is a task often left to visualization without clear connection between the quantification and visualization. In this paper, we first identify frequently occurring types of uncertainty. Second, we connect those uncertainty representations to ones commonly used in visualization. We then look at various approaches to visualizing this uncertainty by partitioning the work based on the dimensionality of the data and the dimensionality of the uncertainty. We also discuss noteworthy exceptions to our taxonomy along with future research directions for the uncertainty visualization community. PMID:25663949

  12. Visual influences on primate encephalization.

    PubMed

    Kirk, E Christopher

    2006-07-01

    Primates differ from most other mammals in having relatively large brains. As a result, numerous comparative studies have attempted to identify the selective variables influencing primate encephalization. However, none have examined the effect of the total amount of visual input on relative brain size. According to Jerison's principle of proper mass, functional areas of the brain devoted primarily to processing visual information should exhibit increases in size when the amount of visual input to those areas increases. As a result, the total amount of visual input to the brain could exert a large influence on encephalization because visual areas comprise a large proportion of total brain mass in primates. The goal of this analysis is to test the expectation of a direct relationship between visual input and encephalization using optic foramen size and optic nerve size as proxies for total visual input. Data were collected for a large comparative sample of primates and carnivorans, and three primary analyses were undertaken. First, the relationship between relative proxies for visual input and relative endocranial volume were examined using partial correlations and phylogenetic comparative methods. Second, to examine the generality of the results derived for extant primates, a parallel series of partial correlation and comparative analyses were undertaken using data for carnivorans. Third, data for various Eocene and Oligocene primates were compared with those for living primates in order to determine whether the fossil taxa demonstrate a similar relationship between relative brain size and visual input. All three analyses confirm the expectations of proper mass and favor the conclusion that the amount of visual input has been a major influence on the evolution of relative brain size in both primates and carnivorans. Furthermore, this study suggests that differences in visual input may partly explain (1) the high encephalization of primates relative to the primitive
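    As a hedged sketch of the partial-correlation step mentioned above (not the study's actual analysis, which also applies phylogenetic comparative methods and specific covariates), a partial correlation can be computed by regressing a shared control variable out of both measures:

        import numpy as np

        def partial_correlation(x, y, control):
            # Correlation between x and y (e.g., a proxy for visual input and
            # endocranial volume) after removing a linear effect of a control
            # variable (e.g., body mass). The variable choices here are hypothetical.
            x, y, control = (np.asarray(v, dtype=float) for v in (x, y, control))
            design = np.column_stack([np.ones_like(control), control])
            def residuals(v):
                coeffs, *_ = np.linalg.lstsq(design, v, rcond=None)
                return v - design @ coeffs
            return np.corrcoef(residuals(x), residuals(y))[0, 1]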

  13. Heredity Factors in Spatial Visualization.

    ERIC Educational Resources Information Center

    Vandenberg, S. G.

    Spatial visualization is not yet clearly understood. Some researchers have concluded that two factors or abilities are involved, spatial orientation and spatial visualization. Different definitions and different tests have been proposed for these two abilities. Several studies indicate that women generally perform more poorly on spatial tests than…

  14. Visual Aspects of Written Composition.

    ERIC Educational Resources Information Center

    Autrey, Ken

    While attempting to refine and redefine the composing process, rhetoric teachers have overlooked research showing how the brain's visual and verbal components interrelate. Recognition of the brain's visual potential can mean more than the use of media with the written word--it also has implications for the writing process itself. For example,…

  15. VISUAL TRAINING AND READING PERFORMANCE.

    ERIC Educational Resources Information Center

    ANAPOLLE, LOUIS

    VISUAL TRAINING IS DEFINED AS THE FIELD OF OCULAR REEDUCATION AND REHABILITATION OF THE VARIOUS VISUAL SKILLS THAT ARE OF PARAMOUNT IMPORTANCE TO SCHOOL ACHIEVEMENT, AUTOMOBILE DRIVING, OUTDOOR SPORTS ACTIVITIES, AND OCCUPATIONAL PURSUITS. A HISTORY OF ORTHOPTICS, THE SUGGESTED NAME FOR THE ENTIRE FIELD OF OCULAR REEDUCATION, IS GIVEN. READING AS…

  16. A Virtual World of Visualization

    NASA Technical Reports Server (NTRS)

    1998-01-01

    In 1990, Sterling Software, Inc., developed the Flow Analysis Software Toolkit (FAST) for NASA Ames on contract. FAST is a workstation based modular analysis and visualization tool. It is used to visualize and animate grids and grid oriented data, typically generated by finite difference, finite element and other analytical methods. FAST is now available through COSMIC, NASA's software storehouse.

  17. Visual Recognition Memory across Contexts

    ERIC Educational Resources Information Center

    Jones, Emily J. H.; Pascalis, Olivier; Eacott, Madeline J.; Herbert, Jane S.

    2011-01-01

    In two experiments, we investigated the development of representational flexibility in visual recognition memory during infancy using the Visual Paired Comparison (VPC) task. In Experiment 1, 6- and 9-month-old infants exhibited recognition when familiarization and test occurred in the same room, but showed no evidence of recognition when…

  18. Visual Literacy and Message Design

    ERIC Educational Resources Information Center

    Pettersson, Rune

    2009-01-01

    Many researchers from different disciplines have explained their views and interpretations and written about visual literacy from their various perspectives. Visual literacy may be applied in almost all areas such as advertising, anatomy, art, biology, business presentations, communication, education, engineering, etc. (Pettersson, 2002a). Despite…

  19. Are Deaf Students Visual Learners?

    ERIC Educational Resources Information Center

    Marschark, Marc; Morrison, Carolyn; Lukomski, Jennifer; Borgna, Georgianna; Convertino, Carol

    2013-01-01

    It is frequently assumed that by virtue of their hearing losses, deaf students are visual learners. Deaf individuals have some visual-spatial advantages relative to hearing individuals, but most have been linked to use of sign language rather than auditory deprivation. How such cognitive differences might affect academic performance has been…

  20. Spatial Visualization by Isometric View

    ERIC Educational Resources Information Center

    Yue, Jianping

    2007-01-01

    Spatial visualization is a fundamental skill in technical graphics and engineering designs. From conventional multiview drawing to modern solid modeling using computer-aided design, visualization skills have always been essential for representing three-dimensional objects and assemblies. Researchers have developed various types of tests to measure…

  1. Visual Marking Inhibits Singleton Capture

    ERIC Educational Resources Information Center

    Olivers, Christian N. L.; Humphreys, Glyn W.

    2003-01-01

    This paper is concerned with how we prioritize the selection of new objects in visual scenes. We present four experiments investigating the effects of distractor previews on visual search through new objects. Participants viewed a set of to-be-ignored nontargets, with the task being to search for a target in a second set, added to the first after…

  2. Why Visual Sequences Come First.

    ERIC Educational Resources Information Center

    Barley, Steven D.

    Visual sequences should be the first visual literacy exercises for reasons that are physio-psychological, semantic, and curricular. In infancy, vision is undifferentiated and undetailed. The number of details a child sees increases with age. Therefore, a series of pictures, rather than one photograph which tells a whole story, is more appropriate…

  3. Learning from Balance Sheet Visualization

    ERIC Educational Resources Information Center

    Tanlamai, Uthai; Soongswang, Oranuj

    2011-01-01

    This exploratory study examines alternative visuals and their effect on the level of learning of balance sheet users. Executive and regular classes of graduate students majoring in information technology in business were asked to evaluate the extent of acceptance and enhanced capability of these alternative visuals toward their learning…

  4. Assessment of the Visually Impaired.

    ERIC Educational Resources Information Center

    Chase, Joan B.

    1985-01-01

    Assessment of visually impaired students is traced historically, and current practices in visual functioning, movement and spatial awareness, cognition, tactile performance, and emotional/social development are noted. Federal requirements for assessment are discussed and recommended assessment practices for six major assessment domains are listed.…

  5. Scaffolding Learning from Molecular Visualizations

    ERIC Educational Resources Information Center

    Chang, Hsin-Yi; Linn, Marcia C.

    2013-01-01

    Powerful online visualizations can make unobservable scientific phenomena visible and improve student understanding. Instead, they often confuse or mislead students. To clarify the impact of molecular visualizations for middle school students we explored three design variations implemented in a Web-based Inquiry Science Environment (WISE) unit on…

  6. Mapping the Visual Communication Field.

    ERIC Educational Resources Information Center

    Moriarty, Sandra E.

    This paper discusses visual communication theory. After an examination of the literature, this study built a more extensive bibliography of visual communication in general--theory, research, teaching--as well as a taxonomy that was better grounded. A 95-page bibliography was developed for works from a number of scholarly journals dealing with…

  7. Haptic Visual Discrimination and Intelligence.

    ERIC Educational Resources Information Center

    McCarron, Lawrence; Horn, Paul W.

    1979-01-01

    The Haptic Visual Discrimination Test of tactual-visual information processing was administered to 39 first-graders, along with standard intelligence, academic potential, and spatial integration tests. Results revealed consistently significant associations between the importance of parieto-occipital areas for organizing sensory data as well as for…

  8. Visual Testing: Searching for Guidelines.

    ERIC Educational Resources Information Center

    Van Gendt, Kitty; Verhagen, Plon

    An experiment was conducted to investigate the influence of the variables "realism" and "context" on the performance of biology students on a visual test about the anatomy of a rat. The instruction was primarily visual with additional verbal information like Latin names and practical information about the learning task: dissecting a rat to gain…

  9. Perceptual Load Alters Visual Excitability

    ERIC Educational Resources Information Center

    Carmel, David; Thorne, Jeremy D.; Rees, Geraint; Lavie, Nilli

    2011-01-01

    Increasing perceptual load reduces the processing of visual stimuli outside the focus of attention, but the mechanism underlying these effects remains unclear. Here we tested an account attributing the effects of perceptual load to modulations of visual cortex excitability. In contrast to stimulus competition accounts, which propose that load…

  10. Visualization of Concurrent Program Executions

    NASA Technical Reports Server (NTRS)

    Artho, Cyrille; Havelund, Klaus; Honiden, Shinichi

    2007-01-01

    Various program analysis techniques are efficient at discovering failures and properties. However, it is often difficult to evaluate results, such as program traces. This calls for abstraction and visualization tools. We propose an approach based on UML sequence diagrams, addressing shortcomings of such diagrams for concurrency. The resulting visualization is expressive and provides all the necessary information at a glance.

  11. VISUAL DEFICIENCIES AND READING DISABILITY.

    ERIC Educational Resources Information Center

    ROSEN, CARL L.

    THE ROLE OF VISUAL SENSORY DEFICIENCIES IN THE CAUSATION OF READING DISABILITY IS DISCUSSED. PREVIOUS AND CURRENT RESEARCH STUDIES DEALING WITH SPECIFIC VISUAL PROBLEMS WHICH HAVE BEEN FOUND TO BE NEGATIVELY RELATED TO SUCCESSFUL READING ACHIEVEMENT ARE LISTED--(1) FARSIGHTEDNESS, (2) ASTIGMATISM, (3) BINOCULAR INCOORDINATIONS, AND (4) FUSIONAL…

  12. Visual quality beyond artifact visibility

    NASA Astrophysics Data System (ADS)

    Redi, Judith A.

    2013-03-01

    The electronic imaging community has devoted considerable effort to developing technologies that can predict the visual quality of images and videos, as a basis for delivering optimal visual quality to the user. These systems have been based for the most part on a visibility-centric approach, which assumes that the more visible the artifacts, the higher the annoyance they provoke and the lower the visual quality. Despite the remarkable results achieved with this approach, a number of recent studies have suggested that it has limitations, and that other factors might influence the overall quality impression of an image or video, depending on cognitive and affective mechanisms that work on top of perception. In particular, interest in the visual content, engagement, and context of usage have been found to affect the overall quality impression of the image/video. In this paper, we review these studies and explore the impact that affective and cognitive processes have on visual quality. In addition, as a case study, we present the results of an experiment investigating the impact of aesthetic appeal on visual quality, and we show that users tend to be more demanding in terms of visual quality when judging beautiful images.

  13. Visualization of complex automotive data.

    PubMed

    Stevens, Jeffrey A

    2007-01-01

    Making complicated data easier to understand has always been a challenge. Four types of visualization applications (CAD, generalized, specialized, and custom) have successfully been used by automotive manufacturers such as General Motors to help meet this goal. Here are some ways that common processes can be developed for all types of visualization.

  14. Lethally Innocuous Visual Display Units.

    ERIC Educational Resources Information Center

    Cawkell, A. E.

    1991-01-01

    Examines conflicting studies which report on the effects of Visual Display Units (VDU) on health. Five aspects of alleged VDU effects are discussed: (1) radiation or emission effects; (2) visual effects; (3) postural effects; (4) effects on the arms and fingers; and (5) ultrasonic noise from scanning components. (36 references) (MAB)

  15. Manipulating and Visualizing Proteins

    SciTech Connect

    Simon, Horst D.

    2003-12-05

    ProteinShop Gives Researchers a Hands-On Tool for Manipulating, Visualizing Protein Structures. The Human Genome Project and other biological research efforts are creating an avalanche of new data about the chemical makeup and genetic codes of living organisms. But in order to make sense of this raw data, researchers need software tools which let them explore and model data in a more intuitive fashion. With this in mind, researchers at Lawrence Berkeley National Laboratory and the University of California, Davis, have developed ProteinShop, a visualization and modeling program which allows researchers to manipulate protein structures with pinpoint control, guided in large part by their own biological and experimental instincts. Biologists have spent the last half century trying to unravel the ''protein folding problem,'' which refers to the way chains of amino acids physically fold themselves into three-dimensional proteins. This final shape, which resembles a crumpled ribbon or piece of origami, is what determines how the protein functions and translates genetic information. Understanding and modeling this geometrically complex formation is no easy matter. ProteinShop takes a given sequence of amino acids and uses visualization guides to help generate predictions about the secondary structures, identifying alpha helices and flat beta strands, and the coil regions that bind them. Once secondary structures are in place, researchers can twist and turn these pre-configurations until they come up with a number of possible tertiary structure conformations. In turn, these are fed into a computationally intensive optimization procedure that tries to find the final, three-dimensional protein structure. Most importantly, ProteinShop allows users to add human knowledge and intuition to the protein structure prediction process, thus bypassing bad configurations that would otherwise be fruitless for optimization. This saves compute cycles and accelerates the entire process, so

  16. NASA Planetary Visualization Tool

    NASA Astrophysics Data System (ADS)

    Hogan, P.; Kim, R.

    2004-12-01

    NASA World Wind allows one to zoom from satellite altitude into any place on Earth, leveraging the combination of high resolution LandSat imagery and SRTM elevation data to experience Earth in visually rich 3D, just as if they were really there. NASA World Wind combines LandSat 7 imagery with Shuttle Radar Topography Mission (SRTM) elevation data, for a dramatic view of the Earth at eye level. Users can literally fly across the world's terrain from any location in any direction. Particular focus was put into the ease of usability so people of all ages can enjoy World Wind. All one needs to control World Wind is a two button mouse. Additional guides and features can be accessed though a simplified menu. Navigation is automated with single clicks of a mouse as well as the ability to type in any location and automatically zoom to it. NASA World Wind was designed to run on recent PC hardware with the same technology used by today's 3D video games. NASA World Wind delivers the NASA Blue Marble, spectacular true-color imagery of the entire Earth at 1-kilometer-per-pixel. Using NASA World Wind, you can continue to zoom past Blue Marble resolution to seamlessly experience the extremely detailed mosaic of LandSat 7 data at an impressive 15-meters-per-pixel resolution. NASA World Wind also delivers other color bands such as the infrared spectrum. The NASA Scientific Visualization Studio at Goddard Space Flight Center (GSFC) has produced a set of visually intense animations that demonstrate a variety of subjects such as hurricane dynamics and seasonal changes across the globe. NASA World Wind takes these animations and plays them directly on the world. The NASA Moderate Resolution Imaging Spectroradiometer (MODIS) produces a set of time relevant planetary imagery that's updated every day. MODIS catalogs fires, floods, dust, smoke, storms and volcanic activity. NASA World Wind produces an easily customized view of this information and marks them directly on the globe. When one

  17. Tensor visualizations in computational geomechanics

    NASA Astrophysics Data System (ADS)

    Jeremić, Boris; Scheuermann, Gerik; Frey, Jan; Yang, Zhaohui; Hamann, Bernd; Joy, Kenneth I.; Hagen, Hans

    2002-08-01

    We present a novel technique for visualizing tensors in three-dimensional (3D) space. Of particular interest is the visualization of stress tensors resulting from 3D numerical simulations in computational geomechanics. To this end we present three different approaches to visualizing tensors in 3D space, namely hedgehogs, hyperstreamlines, and hyperstreamsurfaces. We also present a number of examples related to stress distributions in 3D solids subjected to single loads and load couples. In addition, we present stress visualizations resulting from single-pile and pile-group computations. The main objective of this work is to investigate various techniques for visualizing general Cartesian tensors of rank 2 and their application to geomechanics problems.
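    All three glyph types mentioned above are built on the eigenstructure of the stress tensor. A hedged NumPy sketch of that shared first step follows; the data layout is an assumption of the example, not the paper's format.

        import numpy as np

        def hedgehog_glyphs(stress_field):
            # stress_field: iterable of (point, sigma) pairs, where sigma is a
            # 3x3 symmetric stress tensor sampled at that point. Each glyph is the
            # set of principal directions scaled by the principal stresses.
            glyphs = []
            for point, sigma in stress_field:
                values, vectors = np.linalg.eigh(sigma)     # principal stresses/directions
                glyphs.append((point, vectors * values))    # columns = scaled eigenvectors
            return glyphs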

  18. Student Visual Communication of Evolution

    NASA Astrophysics Data System (ADS)

    Oliveira, Alandeom W.; Cook, Kristin

    2016-05-01

    Despite growing recognition of the importance of visual representations to science education, previous research has given attention mostly to verbal modalities of evolution instruction. Visual aspects of classroom learning of evolution are yet to be systematically examined by science educators. The present study attends to this issue by exploring the types of evolutionary imagery deployed by secondary students. Our visual design analysis revealed that students resorted to two larger categories of images when visually communicating evolution: spatial metaphors (images that provided a spatio-temporal account of human evolution as a metaphorical "walk" across time and space) and symbolic representations ("icons of evolution" such as personal portraits of Charles Darwin that simply evoked evolutionary theory rather than metaphorically conveying its conceptual contents). It is argued that students need opportunities to collaboratively critique evolutionary imagery and to extend their visual perception of evolution beyond dominant images.

  19. An Enhanced Visualization Process Model for Incremental Visualization.

    PubMed

    Schulz, Hans-Jörg; Angelini, Marco; Santucci, Giuseppe; Schumann, Heidrun

    2016-07-01

    With today's technical possibilities, a stable visualization scenario can no longer be assumed as a matter of course, as the underlying data and the targeted display setup are much more in flux than in traditional scenarios. Incremental visualization approaches are a means to address this challenge, as they permit the user to interact with, steer, and change the visualization at intermediate time points and not just after it has been completed. In this paper, we put forward a model for incremental visualizations that is based on the established Data State Reference Model, but extends it to also represent partitioned data and visualization operators, facilitating intermediate visualization updates. Partitioned data and operators can be used independently or in combination to strike tailored compromises between output quality, shown data quantity, and responsiveness (i.e., frame rates). We showcase the new expressive power of this model by discussing the opportunities and challenges of incremental visualization in general and its usage in a real-world scenario in particular.
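    A toy sketch of the partitioned-data idea (an illustration of the quality/responsiveness trade-off, not the authors' extended Data State Reference Model): data partitions are pushed through a transform one at a time, and an intermediate view is emitted whenever a frame budget elapses.

        import time

        def incremental_pipeline(partitions, transform, render, frame_budget=0.05):
            # Yields intermediate renderings of the data processed so far, trading
            # completeness for responsiveness; the final yield covers all partitions.
            shown = []
            last_update = time.monotonic()
            for partition in partitions:
                shown.extend(transform(partition))
                if time.monotonic() - last_update >= frame_budget:
                    yield render(shown)
                    last_update = time.monotonic()
            yield render(shown)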

  20. 38 CFR 4.76 - Visual acuity.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Visual acuity. 4.76... DISABILITIES Disability Ratings The Organs of Special Sense § 4.76 Visual acuity. (a) Examination of visual acuity. Examination of visual acuity must include the central uncorrected and corrected visual acuity...

  1. Category selectivity in human visual cortex: Beyond visual object recognition.

    PubMed

    Peelen, Marius V; Downing, Paul E

    2017-04-02

    Human ventral temporal cortex shows a categorical organization, with regions responding selectively to faces, bodies, tools, scenes, words, and other categories. Why is this? Traditional accounts explain category selectivity as arising within a hierarchical system dedicated to visual object recognition. For example, it has been proposed that category selectivity reflects the clustering of category-associated visual feature representations, or that it reflects category-specific computational algorithms needed to achieve view invariance. This visual object recognition framework has gained renewed interest with the success of deep neural network models trained to "recognize" objects: these hierarchical feed-forward networks show similarities to human visual cortex, including categorical separability. We argue that the object recognition framework is unlikely to fully account for category selectivity in visual cortex. Instead, we consider category selectivity in the context of other functions such as navigation, social cognition, tool use, and reading. Category-selective regions are activated during such tasks even in the absence of visual input and even in individuals with no prior visual experience. Further, they are engaged in close connections with broader domain-specific networks. Considering the diverse functions of these networks, category-selective regions likely encode their preferred stimuli in highly idiosyncratic formats; representations that are useful for navigation, social cognition, or reading are unlikely to be meaningfully similar to each other and to varying degrees may not be entirely visual. The demand for specific types of representations to support category-associated tasks may best account for category selectivity in visual cortex. This broader view invites new experimental and computational approaches.

  2. Quality of Visual Cue Affects Visual Reweighting in Quiet Standing

    PubMed Central

    Moraes, Renato; de Freitas, Paulo Barbosa; Razuk, Milena; Barela, José Angelo

    2016-01-01

    Sensory reweighting is a characteristic of postural control functioning adopted to accommodate environmental changes. The use of monocular or binocular cues reduces or increases the influence of a moving room on postural sway, suggesting a visual reweighting driven by the quality of the available sensory cues. Because in our previous study visual conditions were set before each trial, participants could adjust the weight of the different sensory systems in an anticipatory manner based upon the reduction in quality of the visual information. Nevertheless, in daily situations this adjustment is a dynamic process that occurs during ongoing movement. The purpose of this study was to examine the effect of visual transitions on the coupling between visual information and body sway at two different distances from the front wall of a moving room. Eleven young adults stood upright inside a moving room at two distances (75 and 150 cm), wearing liquid crystal goggles that allow each lens to switch from opaque to transparent and vice-versa. Participants stood still for five minutes in each trial, and the lens status changed every minute (no vision to binocular vision, no vision to monocular vision, binocular vision to monocular vision, and vice-versa). Results showed that the farther distance and monocular vision reduced the effect of the visual manipulation on postural sway. The effect of the visual transition was condition dependent, with a stronger effect when transitions involved binocular vision than monocular vision. Based upon these results, we conclude that the increased distance from the front wall of the room reduced the effect of visual manipulation on postural sway and that sensory reweighting is stimulus-quality dependent, with binocular vision producing a much stronger down/up-weighting than monocular vision. PMID:26939058

  3. Structure of visual perception.

    PubMed Central

    Zhang, J; Wu, S Y

    1990-01-01

    The response properties of a class of motion detectors (Reichardt detectors) are investigated extensively here. Since the outputs of the detectors, responding to an image undergoing two-dimensional rigid translation, are dependent on both the image velocity and the image intensity distribution, they are nonuniform across the entire image, even though the object is moving rigidly as a whole. To achieve perceptual "oneness" in the rigid motion, we are led to contend that visual perception must take place in a space that is non-Euclidean in nature. We then derive the affine connection and the metric of this perceptual space. The Riemann curvature tensor is identically zero, which means that the perceptual space is intrinsically flat. A geodesic in this space is composed of points of constant image intensity gradient along a certain direction. The deviation of geodesics (which are perceptually "straight") from physically straight lines may offer an explanation to the perceptual distortion of angular relationships such as the Hering illusion. PMID:2235999
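    A minimal numerical sketch of the correlation-type (Reichardt) detector whose outputs the paper analyzes, written for a one-dimensional space-time stimulus; the simple frame delay and the array layout are assumptions of the sketch rather than details taken from the paper.

        import numpy as np

        def reichardt_output(stimulus, dx=1, dt=1):
            # stimulus: array of shape (time, space). Each detector multiplies the
            # delayed signal from one location with the current signal dx away and
            # subtracts the mirror-symmetric product; positive values indicate
            # rightward motion. (The circular wrap for the first dt frames is ignored.)
            delayed = np.roll(stimulus, dt, axis=0)
            rightward = stimulus[:, dx:] * delayed[:, :-dx]
            leftward = delayed[:, dx:] * stimulus[:, :-dx]
            return rightward - leftward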

  4. Visualizing renal primary cilia.

    PubMed

    Deane, James A; Verghese, Elizabeth; Martelotto, Luciano G; Cain, Jason E; Galtseva, Alya; Rosenblum, Norman D; Watkins, D Neil; Ricardo, Sharon D

    2013-03-01

    Renal primary cilia are microscopic sensory organelles found on the apical surface of epithelial cells of the nephron and collecting duct. They are based upon a microtubular cytoskeleton, bounded by a specialized membrane, and contain an array of proteins that facilitate their assembly, maintenance and function. Cilium-based signalling is important for the control of epithelial differentiation and has been implicated in the pathogenesis of various cystic kidney diseases and in renal repair. As such, visualizing renal primary cilia and understanding their composition has become an essential component of many studies of inherited kidney disease and mechanisms of epithelial regeneration. Primary cilia were initially identified in the kidney using electron microscopy and this remains a useful technique for the high resolution examination of these organelles. New reagents and techniques now also allow the structure and composition of primary cilia to be analysed in detail using fluorescence microscopy. Primary cilia can be imaged in situ in sections of kidney, and many renal-derived cell lines produce primary cilia in culture providing a simplified and accessible system in which to investigate these organelles. Here we outline microscopy-based techniques commonly used for studying renal primary cilia.

  5. Multiperspective Focus+Context Visualization.

    PubMed

    Wu, Meng-Lin; Popescu, Voicu

    2016-05-01

    Occlusions are a severe bottleneck for the visualization of large and complex datasets. Conventional images only show dataset elements to which there is a direct line of sight, which significantly limits the information bandwidth of the visualization. Multiperspective visualization is a powerful approach for alleviating occlusions to show more than what is visible from a single viewpoint. However, constructing and rendering multiperspective visualizations is challenging. We present a framework for designing multiperspective focus+context visualizations with great flexibility by manipulating the underlying camera model. The focus region viewpoint is adapted to alleviate occlusions. The framework supports multiperspective visualization in three scenarios. In a first scenario, the viewpoint is altered independently for individual image regions to avoid occlusions. In a second scenario, conventional input images are connected into a multiperspective image. In a third scenario, one or several data subsets of interest (i.e., targets) are visualized where they would be seen in the absence of occluders, as the user navigates or the targets move. The multiperspective images are rendered at interactive rates, leveraging the camera model's fast projection operation. We demonstrate the framework on terrain, urban, and molecular biology geometric datasets, as well as on volume rendered density datasets.

  6. Lightness Constancy in Surface Visualization.

    PubMed

    Szafir, Danielle Albers; Sarikaya, Alper; Gleicher, Michael

    2016-09-01

    Color is a common channel for displaying data in surface visualization, but is affected by the shadows and shading used to convey surface depth and shape. Understanding encoded data in the context of surface structure is critical for effective analysis in a variety of domains, such as in molecular biology. In the physical world, lightness constancy allows people to accurately perceive shadowed colors; however, its effectiveness in complex synthetic environments such as surface visualizations is not well understood. We report a series of crowdsourced and laboratory studies that confirm the existence of lightness constancy effects for molecular surface visualizations using ambient occlusion. We provide empirical evidence of how common visualization design decisions can impact viewers' abilities to accurately identify encoded surface colors. These findings suggest that lightness constancy aids in understanding color encodings in surface visualization and reveal a correlation between visualization techniques that improve color interpretation in shadow and those that enhance perceptions of surface depth. These results collectively suggest that understanding constancy in practice can inform effective visualization design.

  7. Effective Visualization of Temporal Ensembles.

    PubMed

    Hao, Lihua; Healey, Christopher G; Bass, Steffen A

    2016-01-01

    An ensemble is a collection of related datasets, called members, built from a series of runs of a simulation or an experiment. Ensembles are large, temporal, multidimensional, and multivariate, making them difficult to analyze. Another important challenge is visualizing ensembles that vary both in space and time. Initial visualization techniques displayed ensembles with a small number of members, or presented an overview of an entire ensemble but without potentially important details. Recently, researchers have suggested combining these two directions, allowing users to choose subsets of members to visualize. This manual selection process places the burden on the user to identify which members to explore. We first introduce a static ensemble visualization system that automatically helps users locate interesting subsets of members to visualize. We next extend the system to support analysis and visualization of temporal ensembles. We employ 3D shape comparison, cluster tree visualization, and glyph-based visualization to represent different levels of detail within an ensemble. This strategy is used to provide two approaches for temporal ensemble analysis: (1) segment-based ensemble analysis, which captures important shape-transition time-steps, clusters groups of similar members, and identifies common shape changes over time across multiple members; and (2) time-step-based ensemble analysis, which assumes ensemble members are aligned in time and combines similar shapes at common time-steps. Both approaches enable users to interactively visualize and analyze a temporal ensemble from different perspectives at different levels of detail. We demonstrate our techniques on an ensemble studying the transition of matter from hadronic gas to quark-gluon plasma during gold-on-gold particle collisions.
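    As a hedged sketch of the time-step-based grouping described above (the shape descriptors, linkage method, and cluster count are illustrative assumptions, not the authors' choices), members with similar shape descriptors at one time-step can be grouped with a hierarchical cluster tree:

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import pdist

        def cluster_members(member_shapes, n_clusters=3):
            # member_shapes: one row of shape descriptors per ensemble member at a
            # given time-step; returns a cluster label per member.
            distances = pdist(member_shapes)                 # pairwise dissimilarity
            tree = linkage(distances, method="average")      # hierarchical cluster tree
            return fcluster(tree, t=n_clusters, criterion="maxclust")

        rng = np.random.default_rng(0)
        members = rng.normal(size=(20, 8))                   # 20 members x 8 descriptors
        print(cluster_members(members))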

  8. Visual discrimination in an orangutan (Pongo pygmaeus): measuring visual preference.

    PubMed

    Hanazuka, Yuki; Kurotori, Hidetoshi; Shimizu, Mika; Midorikawa, Akira

    2012-04-01

    Although previous studies have confirmed that trained orangutans visually discriminate between mammals and artificial objects, whether orangutans without operant conditioning can discriminate remains unknown. The visual discrimination ability in an orangutan (Pongo pygmaeus) with no experience in operant learning was examined using measures of visual preference. Sixteen color photographs of inanimate objects and of mammals with four legs were randomly presented to an orangutan. The results showed that the mean looking time at photographs of mammals with four legs was longer than that for inanimate objects, suggesting that the orangutan discriminated mammals with four legs from inanimate objects. The results implied that orangutans who have not experienced operant conditioning may possess the ability to discriminate visually.

  9. Visual representation of scientific information.

    PubMed

    Wong, Bang

    2011-02-15

    Great technological advances have enabled researchers to generate an enormous amount of data. Data analysis is replacing data generation as the rate-limiting step in scientific research. With this wealth of information, we have an opportunity to understand the molecular causes of human diseases. However, the unprecedented scale, resolution, and variety of data pose new analytical challenges. Visual representation of data offers insights that can lead to new understanding, whether the purpose is analysis or communication. This presentation shows how art, design, and traditional illustration can enable scientific discovery. Examples will be drawn from the Broad Institute's Data Visualization Initiative, aimed at establishing processes for creating informative visualization models.

  10. Perceptual issues in scientific visualization

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K.; Proffitt, Dennis R.

    1989-01-01

    In order to develop effective tools for scientific visualization, consideration must be given to the perceptual competencies, limitations, and biases of the human operator. Perceptual psychology has amassed a rich body of research on these issues and can lend insight to the development of visualization techniques. Within a perceptual psychological framework, the computer display screen can best be thought of as a special kind of impoverished visual environment. Guidelines can be gleaned from the psychological literature to help visualization tool designers avoid ambiguities and/or illusions in the resulting data displays.

  11. Information visualization: Beyond traditional engineering

    NASA Technical Reports Server (NTRS)

    Thomas, James J.

    1995-01-01

    This presentation addresses a different aspect of the human-computer interface; specifically the human-information interface. This interface will be dominated by an emerging technology called Information Visualization (IV). IV goes beyond the traditional views of computer graphics, CADS, and enables new approaches for engineering. IV specifically must visualize text, documents, sound, images, and video in such a way that the human can rapidly interact with and understand the content structure of information entities. IV is the interactive visual interface between humans and their information resources.

  12. Visual object affordances: object orientation.

    PubMed

    Symes, Ed; Ellis, Rob; Tucker, Mike

    2007-02-01

    Five experiments systematically investigated whether orientation is a visual object property that affords action. The primary aim was to establish the existence of a pure physical affordance (PPA) of object orientation, independent of any semantic object-action associations or visually salient areas towards which visual attention might be biased. Taken together, the data from these experiments suggest that firstly PPAs of object orientation do exist, and secondly, the behavioural effects that reveal them are larger and more robust when the object appears to be graspable, and is oriented in depth (rather than just frontally) such that its leading edge appears to point outwards in space towards a particular hand of the viewer.

  13. Flight simulator with spaced visuals

    NASA Technical Reports Server (NTRS)

    Gilson, Richard D. (Inventor); Thurston, Marlin O. (Inventor); Olson, Karl W. (Inventor); Ventola, Ronald W. (Inventor)

    1980-01-01

    A flight simulator arrangement wherein a conventional, movable base flight trainer is combined with a visual cue display surface spaced a predetermined distance from an eye position within the trainer. Thus, three degrees of motive freedom (roll, pitch and crab) are provided for a visual, proprioceptive, and vestibular cue system by the trainer, while the remaining geometric visual cue image alterations are developed by a video system. A geometric approach to computing the runway image eliminates the need to electronically compute trigonometric functions, while utilization of a line generator and a designated vanishing point at the video system raster permits facile development of the images of the longitudinal edges of the runway.
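
    The geometric shortcut described in this abstract (generating the runway-edge images from a designated vanishing point with a line generator, rather than computing trigonometric functions per point) can be illustrated with a short sketch. This is an illustrative reading only, not the patented circuit; the camera focal length, eye height, and runway dimensions below are hypothetical.

    ```python
    # Minimal sketch (hypothetical parameters): the image of each runway edge is the
    # line joining the projected near corner to the vanishing point of the runway
    # direction, so a line generator can draw it without per-point trigonometry.

    def project(point, focal=500.0):
        """Pinhole projection of a 3-D point (x right, y up, z downrange) to pixels."""
        x, y, z = point
        return (focal * x / z, focal * y / z)

    runway_half_width = 22.5   # metres (hypothetical)
    eye_height = 15.0          # metres above the runway surface (hypothetical)
    near_range = 300.0         # distance to the runway threshold (hypothetical)

    # All lines parallel to the runway centreline (direction +z) converge at the
    # projection of that direction: here the principal point (0, 0).
    vanishing_point = (0.0, 0.0)

    for side in (-1, +1):
        u0, v0 = project((side * runway_half_width, -eye_height, near_range))
        print(f"edge from ({u0:.1f}, {v0:.1f}) toward vanishing point {vanishing_point}")
    ```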

  14. Visual Perceptual and Visual Motor Performance Differences in Children with Learning Disabilities.

    ERIC Educational Resources Information Center

    Daniels, Linda E.; Wong, Kathy

    1993-01-01

    The scores of 15 children (ages 5-10) with learning disabilities on the Test of Visual Perceptual Skills and the Developmental Test of Visual Motor Integration revealed that visual perception and visual motor skills are separate, though related, functions and that visual motor scores were significantly lower than visual perception scores. (JDD)

  15. MISR Instrument Data Visualization

    NASA Technical Reports Server (NTRS)

    Nelson, David; Garay, Michael; Diner, David; Thompson, Charles; Hall, Jeffrey; Rheingans, Brian; Mazzoni, Dominic

    2008-01-01

    The MISR Interactive eXplorer (MINX) software functions both as a general-purpose tool to visualize Multiangle Imaging SpectroRadiometer (MISR) instrument data, and as a specialized tool to analyze properties of smoke, dust, and volcanic plumes. It includes high-level options to create map views of MISR orbit locations; scrollable, single-camera RGB (red-green-blue) images of MISR level 1B2 (L1B2) radiance data; and animations of the nine MISR camera images that provide a 3D perspective of the scenes that MISR has acquired. The plume height capability provides an accurate estimate of the injection height of plumes that is needed by air quality and climate modelers. MISR provides global high-quality stereo height information, and this program uses that information to perform detailed height retrievals of aerosol plumes. Users can interactively digitize smoke, dust, or volcanic plumes and automatically retrieve heights and winds, and can also archive MISR albedos and aerosol properties, as well as fire power and brightness temperatures associated with smoke plumes derived from Moderate Resolution Imaging Spectroradiometer (MODIS) data. Some of the specialized options in MINX enable the user to do other tasks. Users can display plots of top-of-atmosphere bidirectional reflectance factors (BRFs) versus camera-angle for selected pixels. Images and animations can be saved to disk in various formats. Also, users can apply a geometric registration correction to warp camera images when the standard processing correction is inadequate. It is possible to difference the images of two MISR orbits that share a path (identical ground track), as well as to construct pseudo-color images by assigning different combinations of MISR channels (angle or spectral band) to the RGB display channels. This software is an interactive application written in IDL and compiled into an IDL Virtual Machine (VM) ".sav" file.
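
    The pseudo-color option described here (assigning different combinations of MISR channels to the R, G, and B display channels) amounts to a simple band-composite operation. The sketch below is a generic NumPy illustration of that idea, not MINX/IDL code; the input arrays and the 2-98% contrast stretch are hypothetical choices.

    ```python
    import numpy as np

    def pseudo_color(band_r, band_g, band_b):
        """Stack three co-registered bands into an RGB display image,
        stretching each band independently to the 0-1 range."""
        def stretch(band):
            lo, hi = np.nanpercentile(band, [2, 98])  # simple 2-98% contrast stretch
            return np.clip((band - lo) / (hi - lo), 0.0, 1.0)
        return np.dstack([stretch(band_r), stretch(band_g), stretch(band_b)])

    # Hypothetical example: three arbitrary camera/band combinations of equal shape.
    rng = np.random.default_rng(0)
    img = pseudo_color(rng.random((64, 64)), rng.random((64, 64)), rng.random((64, 64)))
    print(img.shape)  # (64, 64, 3), ready for an image display routine
    ```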

  16. A Visual Analytics Agenda

    SciTech Connect

    Thomas, James J.; Cook, Kristin A.

    2006-01-01

    The September 11, 2001 attacks on the World Trade Center and the Pentagon were a wakeup call to the United States. The Hurricane Katrina disaster in August 2005 provided yet another reminder that unprecedented disasters can and do occur. And when they do, we must be able to analyze large amounts of disparate data in order to make sense of exceedingly complex situations and save lives. This need to support penetrating analysis of massive data collections is not limited to security, though. From systems biology to human health, from evaluations of product effectiveness to strategizing for competitive positioning, to assessing the results of marketing campaigns, there is a critical need to analyze very large amounts of complex information. Simply put, our ability to collect data far outstrips our ability to analyze the data we have collected. Following the September 11 attacks, the government initiated efforts to evaluate the technologies that are available today or are on the near horizon. Two National Academy of Sciences reports identified serious gaps in the technologies. Making the Nation Safer [Alberts & Wulf, 2002] describes how science and technology can be advanced to protect the nation against terrorism. Information Technology for Counterterrorism [Hennessy et al., 2003] expands upon the work of Making the Nation Safer, focusing specifically on the opportunities for information technology to help counter and respond to terrorist attacks. Significant research progress has been made in disciplines such as scientific and information visualization, statistically-based exploratory and confirmatory analysis, data and knowledge representations, and perceptual and cognitive sciences. However, the research community has not adequately addressed the integration of these subspecialties to advance the ability for analysts to apply their expert human judgment to complex data in pressure-filled situations. Although some research is being done

  17. Visual rehabilitation: visual scanning, multisensory stimulation and vision restoration trainings

    PubMed Central

    Dundon, Neil M.; Bertini, Caterina; Làdavas, Elisabetta; Sabel, Bernhard A.; Gall, Carolin

    2015-01-01

    Neuropsychological training methods of visual rehabilitation for homonymous vision loss caused by postchiasmatic damage fall into two fundamental paradigms: “compensation” and “restoration”. Existing methods can be classified into three groups: Visual Scanning Training (VST), Audio-Visual Scanning Training (AViST) and Vision Restoration Training (VRT). VST and AViST aim at compensating vision loss by training eye scanning movements, whereas VRT aims at improving lost vision by activating residual visual functions by training light detection and discrimination of visual stimuli. This review discusses the rationale underlying these paradigms and summarizes the available evidence with respect to treatment efficacy. The issues raised in our review should help guide clinical care and stimulate new ideas for future research uncovering the underlying neural correlates of the different treatment paradigms. We propose that both local “within-system” interactions (i.e., relying on plasticity within peri-lesional spared tissue) and changes in more global “between-system” networks (i.e., recruiting alternative visual pathways) contribute to both vision restoration and compensatory rehabilitation, which ultimately have implications for the rehabilitation of cognitive functions. PMID:26283935

  18. The climate visualizer: Sense-making through scientific visualization

    NASA Astrophysics Data System (ADS)

    Gordin, Douglas N.; Polman, Joseph L.; Pea, Roy D.

    1994-12-01

    This paper describes the design of a learning environment, called the Climate Visualizer, intended to facilitate scientific sense-making in high school classrooms by providing students the ability to craft, inspect, and annotate scientific visualizations. The theoretical background for our design presents a view of learning as acquiring and critiquing cultural practices and stresses the need for students to appropriate the social and material aspects of practice when learning an area. This is followed by a description of the design of the Climate Visualizer, including detailed accounts of its provision of spatial and temporal context and the quantitative and visual representations it employs. A broader context is then explored by describing its integration into the high school science classroom. This discussion explores how visualizations can promote the creation of scientific theories, especially in conjunction with the Collaboratory Notebook, an embedded environment for creating and critiquing scientific theories and visualizations. Finally, we discuss the design trade-offs we have made in light of our theoretical orientation, and our hopes for further progress.

  19. Visual Field Map Clusters in Macaque Extrastriate Visual Cortex

    PubMed Central

    Kolster, Hauke; Mandeville, Joseph B.; Arsenault, John T.; Ekstrom, Leeland B.; Wald, Lawrence L.; Vanduffel, Wim

    2009-01-01

    The macaque visual cortex contains more than 30 different functional visual areas, yet surprisingly little is known about the underlying organizational principles that structure its components into a complete ‘visual’ unit. A recent model of visual cortical organization in humans suggests that visual field maps are organized as clusters. Clusters minimize axonal connections between individual field maps that represent common visual percepts, with different clusters thought to carry out different functions. Experimental support for this hypothesis, however, is lacking in macaques, leaving open the question of whether it is unique to humans or a more general model for primate vision. Here we show, using high-resolution BOLD fMRI data in the awake monkey at 7 Tesla, that area MT/V5 and its neighbors are organized as a cluster with a common foveal representation and a circular eccentricity map. This novel view on the functional topography of area MT/V5 and satellites indicates that field map clusters are evolutionarily preserved and may be a fundamental organizational principle of the old world primate visual cortex. PMID:19474330

  20. Visual syntax of the DRAGON language

    SciTech Connect

    Parondzhanov, V.D.

    1995-05-01

    A method for flowchart formalization and nonclassical structurization called DRAGON is suggested. The family of DRAGON visual programming languages is presented. The visual language syntax is described.

  1. Visual Handicaps of Mentally Handicapped People.

    ERIC Educational Resources Information Center

    Ellis, David

    1979-01-01

    Recent literature concerning visual handicaps of mentally handicapped people is reviewed. Topic areas considered are etiology and epidemiology, visual acuity, color vision, and educational techniques. (Author)

  2. Do visual illusions probe the visual brain? Illusions in action without a dorsal visual stream.

    PubMed

    Coello, Yann; Danckert, James; Blangero, Annabelle; Rossetti, Yves

    2007-04-09

    Visual illusions have been shown to affect perceptual judgements more so than motor behaviour, which was interpreted as evidence for a functional division of labour within the visual system. The dominant perception-action theory argues that perception involves a holistic processing of visual objects or scenes, performed within the ventral, inferior temporal cortex. Conversely, visuomotor action involves the processing of the 3D relationship between the goal of the action and the body, performed predominantly within the dorsal, posterior parietal cortex. We explored the effect of well-known visual illusions (a size-contrast illusion and the induced Roelofs effect) in a patient (IG) suffering bilateral lesions of the dorsal visual stream. According to the perception-action theory, IG's perceptual judgements and control of actions should rely on the intact ventral stream and hence should both be sensitive to visual illusions. The finding that IG performed similarly to controls in three different illusory contexts argues against such expectations and shows, furthermore, that the dorsal stream does not control all aspects of visuomotor behaviour. Assuming that the patient's dorsal stream visuomotor system is fully lesioned, these results suggest that her visually guided action can be planned and executed independently of the dorsal pathways, possibly through the inferior parietal lobule.

  3. Signature Visualization of Software Binaries

    SciTech Connect

    Panas, T

    2008-07-01

    In this paper we present work on the visualization of software binaries. In particular, we utilize ROSE, an open source compiler infrastructure, to pre-process software binaries, and we apply a landscape metaphor to visualize the signature of each binary (malware). We define the signature of a binary as a metric-based layout of the functions contained in the binary. In our initial experiment, we visualize the signatures of a series of computer worms that all originate from the same line. These visualizations are useful for a number of reasons. First, the images reveal how the archetype has evolved over a series of versions of one worm. Second, one can see the distinct changes between versions. This allows the viewer to form conclusions about the development cycle of a particular worm.

  4. Dynamic visualization of data streams

    DOEpatents

    Wong, Pak Chung; Foote, Harlan P.; Adams, Daniel R.; Cowley, Wendy E.; Thomas, James J.

    2009-07-07

    One embodiment of the present invention includes a data communication subsystem to receive a data stream, and a data processing subsystem responsive to the data communication subsystem to generate a visualization output based on a group of data vectors corresponding to a first portion of the data stream. The processing subsystem is further responsive to a change in rate of receipt of the data to modify the visualization output with one or more other data vectors corresponding to a second portion of the data stream as a function of eigenspace defined with the group of data vectors. The system further includes a display device responsive to the visualization output to provide a corresponding visualization.
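
    One plausible reading of the mechanism claimed above is that the eigenspace defined by the first group of data vectors is a PCA basis, and vectors from the second portion of the stream are projected into that fixed basis rather than triggering a full recomputation. The sketch below is an interpretation under that assumption, not the patented implementation; all names and dimensions are hypothetical.

    ```python
    import numpy as np

    def fit_eigenspace(vectors, n_components=2):
        """Compute a PCA basis (eigenspace) from an initial group of data vectors."""
        mean = vectors.mean(axis=0)
        _, _, vt = np.linalg.svd(vectors - mean, full_matrices=False)
        return mean, vt[:n_components]          # principal directions as rows

    def project(vectors, mean, basis):
        """Map new data vectors into the previously defined eigenspace."""
        return (vectors - mean) @ basis.T

    rng = np.random.default_rng(1)
    first_batch = rng.normal(size=(200, 10))     # first portion of the stream
    mean, basis = fit_eigenspace(first_batch)

    # When the rate of receipt changes, later vectors are folded into the existing
    # view by projecting them into the same eigenspace instead of refitting it.
    second_batch = rng.normal(size=(50, 10))
    coords = project(second_batch, mean, basis)  # 2-D coordinates for display
    print(coords.shape)                          # (50, 2)
    ```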

  5. Visual hallucinations associated with zonisamide.

    PubMed

    Akman, Cigdem I; Goodkin, Howard P; Rogers, Donald P; Riviello, James J

    2003-01-01

    Zonisamide is a broad-spectrum antiepileptic drug used to treat various types of seizures. Although visual hallucinations have not been reported as an adverse effect of this agent, we describe three patients who experienced complex visual hallucinations and altered mental status after zonisamide treatment was begun or its dosage increased. All three had been diagnosed earlier with epilepsy, and their electroencephalogram (EEG) findings were abnormal. During monitoring, visual hallucinations did not correlate with EEG readings, nor did video recording capture any of the described events. None of the patients had experienced visual hallucinations before this event. The only recent change in their treatment was the introduction or increased dosage of zonisamide. With either discontinuation or decreased dosage of the drug the symptoms disappeared and did not recur. Further observations and reports will help clarify this adverse effect. Until then, clinicians need to be aware of this possible complication associated with zonisamide.

  6. Phenomenological model of visual acuity

    NASA Astrophysics Data System (ADS)

    Gómez-Pedrero, José A.; Alonso, José

    2016-12-01

    We propose in this work a model for describing visual acuity (V) as a function of defocus and pupil diameter. Although the model is mainly based on geometrical optics, it also incorporates nongeometrical effects phenomenologically. Compared to similar visual acuity models, the proposed one considers the effect of astigmatism and the variability of best corrected V among individuals; it also takes into account the accommodation and the "tolerance to defocus," the latter through a phenomenological parameter. We have fitted the model to the V data provided in the works of Holladay et al. and Peters, showing the ability of this model to accurately describe the variation of V against blur and pupil diameter. We have also performed a comparison between the proposed model and others previously published in the literature. The model is mainly intended for use in the design of ophthalmic compensations, but it can also be useful in other fields such as visual ergonomics, design of visual tests, and optical instrumentation.
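
    The abstract does not reproduce the model itself, but for orientation the geometrical-optics relation that such acuity models commonly start from can be written down; the lines below are a textbook approximation with placeholder constants, not the authors' fitted model.

    ```latex
    % Angular blur-disc diameter (radians) for pupil diameter p (metres)
    % and defocus \Delta D (diopters):
    \beta \approx p\,\Delta D
    % A common phenomenological step maps blur to the minimum angle of resolution
    % (MAR, with placeholder constants MAR_0 and k) and then to decimal acuity:
    \mathrm{MAR} \approx \sqrt{\mathrm{MAR}_0^{\,2} + (k\,\beta)^{2}}, \qquad
    V \approx \frac{1}{\mathrm{MAR}\ \text{(arcmin)}}
    ```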

  7. Visual acuity and test performance.

    PubMed

    Heron, E; Zytkoskee, A

    1981-02-01

    Evaluation of scholastic achievement (American College Testing Service) test scores confirms previous reports that persons with poor visual acuity perform better on these tests than individuals with normal or superior acuity.

  8. Visualizing Dynamic Data with Maps.

    PubMed

    Mashima, Daisuke; Kobourov, Stephen G; Hu, Yifan

    2012-09-01

    Maps offer a familiar way to present geographic data (continents, countries), and additional information (topography, geology), can be displayed with the help of contours and heat-map overlays. In this paper, we consider visualizing large-scale dynamic relational data by taking advantage of the geographic map metaphor. We describe a map-based visualization system which uses animation to convey dynamics in large data sets, and which aims to preserve the viewer's mental map while also offering readable views at all times. Our system is fully functional and has been used to visualize user traffic on the Internet radio station last.fm, as well as TV-viewing patterns from an IPTV service. All map images in this paper are available in high-resolution at [1] as are several movies illustrating the dynamic visualization.

  9. Supporting Web Search with Visualization

    NASA Astrophysics Data System (ADS)

    Hoeber, Orland; Yang, Xue Dong

    One of the fundamental goals of Web-based support systems is to promote and support human activities on the Web. The focus of this Chapter is on the specific activities associated with Web search, with special emphasis given to the use of visualization to enhance the cognitive abilities of Web searchers. An overview of information retrieval basics, along with a focus on Web search and the behaviour of Web searchers is provided. Information visualization is introduced as a means for supporting users as they perform their primary Web search tasks. Given the challenge of visualizing the primarily textual information present in Web search, a taxonomy of the information that is available to support these tasks is given. The specific challenges of representing search information are discussed, and a survey of the current state-of-the-art in visual Web search is introduced. This Chapter concludes with our vision for the future of Web search.

  10. Distortions of posterior visual space.

    PubMed

    Phillips, Flip; Voshell, Martin G

    2009-01-01

    The study of spatial vision is a long and well traveled road (which, of course, converges to a vanishing point at the horizon). Its various distortions have been widely investigated empirically, and most studies concentrate, pragmatically, on the space anterior to the observer. The visual world behind the observer has received relatively less attention, and it is this perspective the current experiments address. Our results show systematic perceptual distortions in the posterior visual world when viewed statically. Under static viewing conditions, observers' perceptual representations were consistently 'spread' in a hyperbolic fashion. Directions to distant, peripheral locations were consistently overestimated by about 11 degrees from the ground truth, and this variability increased as the target was moved toward the center of the observer's back. The perceptual representation of posterior visual space is, no doubt, secondary to the more immediate needs of the anterior visual world. Still, it is important in some domains including certain sports, such as rowing, and in vehicular navigation.

  11. FAITH Water Channel Flow Visualization

    NASA Video Gallery

    Water channel flow visualization experiments are performed on a three dimensional model of a small hill. This experiment was part of a series of measurements of the complex fluid flow around the hi...

  12. Engineering visualization utilizing advanced animation

    NASA Technical Reports Server (NTRS)

    Sabionski, Gunter R.; Robinson, Thomas L., Jr.

    1989-01-01

    Engineering visualization is the use of computer graphics to depict engineering analysis and simulation in visual form from project planning through documentation. Graphics displays let engineers see data represented dynamically which permits the quick evaluation of results. The current state of graphics hardware and software generally allows the creation of two types of 3D graphics. The use of animated video as an engineering visualization tool is presented. The engineering, animation, and videography aspects of animated video production are each discussed. Specific issues include the integration of staffing expertise, hardware, software, and the various production processes. A detailed explanation of the animation process reveals the capabilities of this unique engineering visualization method. Automation of animation and video production processes are covered and future directions are proposed.

  13. Visual direction finding by fishes

    NASA Technical Reports Server (NTRS)

    Waterman, T. H.

    1972-01-01

    The use of visual orientation, in the absence of landmarks, for underwater direction finding exercises by fishes is reviewed. Celestial directional clues observed directly near the water surface or indirectly at an asymptotic depth are suggested as possible orientation aids.

  14. Visual Data Analysis for Satellites

    NASA Technical Reports Server (NTRS)

    Lau, Yee; Bhate, Sachin; Fitzpatrick, Patrick

    2008-01-01

    The Visual Data Analysis Package is a collection of programs and scripts that facilitate visual analysis of data available from NASA and NOAA satellites, as well as dropsonde, buoy, and conventional in-situ observations. The package features utilities for data extraction, data quality control, statistical analysis, and data visualization. The Hierarchical Data Format (HDF) satellite data extraction routines from NASA's Jet Propulsion Laboratory were customized for specific spatial coverage and file input/output. Statistical analysis includes the calculation of the relative error, the absolute error, and the root mean square error. Other capabilities include curve fitting through the data points to fill in missing data points between satellite passes or where clouds obscure satellite data. For data visualization, the software provides customizable Generic Mapping Tool (GMT) scripts to generate difference maps, scatter plots, line plots, vector plots, histograms, timeseries, and color fill images.
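
    The statistical layer described above (relative error, absolute error, and root mean square error between satellite retrievals and in-situ observations) reduces to a few array operations. The sketch below is a generic NumPy illustration with hypothetical values, not the package's own code.

    ```python
    import numpy as np

    def error_statistics(satellite, in_situ):
        """Compare co-located satellite retrievals with in-situ observations."""
        diff = satellite - in_situ
        return {
            "absolute_error": np.mean(np.abs(diff)),
            "relative_error": np.mean(np.abs(diff) / np.abs(in_situ)),
            "rmse": np.sqrt(np.mean(diff ** 2)),
        }

    # Hypothetical co-located sea-surface temperature values (degrees C).
    sat = np.array([28.1, 27.9, 29.3, 28.6])
    obs = np.array([28.0, 28.2, 29.0, 28.4])
    print(error_statistics(sat, obs))
    ```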

  15. Reconfigurable Auditory-Visual Display

    NASA Technical Reports Server (NTRS)

    Begault, Durand R. (Inventor); Anderson, Mark R. (Inventor); McClain, Bryan (Inventor); Miller, Joel D. (Inventor)

    2008-01-01

    System and method for visual and audible communication between a central operator and N mobile communicators (N greater than or equal to 2), including an operator transceiver and interface, configured to receive and display, for the operator, visually perceptible and audibly perceptible signals from each of the mobile communicators. The interface (1) presents an audible signal from each communicator as if the audible signal is received from a different location relative to the operator and (2) allows the operator to select, to assign priority to, and to display, the visual signals and the audible signals received from a specified communicator. Each communicator has an associated signal transmitter that is configured to transmit at least one of the visual signals and the audio signal associated with the communicator, where at least one of the signal transmitters includes at least one sensor that senses and transmits a sensor value representing a selected environmental or physiological parameter associated with the communicator.

  16. Virtual Environments in Scientific Visualization

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Lisinski, T. A. (Technical Monitor)

    1994-01-01

    Virtual environment technology is a new way of approaching the interface between computers and humans. Emphasizing display and user control that conforms to the user's natural ways of perceiving and thinking about space, virtual environment technologies enhance the ability to perceive and interact with computer generated graphic information. This enhancement potentially has a major effect on the field of scientific visualization. Current examples of this technology include the Virtual Windtunnel being developed at NASA Ames Research Center. Other major institutions such as the National Center for Supercomputing Applications and SRI International are also exploring this technology. This talk will describe several implementations of virtual environments for use in scientific visualization. Examples include the visualization of unsteady fluid flows (the virtual windtunnel), the visualization of geodesics in curved spacetime, surface manipulation, and examples developed at various laboratories.

  17. JavaScript: Data Visualizations

    EPA Pesticide Factsheets

    D3 is a JavaScript library that, in a manner similar to the jQuery library, allows direct inspection and manipulation of the Document Object Model, but is intended primarily for data visualization.

  18. Visual cues for data mining

    NASA Astrophysics Data System (ADS)

    Rogowitz, Bernice E.; Rabenhorst, David A.; Gerth, John A.; Kalin, Edward B.

    1996-04-01

    This paper describes a set of visual techniques, based on principles of human perception and cognition, which can help users analyze and develop intuitions about tabular data. Collections of tabular data are widely available, including, for example, multivariate time series data, customer satisfaction data, stock market performance data, multivariate profiles of companies and individuals, and scientific measurements. In our approach, we show how visual cues can help users perform a number of data mining tasks, including identifying correlations and interaction effects, finding clusters and understanding the semantics of cluster membership, identifying anomalies and outliers, and discovering multivariate relationships among variables. These cues are derived from psychological studies on perceptual organization, visual search, perceptual scaling, and color perception. These visual techniques are presented as a complement to the statistical and algorithmic methods more commonly associated with these tasks, and provide an interactive interface for the human analyst.
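
    One concrete instance of the kind of visual cue described above is a scatter-plot matrix of the tabular variables with cluster membership encoded by colour, which exposes pairwise correlations, clusters, and outliers at a glance. The sketch below is illustrative only, using pandas and scikit-learn as stand-ins rather than the authors' system, and the data are simulated.

    ```python
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    from pandas.plotting import scatter_matrix
    from sklearn.cluster import KMeans

    # Hypothetical multivariate table (e.g. measurement or customer profiles).
    rng = np.random.default_rng(2)
    df = pd.DataFrame(rng.normal(size=(300, 4)), columns=["v1", "v2", "v3", "v4"])

    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(df)

    # Colour encodes cluster membership; off-diagonal panels expose pairwise
    # correlations and outliers, diagonal panels show per-variable distributions.
    scatter_matrix(df, c=labels, diagonal="hist", figsize=(6, 6))
    plt.show()
    ```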

  19. Solar System Visualization (SSV) Project

    NASA Technical Reports Server (NTRS)

    Todd, Jessida L.

    2005-01-01

    The Solar System Visualization (SSV) project aims at enhancing scientific and public understanding through visual representations and modeling procedures. The SSV project's objectives are to (1) create new visualization technologies, (2) organize science observations and models, and (3) visualize science results and mission plans. The SSV project currently supports the Mars Exploration Rovers (MER) mission, the Mars Reconnaissance Orbiter (MRO), and Cassini. In support of these missions, the SSV team has produced pan and zoom animations of large mosaics to reveal details of surface features and topography, created 3D animations of science instruments and procedures, formed 3-D anaglyphs from left and right stereo pairs, and animated registered multi-resolution mosaics to provide context for microscopic images.
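
    The anaglyph step mentioned above (forming a 3-D anaglyph from left and right stereo pairs) can be sketched as a simple channel merge: red from the left-eye image, green and blue from the right-eye image. The NumPy code below is a generic illustration with simulated inputs, not the SSV pipeline.

    ```python
    import numpy as np

    def red_cyan_anaglyph(left_rgb, right_rgb):
        """Red channel from the left-eye image; green and blue from the right."""
        anaglyph = np.empty_like(left_rgb)
        anaglyph[..., 0] = left_rgb[..., 0]     # red   <- left
        anaglyph[..., 1:] = right_rgb[..., 1:]  # green, blue <- right
        return anaglyph

    # Hypothetical co-registered stereo pair (8-bit RGB).
    rng = np.random.default_rng(3)
    left = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)
    right = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)
    print(red_cyan_anaglyph(left, right).shape)  # (128, 128, 3)
    ```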

  20. Visualizing brain rhythms and synchrony

    NASA Astrophysics Data System (ADS)

    Robbins, Kay A.; Veljkovic, Dragana; Pilipaviciute, Egle

    2006-01-01

    Patterns of synchronized brain activity have been widely observed in EEGs and multi-electrode recordings, and much study has been devoted to understanding their role in brain function. We introduce the problem of visualization of synchronized behavior and propose visualization techniques for assessing temporal and spatial patterns of synchronization from data. We discuss spike rate plots, activity succession diagrams, space-time activity band visualization, and low-dimensional projections as methods for identifying synchronized behavior in populations of neurons and for detecting the possibly short-lived neuronal assemblies that produced them. We use wavelets in conjunction with these visualization techniques to extract the frequency and temporal localization of synchronized behavior. Most of these techniques can be streamed, making them suitable for analyzing long-running experimental recordings as well as the output of simulation models.
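
    Of the techniques listed, the spike rate plot is the simplest to make concrete: bin each neuron's spike times into firing rates and display the resulting matrix over time, where synchronized episodes appear as vertically aligned bands. The sketch below uses simulated spike trains and is an illustration of the general idea, not the authors' toolchain.

    ```python
    import numpy as np

    def spike_rate_matrix(spike_trains, t_max, bin_width=0.05):
        """Bin each neuron's spike times (seconds) into firing rates (Hz)."""
        edges = np.arange(0.0, t_max + bin_width, bin_width)
        rates = np.array([np.histogram(t, bins=edges)[0] / bin_width
                          for t in spike_trains])
        return rates, edges[:-1]

    # Simulated population: 5 neurons firing randomly over 2 s, plus one shared
    # (synchronized) burst near t = 1.0 s.
    rng = np.random.default_rng(4)
    population = [np.sort(np.concatenate([rng.uniform(0, 2, 40),
                                          rng.normal(1.0, 0.01, 10)]))
                  for _ in range(5)]

    rates, t = spike_rate_matrix(population, t_max=2.0)
    print(rates.shape)  # (neurons, time bins), ready for a heat-map display
    ```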

  1. Developing Tests of Visual Dependency

    NASA Technical Reports Server (NTRS)

    Kindrat, Alexandra N.

    2011-01-01

    Astronauts develop neural adaptive responses to microgravity during space flight. Consequently, these adaptive responses cause maladaptive disturbances in balance and gait function when astronauts return to Earth and are re-exposed to gravity. Current research in the Neuroscience Laboratories at NASA-JSC is focused on understanding how exposure to space flight produces post-flight disturbances in balance and gait control and on developing training programs designed to facilitate the rapid recovery of functional mobility after space flight. In concert with these disturbances, astronauts also often report an increase in their visual dependency during space flight. To better understand this phenomenon, studies were conducted with specially designed training programs focusing on visual dependency, with the aim of understanding and enhancing subjects' ability to rapidly adapt to novel sensory situations. The Rod and Frame Test (RFT) was used first to assess an individual's visual dependency, using a variety of testing techniques. Once assessed, subjects were asked to perform two novel tasks under transformation (both the Pegboard and Cube Construction tasks). Results indicate that head position cues and initial visual test conditions had no effect on an individual's visual dependency scores. Subjects were also able to adapt to the manual tasks after several trials. Individual visual dependency correlated with the ability to adapt manual performance to a novel visual distortion only for the cube task. Subjects with higher visual dependency showed decreased ability to adapt to this task. Ultimately, it was revealed that the RFT may serve as an effective prediction tool to produce individualized adaptability training prescriptions that target the specific sensory profile of each crewmember.

  2. Data mining and visualization techniques

    DOEpatents

    Wong, Pak Chung; Whitney, Paul; Thomas, Jim

    2004-03-23

    Disclosed are association rule identification and visualization methods, systems, and apparatus. An association rule in data mining is an implication of the form X → Y, where X is a set of antecedent items and Y is the consequent item. A unique visualization technique that provides multiple antecedent, consequent, confidence, and support information is disclosed to facilitate better presentation of large quantities of complex association rules.
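
    For a rule X → Y, the support and confidence values that such a visualization encodes are straightforward to compute from a transaction list. The sketch below is a generic illustration of those two quantities with made-up market-basket data, not the patented layout technique itself.

    ```python
    def rule_metrics(transactions, antecedent, consequent):
        """Support and confidence of the association rule: antecedent -> consequent."""
        antecedent, consequent = set(antecedent), {consequent}
        n = len(transactions)
        n_x = sum(antecedent <= t for t in transactions)                   # X present
        n_xy = sum((antecedent | consequent) <= t for t in transactions)   # X and Y present
        return {"support": n_xy / n,
                "confidence": n_xy / n_x if n_x else 0.0}

    # Hypothetical market-basket transactions.
    baskets = [{"bread", "milk"}, {"bread", "butter", "milk"},
               {"butter", "milk"}, {"bread", "butter"}]
    print(rule_metrics(baskets, antecedent={"bread"}, consequent="milk"))
    # {'support': 0.5, 'confidence': 0.666...}
    ```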

  3. Information Visualization for Biological Data.

    PubMed

    Czauderna, Tobias; Schreiber, Falk

    2017-01-01

    Visualization is a powerful method to present and explore a large amount of data. It is increasingly important in the life sciences and is used for analyzing different types of biological data, such as structural information, high-throughput data, and biochemical networks. This chapter gives a brief introduction to visualization methods for bioinformatics, presents two commonly used techniques in detail, and discusses a graphical standard for biological networks and cellular processes.

  4. Alternative representations of visual space

    NASA Technical Reports Server (NTRS)

    Arditi, Aries

    1989-01-01

    This paper discusses a method for delineating and testing hypotheses about the relationship between the retinal images and the three-dimensional visual space they serve. The method may be used under the conditions of changing eye position, occlusion by structures that are part of or are mounted on the observer, occlusions by environmental objects, defects of the visual field, and variables that alter the focus of environmental imagery on the retinas.

  5. Visual Analytics Technology Transition Progress

    SciTech Connect

    Scholtz, Jean; Cook, Kristin A.; Whiting, Mark A.; Lemon, Douglas K.; Greenblatt, Howard

    2009-09-23

    The authors describe the transition process for visual analytic tools and contrast it with the transition process for more traditional software tools. Taking these differences into account, the paper describes a user-oriented approach to technology transition, including a discussion of key factors that should be considered and adapted to each situation. The progress made in transitioning visual analytic tools in the past five years is described, and the challenges that remain are enumerated.

  6. Visual inspection for CTBT verification

    SciTech Connect

    Hawkins, W.; Wohletz, K.

    1997-03-01

    On-site visual inspection will play an essential role in future Comprehensive Test Ban Treaty (CTBT) verification. Although seismic and remote sensing techniques are the best understood and most developed methods for detection of evasive testing of nuclear weapons, visual inspection can greatly augment the certainty and detail of understanding provided by these more traditional methods. Not only can visual inspection offer "ground truth" in cases of suspected nuclear testing, but it also can provide accurate source location and testing media properties necessary for detailed analysis of seismic records. For testing in violation of the CTBT, an offending party may attempt to conceal the test, which most likely will be achieved by underground burial. While such concealment may not prevent seismic detection, evidence of test deployment, location, and yield can be disguised. In this light, if a suspicious event is detected by seismic or other remote methods, visual inspection of the event area is necessary to document any evidence that might support a claim of nuclear testing and provide data needed to further interpret seismic records and guide further investigations. However, the methods for visual inspection are not widely known nor appreciated, and experience is presently limited. Visual inspection can be achieved by simple, non-intrusive means, primarily geological in nature, and it is the purpose of this report to describe the considerations, procedures, and equipment required to field such an inspection.

  7. Illusions can warp visual space.

    PubMed

    Smeets, Jeroen B J; Sousa, Rita; Brenner, Eli

    2009-01-01

    Our perception of the space around us is not veridical. It has been reported that the systematic errors in our perception of visual space can be described by a reasonably well-behaving space (the resulting space is approximately projective and complies with an affine geometry). The evidence for this is that the perceived centre of a set of points is independent of the order of the steps taken to construct it. We investigated whether this is also the case in displays with well-known 2-D visual illusions. In two examples (Judd and Poggendorff illusions), we show that the perceptual centre of a set of points depends on how this centre is constructed. The misperceptions induced by visual illusions are thus of a different nature than our everyday misperceptions. We argue that the concept of perceived visual space is not very useful for describing human behaviour. We propose an alternative description whereby illusions do not deform a visual space, but only a single visual attribute, leaving other attributes unaffected.

  8. [Visual Texture Agnosia in Humans].

    PubMed

    Suzuki, Kyoko

    2015-06-01

    Visual object recognition requires the processing of both geometric and surface properties. Patients with occipital lesions may have visual agnosia, which is impairment in the recognition and identification of visually presented objects primarily through their geometric features. An analogous condition involving the failure to recognize an object by its texture may exist, which can be called visual texture agnosia. Here we present two cases with visual texture agnosia. Case 1 had left homonymous hemianopia and right upper quadrantanopia, along with achromatopsia, prosopagnosia, and texture agnosia, because of damage to his left ventromedial occipitotemporal cortex and right lateral occipito-temporo-parietal cortex due to multiple cerebral embolisms. Although he showed difficulty matching and naming textures of real materials, he could readily name visually presented objects by their contours. Case 2 had right lower quadrantanopia, along with impairment in stereopsis and recognition of texture in 2D images, because of subcortical hemorrhage in the left occipitotemporal region. He failed to recognize shapes based on texture information, whereas shape recognition based on contours was well preserved. Our findings, along with those of three reported cases with texture agnosia, indicate that there are separate channels for processing texture, color, and geometric features, and that the regions around the left collateral sulcus are crucial for texture processing.

  9. Saccade Adaptation and Visual Uncertainty

    PubMed Central

    Souto, David; Gegenfurtner, Karl R.; Schütz, Alexander C.

    2016-01-01

    Visual uncertainty may affect saccade adaptation in two complementary ways. First, an ideal adaptor should take into account the reliability of visual information for determining the amount of correction, predicting that increasing visual uncertainty should decrease adaptation rates. We tested this by comparing observers' direction discrimination and adaptation rates in an intra-saccadic-step paradigm. Second, clearly visible target steps may generate a slower adaptation rate since the error can be attributed to an external cause, instead of an internal change in the visuo-motor mapping that needs to be compensated. We tested this prediction by measuring saccade adaptation to different step sizes. Most remarkably, we found little correlation between estimates of visual uncertainty and adaptation rates and no slower adaptation rates with more visible step sizes. Additionally, we show that for low contrast targets backward steps are perceived as stationary after the saccade, but that adaptation rates are independent of contrast. We suggest that the saccadic system uses different position signals for adapting dysmetric saccades and for generating a trans-saccadic stable visual percept, explaining that saccade adaptation is found to be independent of visual uncertainty. PMID:27252635

  10. Novel Scientific Visualization Interfaces for Interactive Information Visualization and Sharing

    NASA Astrophysics Data System (ADS)

    Demir, I.; Krajewski, W. F.

    2012-12-01

    As geoscientists are confronted with increasingly massive datasets from environmental observations to simulations, one of the biggest challenges is having the right tools to gain scientific insight from the data and communicate the understanding to stakeholders. Recent developments in web technologies make it easy to manage, visualize and share large data sets with general public. Novel visualization techniques and dynamic user interfaces allow users to interact with data, and modify the parameters to create custom views of the data to gain insight from simulations and environmental observations. This requires developing new data models and intelligent knowledge discovery techniques to explore and extract information from complex computational simulations or large data repositories. Scientific visualization will be an increasingly important component to build comprehensive environmental information platforms. This presentation provides an overview of the trends and challenges in the field of scientific visualization, and demonstrates information visualization and communication tools in the Iowa Flood Information System (IFIS), developed within the light of these challenges. The IFIS is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to and visualization of flood inundation maps, real-time flood conditions, flood forecasts both short-term and seasonal, and other flood-related data for communities in Iowa. The key element of the system's architecture is the notion of community. Locations of the communities, those near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces allow access to inundation maps for different stage and return period values, and flooding scenarios with contributions from multiple rivers. Real-time and historical data of water levels, gauge heights, and

  11. A Link between Visual Disambiguation and Visual Memory

    PubMed Central

    Hegdé, Jay; Kersten, Daniel

    2011-01-01

    Sensory information in the retinal image is typically too ambiguous to support visual object recognition by itself. Theories of visual disambiguation posit that in order to disambiguate, and thus interpret, the incoming images, the visual system must integrate the sensory information with prior knowledge of the visual world. However, the underlying neural mechanisms remain unclear. Using functional magnetic resonance imaging (fMRI) of human subjects, we have found evidence for functional specialization for storing disambiguating information in memory vs. interpreting incoming ambiguous images. Subjects viewed two-tone, ‘Mooney’ images, which are typically ambiguous when seen for the first time, but are quickly disambiguated after viewing the corresponding unambiguous color images. Activity in one set of regions, including a region in the medial parietal cortex previously reported to play a key role in Mooney image disambiguation, closely reflected memory for previously seen color images, but not the subsequent disambiguation of Mooney images. A second set of regions, including the superior temporal sulcus, showed the opposite pattern, in that their responses closely reflected the subjects’ percepts of the disambiguated Mooney images on a stimulus-to-stimulus basis, but not the memory of the corresponding color images. Functional connectivity between the two sets of regions was stronger during those trials in which the disambiguated percept was stronger. This functional interaction between brain regions that specialize in storing disambiguating information in memory vs. interpreting incoming ambiguous images may represent a general mechanism by which prior knowledge disambiguates visual sensory information. PMID:21068318

  12. Head Tracking of Auditory, Visual, and Audio-Visual Targets

    PubMed Central

    Leung, Johahn; Wei, Vincent; Burgess, Martin; Carlile, Simon

    2016-01-01

    The ability to actively follow a moving auditory target with our heads remains unexplored, even though it is a common behavioral response. Previous studies of auditory motion perception have focused on the condition where the subjects are passive. The current study examined head tracking behavior to a moving auditory target along a horizontal 100° arc in the frontal hemisphere, with velocities ranging from 20 to 110°/s. By integrating high fidelity virtual auditory space with a high-speed visual presentation we compared tracking responses of auditory targets against visual-only and audio-visual “bisensory” stimuli. Three metrics were measured: onset, RMS, and gain error. The results showed that tracking accuracy (RMS error) varied linearly with target velocity, with a significantly higher rate in audition. Also, when the target moved faster than 80°/s, onset and RMS error were significantly worse in audition than in the other modalities, while responses in the visual and bisensory conditions were statistically identical for all metrics measured. Lastly, audio-visual facilitation was not observed when tracking bisensory targets. PMID:26778952
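
    The three metrics named in this abstract (onset, RMS, and gain error) can be computed directly from matched target and head trajectories. The sketch below is a generic illustration with simulated data and an arbitrary onset threshold, not the study's analysis code.

    ```python
    import numpy as np

    def tracking_metrics(t, target_deg, head_deg, onset_threshold=2.0):
        """Onset time (s), RMS error (deg) and velocity gain of a tracking response."""
        moving = np.abs(head_deg - head_deg[0]) > onset_threshold
        onset = t[np.argmax(moving)] if moving.any() else np.nan
        rms_error = np.sqrt(np.mean((head_deg - target_deg) ** 2))
        head_vel = np.gradient(head_deg, t)
        target_vel = np.gradient(target_deg, t)
        gain = (np.mean(head_vel[moving]) / np.mean(target_vel[moving])
                if moving.any() else np.nan)
        return onset, rms_error, gain

    # Simulated 60 deg/s target sweep and a lagging, lower-gain head response.
    t = np.linspace(0, 1.5, 301)
    target = 60.0 * t - 50.0
    head = 0.9 * 60.0 * np.clip(t - 0.15, 0.0, None) - 50.0
    print(tracking_metrics(t, target, head))
    ```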

  13. Which visual functions depend on intermediate visual regions? Insights from a case of developmental visual form agnosia.

    PubMed

    Gilaie-Dotan, Sharon

    2016-03-01

    A key question in visual neuroscience is the causal link between specific brain areas and perceptual functions; which regions are necessary for which visual functions? While the contribution of primary visual cortex and high-level visual regions to visual perception has been extensively investigated, the contribution of intermediate visual areas (e.g. V2/V3) to visual processes remains unclear. Here I review more than 20 visual functions (early, mid, and high-level) of LG, a developmental visual agnosic and prosopagnosic young adult, whose intermediate visual regions function in a significantly abnormal fashion as revealed through extensive fMRI and ERP investigations. While expectedly, some of LG's visual functions are significantly impaired, some of his visual functions are surprisingly normal (e.g. stereopsis, color, reading, biological motion). During the period of eight-year testing described here, LG trained on a perceptual learning paradigm that was successful in improving some but not all of his visual functions. Following LG's visual performance and taking into account additional findings in the field, I propose a framework for how different visual areas contribute to different visual functions, with an emphasis on intermediate visual regions. Thus, although rewiring and plasticity in the brain can occur during development to overcome and compensate for hindering developmental factors, LG's case seems to indicate that some visual functions are much less dependent on strict hierarchical flow than others, and can develop normally in spite of abnormal mid-level visual areas, thereby probably less dependent on intermediate visual regions.

  14. Body image, visual working memory and visual mental imagery.

    PubMed

    Darling, Stephen; Uytman, Clare; Allen, Richard J; Havelka, Jelena; Pearson, David G

    2015-01-01

    Body dissatisfaction (BD) is a highly prevalent feature amongst females in society, with the majority of individuals regarding themselves to be overweight compared to their personal ideal, and very few self-describing as underweight. To date, explanations of this dramatic pattern have centred on extrinsic social and media factors, or intrinsic factors connected to individuals' knowledge and belief structures regarding eating and body shape, with little research examining links between BD and basic cognitive mechanisms. This paper reports a correlational study in which visual and executive cognitive processes that could potentially impact on BD were assessed. Visual memory span and self-rated visual imagery were found to be predictive of BD, alongside a measure of inhibition derived from the Stroop task. In contrast, spatial memory and global precedence were not related to BD. Results are interpreted with reference to the influential multi-component model of working memory.

  15. Visual acuity, color vision, and visual search performance at sea.

    PubMed

    Donderi, D C

    1994-03-01

    Visual acuity and color vision were tested during a search and rescue exercise at sea. Fifty-seven watchkeepers searched for orange and yellow life rafts during daylight and for lighted and unlighted life rafts at night with night vision goggles. There were 588 individual watches of one hour each. Measures of wind, waves, and weather were used as covariates. Daytime percentage detection was positively correlated with low-contrast visual acuity and negatively correlated with error scores on Dvorine pseudoisochromatic plates and the Farnsworth color test. Performance was better during the first half-hour of the watch. Efficiency calculations show that color vision selective screening at one standard deviation above the mean would increase daylight search performance by 10% and that one standard deviation visual acuity selection screening would increase performance by 12%. There was no relationship between either acuity or color vision and life raft detection using night vision goggles.

  16. Visual snow: A thalamocortical dysrhythmia of the visual pathway?

    PubMed

    Lauschke, Jenny L; Plant, Gordon T; Fraser, Clare L

    2016-06-01

    In this paper we review the visual snow (VS) characteristics of a case cohort of 32 patients. History of symptoms and associated co-morbidities, ophthalmic examination, previous investigations and the results of intuitive colourimetry were collected and reviewed. VS symptoms follow a stereotypical description and are strongly associated with palinopsia, migraine and tinnitus, but also tremor. The condition is a chronic one and often results in misdiagnosis with psychiatric disorders or malingering. Colour filters, particularly in the yellow-blue colour spectrum, subjectively reduced symptoms of VS. There is neurobiological evidence for the syndrome of VS that links it with other disorders of visual and sensory processing such as migraine and tinnitus. Colour filters in the blue-yellow spectrum may alter the koniocellular pathway processing, which has a regulatory effect on background electroencephalographic rhythms, and may add weight to the hypothesis that VS is a thalamocortical dysrhythmia of the visual pathway.

  17. 49 CFR 213.365 - Visual inspections.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    49 CFR 213.365, Visual inspections: (a) All track shall be visually inspected in accordance with the schedule prescribed ..., electrical, and other track inspection devices may be used to supplement visual inspection. If a vehicle ...

  18. 38 CFR 4.77 - Visual fields.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    38 CFR 4.77, Visual fields (Schedule for Rating Disabilities, Disability Ratings, The Organs of Special Sense): (a) Examination of visual ... who are well adapted to intraocular lens implant or contact lens correction, visual field ...

  19. Development of a Computerized Visual Search Test

    ERIC Educational Resources Information Center

    Reid, Denise; Babani, Harsha; Jon, Eugenia

    2009-01-01

    Visual attention and visual search are the features of visual perception, essential for attending and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. The development information including the format of the test will be described. The test was designed…

  20. Learning Related Visual Problems. ERIC Fact Sheet.

    ERIC Educational Resources Information Center

    ERIC Clearinghouse on Handicapped and Gifted Children, Reston, VA.

    This fact sheet defines vision, outlines the visual skills needed for school achievement (ocular motility, binocularity, eye-hand coordination skills, and visual form perception), and describes how visual problems are evaluated and treated. The fact sheet also lists clues to look for when a visual problem is suspected, including the appearance of…

  1. Using Visuals in Agricultural Extension Programs.

    ERIC Educational Resources Information Center

    International Cooperation Administration (Dept. of State), Washington, DC.

    One of a series of booklets designed to answer questions about agricultural communications is presented. This booklet illustrates how visual teaching speeds learning and effects faster agricultural progress. Chapter titles include: (1) Visuals and Learning, (2) Visuals in Extension Teaching; (3) Presentation Visuals; (4) Drama and Music; (5)…

  2. Fifteen Reasons To Study Visual Literacy.

    ERIC Educational Resources Information Center

    Sutton, Ronald E.

    This paper is a report on a decade of teaching visual literacy at the American University (Washington, D.C.). Visual literacy is defined as an awareness that comes with appropriate development of basic visual and aural competencies. The 15 reasons for studying visual literacy are perception, drawing, expression, brain awareness, design elements,…

  3. Laser Optometric Assessment Of Visual Display Viewability

    NASA Astrophysics Data System (ADS)

    Murch, Gerald M.

    1983-08-01

    Through the technique of laser optometry, measurements of a display user's visual accommodation and binocular convergence were used to assess the visual impact of display color, technology, contrast, and work time. The studies reported here indicate the potential of visual-function measurements as an objective means of improving the design of visual displays.

  4. Visualizing quantitative microscopy data: History and challenges.

    PubMed

    Sailem, Heba Z; Cooper, Sam; Bakal, Chris

    2016-01-01

    Data visualization is a fundamental aspect of science. In the context of microscopy-based studies, visualization typically involves presentation of the images themselves. However, data visualization is challenging when microscopy experiments entail imaging of millions of cells, and complex cellular phenotypes are quantified in a high-content manner. Most well-established visualization tools are inappropriate for displaying high-content data, which has driven the development of new visualization methodology. In this review, we discuss how data has been visualized in both classical and high-content microscopy studies; as well as the advantages, and disadvantages, of different visualization methods.

  5. [Responses of squirrel visual cortex neurons to patterned visual stimuli].

    PubMed

    Supin, A Ia

    1975-01-01

    The responses of visual cortical neurons to patterned visual stimuli were studied in the squirrel (Sciurus vulgaris). Direction-selective, orientation-selective, and non-selective neurons were observed. Most direction-selective and non-selective neurons were sensitive to high speeds of stimulus movement (hundreds of deg/s). The direction-selective neurons exhibited their selectivity at such high speeds in spite of the short time the stimulus spent moving through the receptive field. Orientation-selective neurons (with simple or complex receptive fields) were sensitive to lower speeds of stimulus movement (tens of deg/s). Some mechanisms underlying the properties described are discussed.

  6. Postdictive Modulation of Visual Orientation

    PubMed Central

    Kawabe, Takahiro

    2012-01-01

    The present study investigated how visual orientation is modulated by subsequent orientation inputs. Observers were presented a near-vertical Gabor patch as a target, followed by a left- or right-tilted second Gabor patch as a distracter in the spatial vicinity of the target. The task of the observers was to judge whether the target was right- or left-tilted (Experiment 1) or whether the target was vertical or not (Supplementary experiment). The judgment was biased toward the orientation of the distracter (the postdictive modulation of visual orientation). The judgment bias peaked when the target and distracter were temporally separated by 100 ms, indicating a specific temporal mechanism for this phenomenon. However, when the visibility of the distracter was reduced via backward masking, the judgment bias disappeared. On the other hand, the low-visibility distracter could still cause a simultaneous orientation contrast, indicating that the distracter orientation is still processed in the visual system (Experiment 2). Our results suggest that the postdictive modulation of visual orientation stems from spatiotemporal integration of visual orientation on the basis of a slow feature matching process. PMID:22393421

  7. Cerebral correlates of visual awareness.

    PubMed

    Milner, A D

    1995-09-01

    While it may be a long time before we can specify the mechanisms through which a brain process achieves awareness, it may be possible to determine as a first step whether awareness is limited to the products of certain kinds of processing. In the domain of vision, for example, perceptual awareness might only be attainable in association with object-centred coding, configural representations of space, and other such forms of abstracted (non-retinocentric) coding. It appears that these forms of visual coding are anatomically restricted to telencephalic structures, and indeed it has been argued that they may be peculiar to, or at least visually dependent upon, the 'ventral stream' of visual areas within the cortex. It is suggested here that such a brain process would still not be able to enter visual awareness unless it was selectively amplified through neuronal gating of the kind that has been shown to be correlated with selective spatial attention. The present paper explores the extent to which this putative dual requirement for visual consciousness might form a basis for understanding the various phenomena of "covert vision" seen in patients suffering from hemianopia, apperceptive agnosia, and unilateral spatial neglect.

  8. Stereoscopic applications for design visualization

    NASA Astrophysics Data System (ADS)

    Gilson, Kevin J.

    2007-02-01

    Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinkerhoff (PB) is a transportation consulting firm that has used digital visualization tools since their inception and has helped pioneer the application of those tools to large-scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients and decision makers in stereo. These presentations create more immersive and spatially realistic presentations of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.

  9. Reconfigurable visualization for HWIL simulation

    NASA Astrophysics Data System (ADS)

    Buford, James A., Jr.; Garcia, Tricia A.; Bowden, Mark H.

    1998-08-01

    The U.S. Army Aviation and Missile Command (AMCOM) Missile Research, Engineering, and Development Center (MRDEC) Advanced Simulation Center has recognized the need for re-configurable visualization in support of hardware-in-the-loop (HWIL) simulations. AMCOM MRDEC made the development of re-configurable visualization tools a priority. SimSight, developed at AMCOM MRDEC, is designed to provide 3D visualization to HWIL simulations and after-action reviews. Leveraging the latest hardware and software visual simulation technologies, SimSight displays a concise 3D view of the simulated world, providing the HWIL engineer with unprecedented power to quickly analyze the progress of a simulation from pre-launch to impact. Providing 3D visualization is only half the solution; data management, distribution, and analysis form the companion problem, which AMCOM MRDEC is addressing with the development of Fulcrum, a cross-platform data capture, distribution, analysis, and display framework of which SimSight will become a component.

  10. Rethinking map legends with visualization.

    PubMed

    Dykes, Jason; Wood, Jo; Slingsby, Aidan

    2010-01-01

    This design paper presents new guidance for creating map legends in a dynamic environment. Our contribution is a set of guidelines for legend design in a visualization context and a series of illustrative themes through which they may be expressed. These are demonstrated in an applications context through interactive software prototypes. The guidelines are derived from cartographic literature and in liaison with EDINA, who provide digital mapping services for UK tertiary education. They enhance approaches to legend design that have evolved for static media with visualization by considering selection, layout, symbols, position, dynamism, and design and process. Broad visualization legend themes include: The Ground Truth Legend, The Legend as Statistical Graphic, and The Map is the Legend. Together, these concepts enable us to augment legends with dynamic properties that address specific needs, rethink their nature and role, and contribute to a wider re-evaluation of maps as artifacts of usage rather than statements of fact. As a consequence of this work, EDINA has acquired funding to enhance their clients with visualization legends that use these concepts. The guidance applies to the design of a wide range of legends and keys used in cartography and information visualization.

  11. Interobject grouping facilitates visual awareness.

    PubMed

    Stein, Timo; Kaiser, Daniel; Peelen, Marius V

    2015-01-01

    In organizing perception, the human visual system takes advantage of regularities in the visual input to perceptually group related image elements. Simple stimuli that can be perceptually grouped based on physical regularities, for example by forming an illusory contour, have a competitive advantage in entering visual awareness. Here, we show that regularities that arise from the relative positioning of complex, meaningful objects in the visual environment also modulate visual awareness. Using continuous flash suppression, we found that pairs of objects that were positioned according to real-world spatial regularities (e.g., a lamp above a table) accessed awareness more quickly than the same object pairs shown in irregular configurations (e.g., a table above a lamp). This advantage was specific to upright stimuli and abolished by stimulus inversion, meaning that it did not reflect physical stimulus confounds or the grouping of simple image elements. Thus, knowledge of the spatial configuration of objects in the environment shapes the contents of conscious perception.

  12. Visual search in virtual environments

    NASA Astrophysics Data System (ADS)

    Stark, Lawrence W.; Ezumi, Koji; Nguyen, Tho; Paul, R.; Tharp, Gregory K.; Yamashita, H. I.

    1992-08-01

    A key task in virtual environments is visual search. To obtain quantitative measures of human performance and documentation of visual search strategies, we have used three experimental arrangements--eye, head, and mouse control of viewing windows--by exploiting various combinations of helmet-mounted-displays, graphics workstations, and eye movement tracking facilities. We contrast two different categories of viewing strategies: one, for 2D pictures with large numbers of targets and clutter scattered randomly; the other for quasi-natural 3D scenes with targets and non-targets placed in realistic, sensible positions. Different searching behaviors emerge from these contrasting search conditions, reflecting different visual and perceptual modes. A regular 'searchpattern' is a systematic, repetitive, idiosyncratic sequence of movements carrying the eye to cover the entire 2D scene. Irregular 'searchpatterns' take advantage of wide windows and the wide human visual lobe; here, hierarchical detection and recognition are performed with the appropriate capabilities of the 'two visual systems'. The 'searchpath', also efficient, repetitive and idiosyncratic, provides only a small set of fixations to check continually the smaller number of targets in the naturalistic 3D scene; likely, searchpaths are driven by top-down spatial models. If the viewed object is known and able to be named, then a hypothesized, top-down cognitive model drives active looking in the 'scanpath' mode, again continually checking important subfeatures of the object. Spatial models for searchpaths may be primitive predecessors, in the evolutionary history of animals, of cognitive models for scanpaths.

  13. Interactive visualization of vegetation dynamics

    USGS Publications Warehouse

    Reed, B.C.; Swets, D.; Bard, L.; Brown, J.; Rowland, J.

    2001-01-01

    Satellite imagery provides a mechanism for observing seasonal dynamics of the landscape that have implications for near real-time monitoring of agriculture, forest, and range resources. This study illustrates a technique for visualizing timely information on key events during the growing season (e.g., onset, peak, duration, and end of growing season), as well as the status of the current growing season with respect to the recent historical average. Using time-series analysis of normalized difference vegetation index (NDVI) data from the advanced very high resolution radiometer (AVHRR) satellite sensor, seasonal dynamics can be derived. We have developed a set of Java-based visualization and analysis tools to make comparisons between the seasonal dynamics of the current year with those from the past twelve years. In addition, the visualization tools allow the user to query underlying databases such as land cover or administrative boundaries to analyze the seasonal dynamics of areas of their own interest. The Java-based tools (data exploration and visualization analysis or DEVA) use a Web-based client-server model for processing the data. The resulting visualization and analysis, available via the Internet, is of value to those responsible for land management decisions, resource allocation, and at-risk population targeting.
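
    The DEVA tools themselves are not reproduced here, but the kind of time-series analysis described (deriving onset, peak, duration, and end of the growing season from NDVI, and comparing one year against the historical record) can be sketched roughly as follows. This is a minimal Python illustration under assumed conventions: the 50%-of-amplitude threshold rule, the function name season_metrics, and the synthetic data are placeholders, not the USGS algorithm.

        import numpy as np

        def season_metrics(ndvi, threshold_frac=0.5):
            """Derive rough seasonal metrics from one year of NDVI composites.

            ndvi: 1-D array of NDVI values for a single year (e.g., biweekly composites).
            threshold_frac: fraction of the seasonal amplitude used to mark onset/end
            of season (an illustrative rule, not the operational algorithm).
            """
            ndvi = np.asarray(ndvi, dtype=float)
            base, peak_val = ndvi.min(), ndvi.max()
            threshold = base + threshold_frac * (peak_val - base)
            above = np.where(ndvi >= threshold)[0]
            onset, end = int(above[0]), int(above[-1])   # first/last composite above threshold
            peak = int(np.argmax(ndvi))                  # composite with maximum greenness
            return {"onset": onset, "peak": peak, "end": end,
                    "duration": end - onset, "peak_ndvi": float(peak_val)}

        # Compare the current year against the mean of previous years (synthetic data).
        history = np.random.default_rng(0).uniform(0.2, 0.8, size=(12, 26))  # 12 years x 26 composites
        current = history.mean(axis=0) + 0.05                                # stand-in for this year
        anomaly = current - history.mean(axis=0)
        print(season_metrics(current), "mean NDVI anomaly:", float(anomaly.mean()))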

  14. Feature-Based Memory-Driven Attentional Capture: Visual Working Memory Content Affects Visual Attention

    ERIC Educational Resources Information Center

    Olivers, Christian N. L.; Meijer, Frank; Theeuwes, Jan

    2006-01-01

    In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by…

  15. VAP-CAP: A Procedure to Assess the Visual Functioning of Young Visually Impaired Children.

    ERIC Educational Resources Information Center

    Blanksby, D. C.; Langford, P. E.

    1993-01-01

    This article describes a visual assessment procedure (VAP) which evaluates capacity, attention, and processing (CAP) of infants and preschool children with visual impairments. The two-level battery considers, first, visual capacity and basic visual attention and, second, visual perceptual and cognitive abilities. A theoretical analysis of the…

  16. Visualizing Parallel Computer System Performance

    NASA Technical Reports Server (NTRS)

    Malony, Allen D.; Reed, Daniel A.

    1988-01-01

    Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed, almost irresistible, incentives to quantify parallel system performance using a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels, it also requires both static and dynamic characterizations. Static or average behavior analysis may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false color data, the importance of dynamic, visual scientific data presentation has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.

  17. Visualization Software for Molecular Assemblies

    PubMed Central

    Goddard, Thomas D; Ferrin, Thomas E

    2007-01-01

    Summary Software for viewing three-dimensional models and maps of viruses, ribosomes, filaments and other molecular assemblies is advancing on many fronts. New developments include molecular representations that offer better control over level of detail, lighting that improves the perception of depth, and two-dimensional projections that simplify data interpretation. Programmable graphics processors offer quality, speed and visual effects not previously possible, while 3D printers, haptic interaction devices, and auto-stereo displays show promise in more naturally engaging our senses. Visualization methods are developed by diverse groups of researchers with differing goals: experimental biologists, database developers, computer scientists, and package developers. We survey recent developments and problems faced by the developer community in bringing innovative visualization methods into widespread use. PMID:17728125

  18. Discovering Knowledge Through Visual Analysis

    SciTech Connect

    Thomas, James J.; Cowley, Paula J.; Kuchar, Olga A.; Nowell, Lucy T.; Thomson, Judi R.; Wong, Pak C.

    2001-07-12

    This paper describes a vision for the near future in digital content analysis as it relates to the creation, verification, and presentation of knowledge. This paper focuses on how visualization enables humans to gain knowledge. Visualization, in this context, is not just the picture representing the data, but a two-way interaction between the human and their information resources for the purposes of knowledge discovery, verification, and the sharing of knowledge with others. We present PNNL-developed software to demonstrate how current technology can use visualization tools to analyze large diverse collections of text. This will be followed by lessons learned and the presentation of a core concept for a new human information discourse.

  19. Visual Analytics for MOOC Data.

    PubMed

    Qu, Huamin; Chen, Qing

    2015-01-01

    With the rise of massive open online courses (MOOCs), tens of millions of learners can now enroll in more than 1,000 courses via MOOC platforms such as Coursera and edX. As a result, a huge amount of data has been collected. Compared with traditional education records, the data from MOOCs has much finer granularity and also contains new pieces of information. It is the first time in history that such comprehensive data related to learning behavior has become available for analysis. What roles can visual analytics play in this MOOC movement? The authors survey the current practice and argue that MOOCs provide an opportunity for visualization researchers and that visual analytics systems for MOOCs can benefit a range of end users such as course instructors, education researchers, students, university administrators, and MOOC providers.

  20. Flow, affect and visual creativity.

    PubMed

    Cseh, Genevieve M; Phillips, Louise H; Pearson, David G

    2015-01-01

    Flow (being in the zone) is purported to have positive consequences in terms of affect and performance; however, there is no empirical evidence about these links in visual creativity. Positive affect often--but inconsistently--facilitates creativity, and both may be linked to experiencing flow. This study aimed to determine relationships between these variables within visual creativity. Participants performed the creative mental synthesis task to simulate the creative process. Affect change (pre- vs. post-task) and flow were measured via questionnaires. The creativity of synthesis drawings was rated objectively and subjectively by judges. Findings empirically demonstrate that flow is related to affect improvement during visual creativity. Affect change was linked to productivity and self-rated creativity, but no other objective or subjective performance measures. Flow was unrelated to all external performance measures but was highly correlated with self-rated creativity; flow may therefore motivate perseverance towards eventual excellence rather than provide direct cognitive enhancement.

  1. A Presentation of Spectacular Visualizations

    NASA Technical Reports Server (NTRS)

    Hasler, Fritz; Pierce, Hal; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The NASA/NOAA/AMS Earth Science Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to Florida and the KSC Visitor's Center. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest International global satellite weather movies including killer hurricanes & tornadic thunderstorms. See the latest spectacular images from NASA and NOAA remote sensing missions like GOES, NOAA, TRMM, SeaWiFS, Landsat7, & new Terra, which will be visualized with state-of-the-art tools. Shown in High Definition TV resolution (2048 x 768 pixels) are visualizations of hurricanes Lenny, Floyd, Georges, Mitch, Fran and Linda. See visualizations featured on covers of magazines like Newsweek, TIME, National Geographic, Popular Science and on National & International Network TV. New Digital Earth visualization tools allow us to roam & zoom through massive global images including a Landsat tour of the US, with drill-downs into major cities using 1 m resolution spy-satellite technology from the Space Imaging IKONOS satellite. Spectacular new visualizations of the global atmosphere & oceans are shown. See massive dust storms sweeping across Africa. See ocean vortexes and currents that bring up the nutrients to feed tiny plankton and draw the fish, giant whales and fishermen. See how the ocean blooms in response to these currents and El Nino/La Nina climate changes. The demonstration is interactively driven by an SGI Octane Graphics Supercomputer with dual CPUs, 5 Gigabytes of RAM and Terabyte disk using two projectors across the super-sized Universe Theater panoramic screen.

  2. A Presentation of Spectacular Visualizations

    NASA Technical Reports Server (NTRS)

    Hasler, Fritz; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The NASA/NOAA/AMS Earth Science Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to Florida and the KSC Visitor's Center. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest International global satellite weather movies including killer hurricanes and tornadic thunderstorms. See the latest spectacular images from NASA and the National Oceanic and Atmospheric Administration (NOAA) remote sensing missions like the Geostationary Operational Environmental Satellites (GOES), NOAA, Tropical Rainfall Measuring Mission (TRMM), SeaWiFS, Landsat7, and new Terra, which will be visualized with state-of-the-art tools. Shown in High Definition TV resolution (2048 x 768 pixels) are visualizations of hurricanes Lenny, Floyd, Georges, Mitch, Fran, and Linda. See visualizations featured on covers of magazines like Newsweek, TIME, National Geographic, Popular Science, and on National and International Network TV. New Digital Earth visualization tools allow us to roam and zoom through massive global images including a Landsat tour of the US, with drill-downs into major cities using one meter resolution spy-satellite technology from the Space Imaging IKONOS satellite. Spectacular new visualizations of the global atmosphere and oceans are shown. See massive dust storms sweeping across Africa. See ocean vortexes and currents that bring up the nutrients to feed tiny plankton and draw the fish, giant whales and fishermen. See how the ocean blooms in response to these currents and El Nino/La Nina climate changes. The demonstration is interactively driven by an SGI Octane Graphics Supercomputer with dual CPUs, 5 Gigabytes of RAM and Terabyte disk using two projectors across the super-sized Universe Theater panoramic screen.

  3. Helping the Visually Impaired Student with Electronic Video Visual Aids.

    ERIC Educational Resources Information Center

    Visualtek, Inc., Santa Monica, CA.

    THE FOLLOWING IS THE FULL TEXT OF THIS DOCUMENT: Video visual aids are Closed Circuit TV systems (CCTV's) which magnify print and enlarge it electronically upon a screen so partially sighted persons with some residual vision can read and write normal size print. These devices are in use around the world in homes, schools, industries and libraries,…

  4. Visual Communication: Integrating Visual Instruction into Business Communication Courses

    ERIC Educational Resources Information Center

    Baker, William H.

    2006-01-01

    Business communication courses are ideal for teaching visual communication principles and techniques. Many assignments lend themselves to graphic enrichment, such as flyers, handouts, slide shows, Web sites, and newsletters. Microsoft Publisher and Microsoft PowerPoint are excellent tools for these assignments, with Publisher being best for…

  5. Visual Function and Visual Acuity in an Urban Adult Population.

    ERIC Educational Resources Information Center

    Katz, J.; Tielsch, J. M.

    1996-01-01

    A survey of 6,850 adults age 40 or over concerning the effects of poor vision found that difficulties with reading or other near-vision activities were the most common complaint. One-fourth reported limitations in activities because of poor vision. Factors associated with loss of visual function were general health status, educational level, and…

  6. Course File for "Documentary Film, Visual Anthropology, and Visual Sociology."

    ERIC Educational Resources Information Center

    Tomaselli, Keyan G.; Shepperson, Arnold

    1997-01-01

    Describes a course that addresses and relocates questions of representation and reconstruction in the context of reflexive explanations for ethnographic films. Examines questions of power/power relations; anthropological and media construction of the Other; and trends in visual anthropology. Contextualizes the course within the human experience of…

  7. Making Information Visual: Seventh Grade Art Information and Visual Literacy

    ERIC Educational Resources Information Center

    Shoemaker, Joel; Schau, Elizabeth; Ayers, Rachael

    2008-01-01

    Seventh grade students entering South East Junior High in Iowa City come from eight elementary feeder schools, as well as from schools around the world. Their information literacy skills and knowledge of reference sources vary, but since all seventh graders and new eighth graders are required to take one trimester of Visual Studies, all entering…

  8. Thinking Visually: Using Visual Media in the College Classroom

    ERIC Educational Resources Information Center

    Tobolowsky, Barbara F.

    2007-01-01

    Getting through to students in the classroom continues to be one of the great mysteries of an educator's life. What will capture their attention, and, more important, what will transform their thinking? Film industry veteran and educator Barbara Tobolowsky returned to her roots in visual media to find answers. Using video clips to introduce a…

  9. Similarity relations in visual search predict rapid visual categorization

    PubMed Central

    Mohan, Krithika; Arun, S. P.

    2012-01-01

    How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pair-wise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted using its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership; (b) the longer times to categorize atypical objects; and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation. PMID:23092947
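
    The authors' predictive model is not reproduced here. Purely to illustrate the shape of such an analysis (regressing categorization time on mean within-category and outside-category similarity), a toy least-squares fit might look like the sketch below; the synthetic data and the linear form are placeholders, not the published model.

        import numpy as np

        rng = np.random.default_rng(3)
        n_objects = 40
        within_sim = rng.uniform(0.3, 0.9, n_objects)    # mean similarity to same-category items
        outside_sim = rng.uniform(0.1, 0.7, n_objects)   # mean similarity to other-category items
        # Fake categorization times: faster when within-category similarity is high,
        # slower when the object also resembles items outside its category.
        rt = 0.9 - 0.4 * within_sim + 0.5 * outside_sim + rng.normal(0, 0.03, n_objects)

        # Ordinary least squares: rt ~ b0 + b1*within + b2*outside
        X = np.column_stack([np.ones(n_objects), within_sim, outside_sim])
        coef, *_ = np.linalg.lstsq(X, rt, rcond=None)
        predicted = X @ coef
        r = np.corrcoef(predicted, rt)[0, 1]
        print("fitted coefficients:", coef.round(3), "correlation:", round(float(r), 3))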

  10. AWE: Aviation Weather Data Visualization

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Lodha, Suresh K.

    2001-01-01

    The two official sources for aviation weather reports both require the pilot to mentally visualize the provided information. In contrast, our system, Aviation Weather Environment (AWE), presents aviation-specific weather available to pilots in an easy-to-visualize form. We start with a computer-generated textual briefing for a specific area. We map this briefing onto a grid specific to the pilot's route that includes only information relevant to his flight as defined by route, altitude, true airspeed, and proposed departure time. By modifying various parameters, the pilot can use AWE as a planning tool as well as a weather briefing tool.

  11. Visually Guided Control of Movement

    NASA Technical Reports Server (NTRS)

    Johnson, Walter W. (Editor); Kaiser, Mary K. (Editor)

    1991-01-01

    The papers given at an intensive, three-week workshop on visually guided control of movement are presented. The participants were researchers from academia, industry, and government, with backgrounds in visual perception, control theory, and rotorcraft operations. The papers included invited lectures and preliminary reports of research initiated during the workshop. Three major topics are addressed: extraction of environmental structure from motion; perception and control of self motion; and spatial orientation. Each topic is considered from both theoretical and applied perspectives. Implications for control and display are suggested.

  12. Procedures for precap visual inspection

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Screening procedures for the final precap visual inspection of microcircuits used in electronic system components are described as an aid in training personnel unfamiliar with microcircuits. Processing techniques used in industry for the manufacture of monolithic and hybrid components are presented and imperfections that may be encountered during this inspection are discussed. Problem areas such as scratches, voids, adhesions, and wire bonding are illustrated by photomicrographs. This guide can serve as an effective tool in training personnel to perform precap visual inspections efficiently and reliably.

  13. Mobile medical visual information retrieval.

    PubMed

    Depeursinge, Adrien; Duc, Samuel; Eggel, Ivan; Müller, Henning

    2012-01-01

    In this paper, we propose mobile access to peer-reviewed medical information based on textual search and content-based visual image retrieval. Web-based interfaces designed for limited screen space were developed to query via web services a medical information retrieval engine optimizing the amount of data to be transferred in wireless form. Visual and textual retrieval engines with state-of-the-art performance were integrated. Results obtained show a good usability of the software. Future use in clinical environments has the potential of increasing quality of patient care through bedside access to the medical literature in context.

  14. Astronomy Data Visualization with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2015-08-01

    We present innovative methods and techniques for using Blender, a 3D software package, in the visualization of astronomical data. N-body simulations, data cubes, galaxy and stellar catalogs, and planetary surface maps can be rendered in high quality videos for exploratory data analysis. Blender's API is Python based, making it advantageous for use in astronomy with flexible libraries like astroPy. Examples will be exhibited that showcase the features of the software in astronomical visualization paradigms. 2D and 3D voxel texture applications, animations, camera movement, and composite renders are introduced to the astronomer's toolkit, along with how they mesh with different forms of data.
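
    Blender's bpy module is scriptable in Python. As a minimal sketch of one common pattern for bringing a catalog into Blender (loading point positions as mesh vertices for later shading and rendering), the following could be run inside Blender's bundled Python; the object names and the random stand-in catalog are illustrative assumptions, and this is not the authors' pipeline.

        # Run inside Blender's bundled Python (the bpy module is only available there).
        import bpy
        import numpy as np

        # Stand-in catalog: N random 3-D positions (e.g., source coordinates already
        # converted to Cartesian units suitable for the scene).
        points = np.random.default_rng(1).normal(size=(1000, 3)).tolist()

        # Build a mesh whose vertices are the catalog points; no edges or faces needed.
        mesh = bpy.data.meshes.new("catalog_points")
        mesh.from_pydata(points, [], [])
        mesh.update()

        # Wrap the mesh in an object and link it into the active collection so it renders.
        obj = bpy.data.objects.new("catalog", mesh)
        bpy.context.collection.objects.link(obj)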

  15. Acoustic visualizations using surface mapping.

    PubMed

    Siltanen, Samuel; Robinson, Philip W; Saarelma, Jukka; Pätynen, Jukka; Tervo, Sakari; Savioja, Lauri; Lokki, Tapio

    2014-06-01

    Sound visualizations have been an integral part of room acoustics studies for more than a century. As acoustic measurement techniques and knowledge of hearing evolve, acousticians need more intuitive ways to represent increasingly complex data. Microphone array processing now allows accurate measurement of spatio-temporal acoustic properties. However, the multidimensional data can be a challenge to display coherently. This letter details a method of mapping visual representations of acoustic reflections from a receiver position to the surfaces from which the reflections originated. The resulting animations are presented as a spatial acoustic analysis tool.

  16. Visual pattern degradation based image quality assessment

    NASA Astrophysics Data System (ADS)

    Wu, Jinjian; Li, Leida; Shi, Guangming; Lin, Weisi; Wan, Wenfei

    2015-08-01

    In this paper, we introduce a visual pattern degradation based full-reference (FR) image quality assessment (IQA) method. Research on visual recognition indicates that the human visual system (HVS) is highly adaptive to extract visual structures for scene understanding. Existing structure degradation based IQA methods mainly take local luminance contrast to represent structure, and measure quality as degradation on luminance contrast. In this paper, we suggest that structure includes not only luminance contrast but also orientation information. Therefore, we analyze the orientation characteristic for structure description. Inspired by the orientation selectivity mechanism in the primary visual cortex, we introduce a novel visual pattern to represent the structure of a local region. Then, the quality is measured as the degradations on both luminance contrast and visual pattern. Experimental results on five benchmark databases demonstrate that the proposed visual pattern can effectively represent visual structure and the proposed IQA method performs better than the existing IQA metrics.
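
    The precise visual-pattern construction is specific to the paper. As a heavily simplified illustration of the general idea (comparing reference and distorted images on both local luminance contrast and gradient orientation), one might compute something like the sketch below; the window size, the similarity formulas, and the pooling are arbitrary choices, not the proposed metric.

        import numpy as np
        from scipy.ndimage import uniform_filter, sobel

        def structure_maps(img, win=7):
            """Local luminance contrast (windowed std) and gradient orientation."""
            img = img.astype(float)
            mean = uniform_filter(img, win)
            contrast = np.sqrt(np.maximum(uniform_filter(img**2, win) - mean**2, 0.0))
            gx, gy = sobel(img, axis=1), sobel(img, axis=0)
            orientation = np.arctan2(gy, gx)
            return contrast, orientation

        def simple_structure_quality(ref, dist, eps=1e-6):
            """Toy full-reference score: similarity of contrast and orientation maps."""
            c_r, o_r = structure_maps(ref)
            c_d, o_d = structure_maps(dist)
            contrast_sim = (2 * c_r * c_d + eps) / (c_r**2 + c_d**2 + eps)
            orient_sim = np.cos(o_r - o_d) * 0.5 + 0.5       # 1 when orientations agree
            return float(np.mean(contrast_sim * orient_sim)) # pooled over the image

        rng = np.random.default_rng(0)
        ref = rng.uniform(0, 255, (64, 64))
        print(simple_structure_quality(ref, ref + rng.normal(0, 10, ref.shape)))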

  17. Auditory Motion Elicits a Visual Motion Aftereffect

    PubMed Central

    Berger, Christopher C.; Ehrsson, H. Henrik

    2016-01-01

    The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect—an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates. PMID:27994538

  18. Patient DF's visual brain in action: Visual feedforward control in visual form agnosia.

    PubMed

    Whitwell, Robert L; Milner, A David; Cavina-Pratesi, Cristiana; Barat, Masihullah; Goodale, Melvyn A

    2015-05-01

    Patient DF, who developed visual form agnosia following ventral-stream damage, is unable to discriminate the width of objects, performing at chance, for example, when asked to open her thumb and forefinger a matching amount. Remarkably, however, DF adjusts her hand aperture to accommodate the width of objects when reaching out to pick them up (grip scaling). While this spared ability to grasp objects is presumed to be mediated by visuomotor modules in her relatively intact dorsal stream, it is possible that it may rely abnormally on online visual or haptic feedback. We report here that DF's grip scaling remained intact when her vision was completely suppressed during grasp movements, and it still dissociated sharply from her poor perceptual estimates of target size. We then tested whether providing trial-by-trial haptic feedback after making such perceptual estimates might improve DF's performance, but found that they remained significantly impaired. In a final experiment, we re-examined whether DF's grip scaling depends on receiving veridical haptic feedback during grasping. In one condition, the haptic feedback was identical to the visual targets. In a second condition, the haptic feedback was of a constant intermediate width while the visual target varied trial by trial. Despite this incongruent feedback, DF still scaled her grip aperture to the visual widths of the target blocks, showing only normal adaptation to the false haptically-experienced width. Taken together, these results strengthen the view that DF's spared grasping relies on a normal mode of dorsal-stream functioning, based chiefly on visual feedforward processing.

  19. Visualization of Longitudinal Student Data

    ERIC Educational Resources Information Center

    Bendinelli, Anthony J.; Marder, M.

    2012-01-01

    We use visualization to find patterns in educational data. We represent student scores from high-stakes exams as flow vectors in fluids, define two types of streamlines and trajectories, and show that differences between streamlines and trajectories are due to regression to the mean. This issue is significant because it determines how quickly…

  20. Collaborative interactive visualization: exploratory concept

    NASA Astrophysics Data System (ADS)

    Mokhtari, Marielle; Lavigne, Valérie; Drolet, Frédéric

    2015-05-01

    Dealing with an ever increasing amount of data is a challenge that military intelligence analysts or teams of analysts face day to day. Increased individual and collective comprehension goes through collaboration between people. The better the collaboration, the better the comprehension. Nowadays, various technologies support and enhance collaboration by allowing people to connect and collaborate in settings as varied as mobile devices, networked computers, display walls, and tabletop surfaces, to name just a few. A powerful collaboration system includes traditional and multimodal visualization features to achieve effective human communication. Interactive visualization strengthens collaboration because this approach is conducive to incrementally building a mental assessment of the data meaning. The purpose of this paper is to present an overview of the envisioned collaboration architecture and the interactive visualization concepts underlying the Sensemaking Support System prototype developed to support analysts in the context of the Joint Intelligence Collection and Analysis Capability project at DRDC Valcartier. It presents the current version of the architecture, discusses future capabilities to help analysts accomplish their tasks, and finally recommends collaboration and visualization technologies that allow them to go a step further, both as individuals and as a team.

  1. The neurology of visual acuity.

    PubMed

    Frisén, L

    1980-09-01

    A series of patients with well-defined lesions of various parts of the visual pathways was studied in an attempt to illuminate the neuropathophysiology of visual acuity. Acuity was found to remain normal in all cases with unilateral retrochiasmal lesions, including those of the optic tract. Bilateral retrochiasmal lesions involving the foveal nerve fibres on both sides impaired acuity to the same degree in both eyes. Lateral chiasmal lesions regularly produced impaired acuity in the ipsilateral eye. Midchiasmal lesions commonly led to an impairment of visual acuity in both eyes, usually asymmetrically, and roughly proportionate to the severity of the visual field defect. Compression optic neuropathy was found to reduce acuity in rough proportion to the severity of compression. It was concluded that acuity remains normal as long as either the crossing or the non-crossing neural outflow from the retinal fovea remains intact: acuity fails only when both sets of nerve fibres are compromised. A properly executed acuity test seems to be a powerful tool for detecting such conditions. The lower limit of normal acuity should never be set below 1.0 or 20/20: even this level is clearly subnormal in many subjects.

  2. Visual Acuity and the Eye.

    ERIC Educational Resources Information Center

    Beynon, J.

    1985-01-01

    Shows that visual acuity is a function of the structure of the eye and that its limit is set by the structure of the retina, emphasizing the role of lens aberrations and difraction on image quality. Also compares human vision with that of other vertebrates and insects. (JN)

  3. VCAT: Visual Crosswalk Analysis Tool

    SciTech Connect

    Cleland, Timothy J.; Forslund, David W.; Cleland, Catherine A.

    2012-08-31

    VCAT is a knowledge modeling and analysis tool. It was synthesized from ideas in functional analysis, business process modeling, and complex network science. VCAT discovers synergies by analyzing natural language descriptions. Specifically, it creates visual analytic perspectives that capture intended organization structures, then overlays the serendipitous relationships that point to potential synergies within an organization or across multiple organizations.

  4. Visual Performing Arts. Program Review.

    ERIC Educational Resources Information Center

    State Univ. System of Florida, Tallahassee. Board of Regents.

    This is the third review of higher education visual and performing arts programs in the state of Florida. The report is based on descriptive and self-evaluative reports and videotapes provided by each of the nine universities in the state system (the University of Florida, Florida State University, Florida A & M University, University of South…

  5. Visual Modelling of Learning Processes

    ERIC Educational Resources Information Center

    Copperman, Elana; Beeri, Catriel; Ben-Zvi, Nava

    2007-01-01

    This paper introduces various visual models for the analysis and description of learning processes. The models analyse learning on two levels: the dynamic level (as a process over time) and the functional level. Two types of model for dynamic modelling are proposed: the session trace, which documents a specific learner in a particular learning…

  6. Visual Design in the Planetarium.

    ERIC Educational Resources Information Center

    Hunt, Jeffrey L.

    Learning research in visuals and media from education is reviewed, summarized, and extrapolated for the planetarium. Of particular interest is research into the use of cuing, with relevance to the planetarium display. It has been demonstrated that viewers can learn more if they are told about the important concepts and terms before the planetarium…

  7. Project Themis: Water Visualization Study

    DTIC Science & Technology

    2011-09-15

    parameters and design space. Apparatus is discussed, including the water flow loop and test section parts, as well as flow measurements, LDV, PLIF, and… Approved for public release; distribution unlimited. (Author: Allen Bishop, AFRL/RZSE.)

  8. Volume visualization and knowledge integration

    NASA Technical Reports Server (NTRS)

    Cochrane, Thomas

    1991-01-01

    A discussion is presented of how advanced computer graphics visualization techniques and integration of knowledge throughout the concept to production cycle are providing tools to revolutionize the design process. Relevant projects that are used as examples are at USAF, Ford, Digital, EPA, and DOE.

  9. Visualizing and Writing Video Programs.

    ERIC Educational Resources Information Center

    Floyd, Steve

    1979-01-01

    Reviews 10 steps which serve as guidelines to simplify the creative process of producing a video training program: (1) audience analysis, (2) task analysis, (3) definition of objective, (4) conceptualization, (5) visualization, (6) storyboard, (7) video storyboard, (8) evaluation, (9) revision, and (10) production. (LRA)

  10. Diversity, Pedagogy, and Visual Culture

    ERIC Educational Resources Information Center

    Amburgy, Patricia M.

    2011-01-01

    As new approaches have emerged in art education, teacher preparation programs in higher education have revised existing courses or created new ones that reflect those new approaches. At the university where the author teaches, one such course is Diversity, Pedagogy, and Visual Culture (A ED 225). A ED 225 is intended to offer preservice art…

  11. Visualizing Elections Using Saari Triangles

    ERIC Educational Resources Information Center

    Alfaro, Ricardo; Han, Lixing; Schilling, Kenneth; Birgen, Mariah

    2010-01-01

    Students sometimes have difficulty calculating the result of a voting system applied to a particular set of voter preference lists. Saari triangles offer a way to visualize the result of an election and make this calculation easier in the case of several important voting systems.

  12. Visualizing Dynamic Bitcoin Transaction Patterns.

    PubMed

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J

    2016-06-01

    This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network.
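
    A force-directed layout of a transaction graph can be prototyped in a few lines. The sketch below applies networkx's spring layout to a toy set of address-to-address transfers; the system described in the article is a custom large-scale, real-time renderer, so this is only a small, static illustration with made-up data.

        import networkx as nx
        import matplotlib.pyplot as plt

        # Toy transaction graph: edges point from a sending address to a receiving
        # address, weighted by the amount transferred (all values are made up).
        edges = [("addr_A", "addr_B", 1.5), ("addr_B", "addr_C", 0.7),
                 ("addr_C", "addr_A", 0.7), ("addr_B", "addr_D", 2.1),
                 ("addr_D", "addr_E", 2.0), ("addr_E", "addr_B", 1.9)]

        G = nx.DiGraph()
        G.add_weighted_edges_from(edges)

        # Force-directed (spring) layout: connected nodes attract, all nodes repel.
        pos = nx.spring_layout(G, weight="weight", seed=42)
        nx.draw_networkx(G, pos, node_size=300, arrowsize=12, font_size=8)
        plt.axis("off")
        plt.show()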

  13. Visual Skills: Watch the Ball?

    ERIC Educational Resources Information Center

    Moen, Sue

    1989-01-01

    In tennis as well as in other racket/paddle sports, simply watching the ball does not guarantee success in hitting the ball to the desired location. Teachers and coaches should teach players to integrate available visual, spatial, and kinesthetic information. Several drills for good ball contact are outlined. (IAH)

  14. Visual Manipulatives for Proportional Reasoning.

    ERIC Educational Resources Information Center

    Moore, Joyce L.; Schwartz, Daniel L.

    The use of a visual representation in learning about proportional relations was studied, examining students' understandings of the invariance of a multiplicative relation on both sides of a proportion equation and the invariance of the structural relations that exist in different semantic types of proportion problems. Subjects were 49 high-ability…

  15. Visual thinking in organizational analysis

    NASA Astrophysics Data System (ADS)

    Grantham, Charles E.

    1991-06-01

    The ability to visualize the relationship among elements of large complex databases is a trend which is yielding new insights into several fields. The author demonstrates the use of 'visual thinking' as an analytical tool in the analysis of formal, complex organizations. Recent developments in organizational design and office automation are making the visual analysis of workflows possible. An analytical mental model of organizational functioning can be built upon a depiction of information flows among work group members. The dynamics of organizational functioning can be described in terms of six essential processes. Furthermore, each of these sub-systems develops within a staged cycle referred to as an enneagram model. Together these mental models present a visual metaphor of healthy function in large formal organizations, both in static and dynamic terms. These models can be used to depict the 'state' of an organization at points in time by linking each process to quantitative data taken from the monitoring of the flow of information in computer networks.

  16. Visual Detection of Land Mines

    DTIC Science & Technology

    2007-04-01

    (Record excerpt consists largely of table-of-contents residue; recoverable contents cover detection of clutter and ghost targets and simulant detectability by mine.) Some targets were removed from the ground at both locations (later referred to as ghost targets), thus leaving visual evidence of the action.

  17. Algorithm Visualization in Teaching Practice

    ERIC Educational Resources Information Center

    Törley, Gábor

    2014-01-01

    This paper presents the history of algorithm visualization (AV), highlighting teaching-methodology aspects. A combined, two-group pedagogical experiment will be presented as well, which measured the efficiency and the impact on the abstract thinking of AV. According to the results, students, who learned with AV, performed better in the experiment.

  18. Oceanography for the Visually Impaired

    ERIC Educational Resources Information Center

    Fraser, Kate

    2008-01-01

    Amy Bower is a physical oceanographer and senior scientist at the Woods Hole Oceanographic Institution (WHOI) in Woods Hole, Massachusetts--she has also been legally blind for 14 years. Through her partnership with the Perkins School for the Blind in Watertown, Massachusetts, the oldest K-12 school for the visually impaired in the United States,…

  19. Instructional Technology and Molecular Visualization

    ERIC Educational Resources Information Center

    Appling, Jeffrey R.; Peake, Lisa C.

    2004-01-01

    The effect of intervening use of molecular visualization software was tested on 73 first-year general chemistry students. Pretests and posttests included both traditional multiple-choice questions and model-building activities. Overall students improved after working with the software, although students performed less well on the model-building…

  20. Visualizing Chemistry with Infrared Imaging

    ERIC Educational Resources Information Center

    Xie, Charles

    2011-01-01

    Almost all chemical processes release or absorb heat. The heat flow in a chemical system reflects the process it is undergoing. By showing the temperature distribution dynamically, infrared (IR) imaging provides a salient visualization of the process. This paper presents a set of simple experiments based on IR imaging to demonstrate its enormous…

  1. Teaching Rhetoric through Data Visualization

    ERIC Educational Resources Information Center

    Butler, Shannan H.

    2011-01-01

    The ability to understand a speaker's or author's worldview better, whether an openly espoused ideology or one veiled and deeply hidden, should help students hone their critical thinking skills. This article describes an activity which attempts to do just that by applying new data visualization methods to a rhetorical artifact and examining the…

  2. Developing and Assessing Visual Thinking.

    ERIC Educational Resources Information Center

    Martinello, Marian L.; Mammen, Loretta

    A three credit undergraduate course, offered through a University of Texas Gifted and Talented Program to qualified high school students, used material objects in a museum to promote visual thinking. Fourteen students were enrolled in the course, which included four interacting units of study: (1) using the operations and strategies of visual…

  3. Data sonification and sound visualization.

    SciTech Connect

    Kaper, H. G.; Tipei, S.; Wiebel, E.; Mathematics and Computer Science; Univ. of Illinois

    1999-07-01

    Sound can help us explore and analyze complex data sets in scientific computing. The authors describe a digital instrument for additive sound synthesis (Diass) and a program to visualize sounds in a virtual reality environment (M4Cave). Both are part of a comprehensive music composition environment that includes additional software for computer-assisted composition and automatic music notation.
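
    Additive synthesis builds a sound by summing sine partials. As a minimal, generic sketch of data sonification in that spirit (mapping each data value to the amplitude of one harmonic partial), one could write the following; this is an illustration of the general technique only, not Diass, whose design is far richer.

        import numpy as np
        from scipy.io import wavfile

        def sonify(values, base_freq=110.0, duration=2.0, rate=44100):
            """Map a vector of (positive) data values to amplitudes of harmonic partials."""
            values = np.asarray(values, dtype=float)
            amps = values / values.sum()                      # normalize to unit total amplitude
            t = np.linspace(0.0, duration, int(rate * duration), endpoint=False)
            sound = np.zeros_like(t)
            for k, a in enumerate(amps, start=1):             # k-th harmonic of the base frequency
                sound += a * np.sin(2 * np.pi * base_freq * k * t)
            return (sound / np.abs(sound).max() * 32767).astype(np.int16)

        data = [5.0, 1.0, 3.0, 0.5, 2.0]                      # arbitrary sample of data values
        wavfile.write("sonified.wav", 44100, sonify(data))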

  4. Boundaries in Visualizing Mathematical Behaviour

    ERIC Educational Resources Information Center

    Hare, Andrew Francis

    2013-01-01

    It is surprising to students to learn that a natural combination of simple functions, the function sin(1/x), exhibits behaviour that is a great challenge to visualize. When x is large the function is relatively easy to draw; as x gets smaller the function begins to behave in an increasingly wild manner. The sin(1/x) function can serve as one of…

  5. SOME VISUAL ASPECTS OF PRONUNCIATION.

    ERIC Educational Resources Information Center

    REICHMANN, EBERHARD

    Much of the deficiency in pronunciation instruction is due to exclusive reliance on the ear as a guide for the correct recognition and production of foreign sounds. The correct articulation of German depends very much on anticipatory action of the lips and jaws (vowel anticipation), and must be imparted by visual demonstration and imitation of…

  6. Visual Productions and Student Learning.

    ERIC Educational Resources Information Center

    Bazeli, Marilyn

    When students become actively involved in technology productions they develop learning skills, communication skills, and visual analysis skills, all of which are applied to real-life learning within the classroom curriculum. Students participate in all stages of the production projects, which proves to be motivating for the students and allows the…

  7. Component-Based Visualization System

    NASA Technical Reports Server (NTRS)

    Delgado, Francisco

    2005-01-01

    A software system has been developed that gives engineers and operations personnel with no "formal" programming expertise, but who are familiar with the Microsoft Windows operating system, the ability to create visualization displays to monitor the health and performance of aircraft/spacecraft. This software system is currently supporting the X38 V201 spacecraft component/system testing and is intended to give users the ability to create, test, deploy, and certify their subsystem displays in a fraction of the time that it would take to do so using previous software and programming methods. Within the visualization system there are three major components: the developer, the deployer, and the widget set. The developer is a blank canvas with widget menu items that give users the ability to easily create displays. The deployer is an application that allows for the deployment of the displays created using the developer application. The deployer has additional functionality that the developer does not have, such as printing of displays, screen captures to files, windowing of displays, and also serves as the interface into the documentation archive and help system. The third major component is the widget set. The widgets are the visual representation of the items that will make up the display (i.e., meters, dials, buttons, numerical indicators, string indicators, and the like). This software was developed using Visual C++ and uses COTS (commercial off-the-shelf) software where possible.

  8. NASA Dryden flow visualization facility

    NASA Technical Reports Server (NTRS)

    Delfrate, John H.

    1995-01-01

    This report describes the Flow Visualization Facility at NASA Dryden Flight Research Center, Edwards, California. This water tunnel facility is used primarily for visualizing and analyzing vortical flows on aircraft models and other shapes at high-incidence angles. The tunnel is used extensively as a low-cost, diagnostic tool to help engineers understand complex flows over aircraft and other full-scale vehicles. The facility consists primarily of a closed-circuit water tunnel with a 16- x 24-in. vertical test section. Velocity of the flow through the test section can be varied from 0 to 10 in/sec; however, 3 in/sec provides optimum velocity for the majority of flow visualization applications. This velocity corresponds to a unit Reynolds number of 23,000/ft and a turbulence level over the majority of the test section below 0.5 percent. Flow visualization techniques described here include the dye tracer, laser light sheet, and shadowgraph. Limited correlation to full-scale flight data is shown.
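
    The quoted unit Reynolds number is consistent with water at roughly room temperature, assuming a kinematic viscosity of about $1.08\times10^{-5}\ \mathrm{ft^2/s}$:

        \[
          \frac{Re}{L} \;=\; \frac{U}{\nu}
          \;\approx\; \frac{0.25\ \mathrm{ft/s}}{1.08\times10^{-5}\ \mathrm{ft^2/s}}
          \;\approx\; 2.3\times10^{4}\ \mathrm{ft^{-1}},
          \qquad U = 3\ \mathrm{in/s} = 0.25\ \mathrm{ft/s}.
        \]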

  9. Information measures for terrain visualization

    NASA Astrophysics Data System (ADS)

    Bonaventura, Xavier; Sima, Aleksandra A.; Feixas, Miquel; Buckley, Simon J.; Sbert, Mateu; Howell, John A.

    2017-02-01

    Many quantitative and qualitative studies in geoscience research are based on digital elevation models (DEMs) and 3D surfaces to aid understanding of natural and anthropogenically-influenced topography. As well as their quantitative uses, the visual representation of DEMs can add valuable information for identifying and interpreting topographic features. However, choice of viewpoints and rendering styles may not always be intuitive, especially when terrain data are augmented with digital image texture. In this paper, an information-theoretic framework for object understanding is applied to terrain visualization and terrain view selection. From a visibility channel between a set of viewpoints and the component polygons of a 3D terrain model, we obtain three polygonal information measures. These measures are used to visualize the information associated with each polygon of the terrain model. In order to enhance the perception of the terrain's shape, we explore the effect of combining the calculated information measures with the supplementary digital image texture. From polygonal information, we also introduce a method to select a set of representative views of the terrain model. Finally, we evaluate the behaviour of the proposed techniques using example datasets. A publicly available framework for both the visualization and the view selection of a terrain has been created in order to provide the possibility to analyse any terrain model.
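
    The polygonal measures in the paper are built from a visibility channel between a set of viewpoints and the terrain polygons. As a rough Python sketch of that kind of computation (per-viewpoint mutual information derived from a matrix of projected polygon areas), under the assumptions of a uniform viewpoint prior and a toy visibility matrix:

        import numpy as np

        def viewpoint_information(areas):
            """Mutual information I(v; Z) per viewpoint from a visibility channel.

            areas[v, z]: projected area of polygon z as seen from viewpoint v
            (zero if the polygon is not visible from that viewpoint).
            """
            areas = np.asarray(areas, dtype=float)
            p_z_given_v = areas / areas.sum(axis=1, keepdims=True)  # channel p(z|v)
            p_v = np.full(areas.shape[0], 1.0 / areas.shape[0])     # uniform viewpoint prior
            p_z = p_v @ p_z_given_v                                 # marginal over polygons
            with np.errstate(divide="ignore", invalid="ignore"):
                ratio = np.where(p_z_given_v > 0, p_z_given_v / p_z, 1.0)
                info = np.sum(p_z_given_v * np.log2(ratio), axis=1) # bits per viewpoint
            return info

        # Toy visibility channel: 3 viewpoints x 4 terrain polygons.
        areas = np.array([[4.0, 1.0, 0.0, 0.0],
                          [1.0, 2.0, 2.0, 1.0],
                          [0.0, 0.0, 1.0, 3.0]])
        print("per-viewpoint information (bits):", viewpoint_information(areas).round(3))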

  10. Visual Literacy and School Libraries

    ERIC Educational Resources Information Center

    McPherson, Keith

    2004-01-01

    Not convinced that teachers and teacher-librarians were actively suppressing their students' visual literacy, the author decided to conduct informal interviews with four local teacher-librarians (three elementary and one secondary) attending classes at the university where he instructs. All four indicated that though their libraries were rich…

  11. Relating Attention to Visual Mechanisms

    DTIC Science & Technology

    1989-02-28

    context (Johannson, 1975). The point is that the extraction of any perceptual property requires a series of operations that are not separated in the… (Remainder of record is reference-list residue.)

  12. Visual Disability and Horse Riding

    ERIC Educational Resources Information Center

    Brickell, Diana

    2005-01-01

    It is now commonplace for horse riding to be included in the extra-curricular activities of students with physical disabilities. In this article an account is given of how visually impaired people can derive physical, mental, and emotional benefits from this supervised activity. It is argued that the rider, in learning to exercise self-control and…

  13. A Visually Oriented Text Editor

    NASA Technical Reports Server (NTRS)

    Gomez, J. E.

    1985-01-01

    HERMAN employs the Evans & Sutherland Picture System 2 to provide screen-oriented editing capability for the DEC PDP-11 series computer. Text is altered by visual indication of the characters changed. A group of HERMAN commands provides for higher-level operations. HERMAN provides special features for editing FORTRAN source programs.

  14. Six Degrees of "Visual" Separation

    ERIC Educational Resources Information Center

    Wall, Sharon

    2006-01-01

    Whether referring to psychologist Stanley Milgram's intriguing theory, John Guare's successful play and film, or Kevin Bacon's party game, six degrees of separation may also be used as a way to help students make visual connections. The six degrees of separation is the concept that everyone is connected to everyone else in the world by only six…

  15. The Audio-Visual Man.

    ERIC Educational Resources Information Center

    Babin, Pierre, Ed.

    A series of twelve essays discuss the use of audiovisuals in religious education. The essays are divided into three sections: one which draws on the ideas of Marshall McLuhan and other educators to explore the newest ideas about audiovisual language and faith, one that describes how to learn and use the new language of audio and visual images, and…

  16. Visualizing Dynamic Bitcoin Transaction Patterns

    PubMed Central

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J.

    2016-01-01

    Abstract This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network. PMID:27441715
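
    The record above describes a purpose-built real-time force-directed visualization; as a small, hedged stand-in (not the authors' system), the Python sketch below lays out a toy transaction graph with networkx's spring (Fruchterman-Reingold, force-directed) layout and draws it with matplotlib. The addresses and amounts are invented for illustration.

      import networkx as nx
      import matplotlib.pyplot as plt

      # hypothetical transaction edges: (input address, output address, BTC amount)
      edges = [
          ("addr_A", "addr_B", 1.5),
          ("addr_A", "addr_C", 0.2),
          ("addr_B", "addr_D", 1.4),
          ("addr_C", "addr_D", 0.1),
          ("addr_D", "addr_E", 1.3),
      ]

      G = nx.DiGraph()
      G.add_weighted_edges_from(edges)

      # force-directed layout: heavily connected addresses cluster together
      pos = nx.spring_layout(G, seed=42)

      nx.draw_networkx_nodes(G, pos, node_size=300)
      nx.draw_networkx_edges(G, pos,
                             width=[d["weight"] for _, _, d in G.edges(data=True)])
      nx.draw_networkx_labels(G, pos, font_size=8)
      plt.axis("off")
      plt.show()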

  17. Visual Literacy: An Institutional Imperative

    ERIC Educational Resources Information Center

    Metros, Susan E.; Woolsey, Kristina

    2006-01-01

    Academics have a long history of claiming and defending the superiority of verbal over visual for representing knowledge. By dismissing imagery as mere decoration, they have upheld the sanctity of print for academic discourse. However, in the last decade, digital technologies have broken down the barriers between words and pictures, and many of…

  18. Visual Thinking Patterns. [Teaching Guide.

    ERIC Educational Resources Information Center

    Crosbie, Helen

    Theories and techniques for fostering creativity are described because all students, regardless of intelligence or talent, have artistic ability that should be developed. Four basic visual viewpoints have been identified: the expressive colorist, the hands-on formist, the neat observant designer, and the pattern-oriented draftsperson. These visual…

  19. A Visual Information Retrieval Tool.

    ERIC Educational Resources Information Center

    Zhang, Jin

    2000-01-01

    Discussion of visualization for information retrieval, that transforms unseen internal semantic representation of a document collection into visible geometric displays, focuses on DARE (Distance Angle Retrieval Environment). Highlights include expression of information need; interpretation and manipulation of information retrieval models; ranking…

  20. Attentional Episodes in Visual Perception

    ERIC Educational Resources Information Center

    Wyble, Brad; Potter, Mary C.; Bowman, Howard; Nieuwenstein, Mark

    2011-01-01

    Is one's temporal perception of the world truly as seamless as it appears? This article presents a computationally motivated theory suggesting that visual attention samples information from temporal episodes (episodic simultaneous type/serial token model; Wyble, Bowman, & Nieuwenstein, 2009). Breaks between these episodes are punctuated by periods…

  1. Introduction to Vector Field Visualization

    NASA Technical Reports Server (NTRS)

    Kao, David; Shen, Han-Wei

    2010-01-01

    Vector field visualization techniques are essential to help us understand the complex dynamics of flow fields. These can be found in a wide range of applications such as the study of flows around an aircraft, the blood flow in our heart chambers, ocean circulation models, and severe weather predictions. The vector fields from these various applications can be visually depicted using a number of techniques such as particle traces and advecting textures. In this tutorial, we present several fundamental algorithms in flow visualization including particle integration, particle tracking in time-dependent flows, and seeding strategies. For flows near surfaces, a wide variety of synthetic texture-based algorithms have been developed to depict near-body flow features. The most common approach is based on the Line Integral Convolution (LIC) algorithm. There also exist extensions of LIC that support more flexible texture generation for 3D flow data. This tutorial reviews these algorithms. Tensor fields are found in several real-world applications and also require the aid of visualization to help users understand their data sets. Examples where one can find tensor fields include mechanics, to see how materials respond to external forces; civil engineering and geomechanics of roads and bridges; and the study of neural pathways via diffusion tensor imaging. This tutorial will provide an overview of the different tensor field visualization techniques, discuss basic tensor decompositions, and go into detail on glyph-based methods, deformation-based methods, and streamline-based methods. Practical examples are used when presenting the methods, and applications from case studies serve as part of the motivation.
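
    As a concrete example of the particle-integration step listed above (a generic sketch, not code from the tutorial), the Python snippet below traces a particle through a steady analytic 2D vector field using classical fourth-order Runge-Kutta integration; the test field, seed point, and step size are arbitrary choices.

      import numpy as np

      def velocity(p):
          """Analytic 2D test field: a simple rotational (circular) flow."""
          x, y = p
          return np.array([-y, x])

      def trace_particle(seed, h=0.05, n_steps=200, field=velocity):
          """Integrate one particle trace through a steady vector field with
          classical 4th-order Runge-Kutta, a standard building block of
          streamline and particle-trace visualization."""
          p = np.asarray(seed, dtype=float)
          path = [p]
          for _ in range(n_steps):
              k1 = field(p)
              k2 = field(p + 0.5 * h * k1)
              k3 = field(p + 0.5 * h * k2)
              k4 = field(p + h * k3)
              p = p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
              path.append(p)
          return np.array(path)

      streamline = trace_particle(seed=(1.0, 0.0))
      print(streamline[:3])   # first few points of the trace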

  2. Neural pathways for visual speech perception

    PubMed Central

    Bernstein, Lynne E.; Liebenthal, Einat

    2014-01-01

    This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA) has been demonstrated in posterior temporal cortex, ventral and posterior to multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA. PMID:25520611

  3. Comparative Visualization of Climate Simulation Data

    NASA Astrophysics Data System (ADS)

    Röber, Niklas; Meier-Fleischer, Karin; Böttinger, Michael

    2014-05-01

    Visualization is the process of transforming abstract (scientific) data into a graphical representation, to aid in the understanding of the information contained within the data. Climate data sets are typically quite large, time varying, and consist of many different variables that are sampled on an underlying grid. A variety of different climate models - and sub models - are developed to simulate the climate system and its components, such as the physics of the atmosphere and the ocean, marine biogeochemical processes and the land biosphere. Visualization software is used to assist in the process of visualization and data analysis by transforming the abstract numerical information into a graphical illustration. Different approaches exist in the design of visualization software and for the process of visualization itself, depending on the type and nature of the data as well as on the visualization goal. In addition to a large high performance compute cluster that is exclusively used for climate simulations, the German Climate Computing Centre (DKRZ) also hosts a dedicated visualization cluster for post-processing, data analysis and visualization. On this visualization server, a variety of software is installed to assist the user in the data visualization task. Amongst others, the software stack includes Avizo Green, CDO, NCL, Paraview and SimVis. Each tool has its own strengths and weaknesses, and is selected by the user with regard to the visualization goal. While Avizo Green is great for visualizing the data out of the box, SimVis and Paraview are better suited for an interactive and explorative data analysis. This PICO presentation uses several different visualization solutions - among them Avizo Green, NCL, Paraview and SimVis - to analyze and visualize the same climate data set. We will thereby explicitly focus on each software's strengths, and not highlight its weaknesses. This PICO interactively shows that - depending on the visualization tool used - not

  4. Testing Visual Functions in Patients with Visual Prostheses

    NASA Astrophysics Data System (ADS)

    Wilke, Robert; Bach, Michael; Wilhelm, Barbara; Durst, Wilhelm; Trauzettel-Klosinski, Susanne; Zrenner, Eberhart

    A number of different technical devices for restoring vision in blind patients have been proposed to date. They employ different strategies for the acquisition of optical information, image processing, and electrical stimulation. Devices with external cameras or with integrated components for light detection have been developed and are designed to stimulate such different sites as the retina, optic nerve, and cortex. First clinical trials for these devices are being planned or already underway. As vision with these artificial vision devices (AVDs) may differ considerably from natural vision and as it may not be possible to predict visual functions provided by such devices on the basis of technical specifications alone, novel test strategies are needed to comprehensively describe visual performance. We propose a battery of tests for standardized well-controlled investigations in these patients that allow for objective assessment of efficacy of these devices.

  5. Predicting visual acuity from the structure of visual cortex

    PubMed Central

    Srinivasan, Shyam; Carlo, C. Nikoosh; Stevens, Charles F.

    2015-01-01

    Three decades ago, Rockel et al. proposed that neuronal surface densities (number of neurons under a square millimeter of surface) of primary visual cortices (V1s) in primates are 2.5 times higher than the neuronal density of V1s in nonprimates or many other cortical regions in primates and nonprimates. This claim has remained controversial and much debated. We replicated the study of Rockel et al. with attention to modern stereological precepts and show that indeed primate V1 is 2.5 times denser (number of neurons per square millimeter) than many other cortical regions and nonprimate V1s; we also show that V2 is 1.7 times as dense. As primate V1s are denser, they have more neurons and thus more pinwheels than similar-sized nonprimate V1s, which explains why primates have better visual acuity. PMID:26056277

  6. Phototaxis and the origin of visual eyes

    PubMed Central

    Randel, Nadine

    2016-01-01

    Vision allows animals to detect spatial differences in environmental light levels. High-resolution image-forming eyes evolved from low-resolution eyes via increases in photoreceptor cell number, improvements in optics and changes in the neural circuits that process spatially resolved photoreceptor input. However, the evolutionary origins of the first low-resolution visual systems have been unclear. We propose that the lowest resolving (two-pixel) visual systems could initially have functioned in visual phototaxis. During visual phototaxis, such elementary visual systems compare light on either side of the body to regulate phototactic turns. Another, even simpler and non-visual strategy is characteristic of helical phototaxis, mediated by sensory–motor eyespots. The recent mapping of the complete neural circuitry (connectome) of an elementary visual system in the larva of the annelid Platynereis dumerilii sheds new light on the possible paths from non-visual to visual phototaxis and to image-forming vision. We outline an evolutionary scenario focusing on the neuronal circuitry to account for these transitions. We also present a comprehensive review of the structure of phototactic eyes in invertebrate larvae and assign them to the non-visual and visual categories. We propose that non-visual systems may have preceded visual phototactic systems in evolution that in turn may have repeatedly served as intermediates during the evolution of image-forming eyes. PMID:26598725

  7. Phototaxis and the origin of visual eyes.

    PubMed

    Randel, Nadine; Jékely, Gáspár

    2016-01-05

    Vision allows animals to detect spatial differences in environmental light levels. High-resolution image-forming eyes evolved from low-resolution eyes via increases in photoreceptor cell number, improvements in optics and changes in the neural circuits that process spatially resolved photoreceptor input. However, the evolutionary origins of the first low-resolution visual systems have been unclear. We propose that the lowest resolving (two-pixel) visual systems could initially have functioned in visual phototaxis. During visual phototaxis, such elementary visual systems compare light on either side of the body to regulate phototactic turns. Another, even simpler and non-visual strategy is characteristic of helical phototaxis, mediated by sensory-motor eyespots. The recent mapping of the complete neural circuitry (connectome) of an elementary visual system in the larva of the annelid Platynereis dumerilii sheds new light on the possible paths from non-visual to visual phototaxis and to image-forming vision. We outline an evolutionary scenario focusing on the neuronal circuitry to account for these transitions. We also present a comprehensive review of the structure of phototactic eyes in invertebrate larvae and assign them to the non-visual and visual categories. We propose that non-visual systems may have preceded visual phototactic systems in evolution that in turn may have repeatedly served as intermediates during the evolution of image-forming eyes.

  8. Separate visual representations for perception and for visually guided behavior

    NASA Technical Reports Server (NTRS)

    Bridgeman, Bruce

    1989-01-01

    Converging evidence from several sources indicates that two distinct representations of visual space mediate perception and visually guided behavior, respectively. The two maps of visual space follow different rules; spatial values in either one can be biased without affecting the other. Ordinarily the two maps give equivalent responses because both are veridically in register with the world; special techniques are required to pull them apart. One such technique is saccadic suppression: small target displacements during saccadic eye movements are not perceived, though the displacements can change eye movements or pointing to the target. A second way to separate cognitive and motor-oriented maps is with induced motion: a slowly moving frame will make a fixed target appear to drift in the opposite direction, while motor behavior toward the target is unchanged. The same result occurs with stroboscopic induced motion, where the frame jumps abruptly and the target seems to jump in the opposite direction. A third method of separating cognitive and motor maps, requiring no motion of target, background or eye, is the Roelofs effect: a target surrounded by an off-center rectangular frame will appear to be off-center in the direction opposite the frame. Again the effect influences perception, but in half of the subjects it does not influence pointing to the target. This experiment also reveals more characteristics of the maps and their interactions with one another: the motor map apparently has little or no memory, and must be fed from the biased cognitive map if an enforced delay occurs between stimulus presentation and motor response. In designing spatial displays, the results mean that what you see isn't necessarily what you get. Displays must be designed with either perception or visually guided behavior in mind.

  9. Classifying EEG Signals during Stereoscopic Visualization to Estimate Visual Comfort.

    PubMed

    Frey, Jérémy; Appriou, Aurélien; Lotte, Fabien; Hachet, Martin

    2016-01-01

    With stereoscopic displays a sensation of depth that is too strong could impede visual comfort and may result in fatigue or pain. We used Electroencephalography (EEG) to develop a novel brain-computer interface that monitors users' states in order to reduce visual strain. We present the first system that discriminates comfortable conditions from uncomfortable ones during stereoscopic vision using EEG. In particular, we show that either changes in event-related potentials' (ERPs) amplitudes or changes in EEG oscillations power following stereoscopic objects presentation can be used to estimate visual comfort. Our system reacts within 1 s to depth variations, achieving 63% accuracy on average (up to 76%) and 74% on average when 7 consecutive variations are measured (up to 93%). Performances are stable (≈62.5%) when a simplified signal processing is used to simulate online analyses or when the number of EEG channels is lessened. This study could lead to adaptive systems that automatically suit stereoscopic displays to users and viewing conditions. For example, it could be possible to match the stereoscopic effect with users' state by modifying the overlap of left and right images according to the classifier output.

  10. Classifying EEG Signals during Stereoscopic Visualization to Estimate Visual Comfort

    PubMed Central

    Frey, Jérémy; Appriou, Aurélien; Lotte, Fabien; Hachet, Martin

    2016-01-01

    With stereoscopic displays a sensation of depth that is too strong could impede visual comfort and may result in fatigue or pain. We used Electroencephalography (EEG) to develop a novel brain-computer interface that monitors users' states in order to reduce visual strain. We present the first system that discriminates comfortable conditions from uncomfortable ones during stereoscopic vision using EEG. In particular, we show that either changes in event-related potentials' (ERPs) amplitudes or changes in EEG oscillations power following stereoscopic objects presentation can be used to estimate visual comfort. Our system reacts within 1 s to depth variations, achieving 63% accuracy on average (up to 76%) and 74% on average when 7 consecutive variations are measured (up to 93%). Performances are stable (≈62.5%) when a simplified signal processing is used to simulate online analyses or when the number of EEG channels is lessened. This study could lead to adaptive systems that automatically suit stereoscopic displays to users and viewing conditions. For example, it could be possible to match the stereoscopic effect with users' state by modifying the overlap of left and right images according to the classifier output. PMID:26819580
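
    The two records above classify visual comfort from ERP amplitudes or EEG oscillation power; purely as a generic, hedged illustration (not the authors' pipeline), the Python sketch below computes band-power features from simulated one-second EEG epochs with scipy and scores a linear discriminant classifier with scikit-learn. The sampling rate, frequency bands, array shapes, and labels are all assumptions made for the example.

      import numpy as np
      from scipy.signal import welch
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      FS = 250  # assumed sampling rate in Hz

      def band_power(epochs, band, fs=FS):
          """epochs: (n_trials, n_channels, n_samples). Returns mean spectral
          power in the given frequency band per trial and channel."""
          freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
          mask = (freqs >= band[0]) & (freqs <= band[1])
          return psd[..., mask].mean(axis=-1)

      # simulated data standing in for 1 s EEG epochs after stimulus onset
      rng = np.random.default_rng(1)
      X_raw = rng.standard_normal((120, 8, FS))   # 120 trials, 8 channels
      y = rng.integers(0, 2, 120)                 # 0 = comfortable, 1 = uncomfortable

      # features: theta- and alpha-band power per channel, one possible
      # stand-in for the "EEG oscillation power" mentioned in the abstract
      theta = band_power(X_raw, (4, 7))
      alpha = band_power(X_raw, (8, 12))
      X = np.hstack([theta, alpha])

      clf = LinearDiscriminantAnalysis()
      print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())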

  11. Visual Image Sensor Organ Replacement: Implementation

    NASA Technical Reports Server (NTRS)

    Maluf, A. David (Inventor)

    2011-01-01

    Method and system for enhancing or extending visual representation of a selected region of a visual image, where visual representation is interfered with or distorted, by supplementing a visual signal with at least one audio signal having one or more audio signal parameters that represent one or more visual image parameters, such as vertical and/or horizontal location of the region; region brightness; dominant wavelength range of the region; change in a parameter value that characterizes the visual image, with respect to a reference parameter value; and time rate of change in a parameter value that characterizes the visual image. Region dimensions can be changed to emphasize change with time of a visual image parameter.

  12. Data Visualization and Animation Lab (DVAL): Applications

    NASA Technical Reports Server (NTRS)

    Severance, Kurt; Weisenborn, Mike

    1994-01-01

    A wide variety of software tools has been successfully used to visualize, analyze and present computational and experimental data at Langley Research Center. These tools can be categorized according to five primary uses: (1) 2-D image analysis, (2) conventional 3-D visualization, (3) volume visualization, (4) photo-realistic rendering, or (5) special purpose applications. Software is accessible in each of these categories for Langley personnel, and training or consultation can be arranged with the Data Visualization and Animation laboratory staff.

  13. Complex visual hallucinations in the hemianopic field.

    PubMed Central

    Kölmel, H W

    1985-01-01

    Of 120 patients with homonymous hemianopia, 16 experienced complex visual hallucinations in the hemianopic field. The brain lesion was located in the occipital lobe, though damage was not limited to this area. Complex hallucinations appeared after a latent period. They were weak in colour and stereotypical in appearance, which allowed differentiation from visual hallucinations of other causes. Different behaviour after saccadic eye movements differentiated between complex visual hallucinations in the hemianopic field and visual auras of epileptic origin. PMID:3973619

  14. Surface flow visualization using indicators

    NASA Technical Reports Server (NTRS)

    Crowder, J. P.

    1982-01-01

    Surface flow visualization using indicators in the cryogenic wind tunnel, which requires a fresh look at materials and procedures to accommodate the new test conditions, is described. Potential liquid and gaseous indicators are identified. The particular materials illustrate the various requirements an indicator must fulfill: the indicator must respond properly to the flow phenomenon of interest and must be observable. Boundary layer transition is the most important phenomenon for which flow visualization indicators may be employed. The visibility of a particular indicator depends on utilizing various optical or chemical reactions. Gaseous indicators are more difficult to utilize but, because of their diversity, may present unusual and useful opportunities. Factors to be considered in selecting an indicator include handling safety, toxicity, potential for contamination of the tunnel, and cost.

  15. An interpersonal multimedia visualization system

    SciTech Connect

    Phillips, R.L.

    1990-01-01

    MediaView is a computer program that provides a generic infrastructure for authoring and interacting with multimedia documents. Among its many applications is the ability to furnish a user with a comprehensive environment for analysis and visualization. With MediaView the user produces a "document" that contains mathematics, datasets, and associated visualizations. From the dataset or the embedded mathematics, animated sequences can be produced in situ. The mathematical content of the "document" can be explored through manipulation with Mathematica (trademark). Since the "document" is all digital, it can be shared with a co-worker on a local network or mailed electronically to a colleague at a distant site. Animations and any other substructure of the "document" persist through the mailing process and can be awakened at the destination by the recipient. 5 refs., 4 figs.

  16. Visualization drivers for Geant4

    SciTech Connect

    Beretvas, Andy; /Fermilab

    2005-10-01

    This document evaluates Geant4 visualization tools (drivers), weighing the pros and cons of each option and including recommendations on which tools to support at Fermilab for different applications. Four visualization drivers are evaluated: OpenGL, HepRep, DAWN, and VRML. All have good features: OpenGL provides graphical output without an intermediate file; HepRep provides menus to assist the user; DAWN produces high-quality plots and generates output quickly even for large files; VRML uses the smallest disk space for intermediate files. Large experiments at Fermilab will want to write their own display and should make that display graphics-independent. Medium-sized experiments will probably want to use HepRep because of its menu support. Smaller-scale experiments will want to use OpenGL in the spirit of immediate response, good-quality output, and keeping things simple.

  17. Enhancing AFLOW Visualization using Jmol

    NASA Astrophysics Data System (ADS)

    Lanasa, Jacob; New, Elizabeth; Stefek, Patrik; Honaker, Brigette; Hanson, Robert; Aflow Collaboration

    The AFLOW library is a database of theoretical solid-state structures and calculated properties created using high-throughput ab initio calculations. Jmol is a Java-based program capable of visualizing and analyzing complex molecular structures and energy landscapes. In collaboration with the AFLOW consortium, our goal is the enhancement of the AFLOWLIB database through the extension of Jmol's capabilities in the area of materials science. Modifications made to Jmol include the ability to read and visualize AFLOW binary alloy data files, the ability to extract information from these files using Jmol scripting macros for use in creating interactive web-based convex hull graphs, the capability to identify and classify local atomic environments by symmetry, and the ability to search one or more related crystal structures for atomic environments using a novel extension of inorganic polyhedron-based SMILES strings.

  18. Aesthetic valence of visual illusions

    PubMed Central

    Stevanov, Jasmina; Marković, Slobodan; Kitaoka, Akiyoshi

    2012-01-01

    Visual illusions constitute an interesting perceptual phenomenon, but they also have an aesthetic and affective dimension. We hypothesized that the illusive nature itself causes the increased aesthetic and affective valence of illusions compared with their non-illusory counterparts. We created pairs of stimuli. One qualified as a standard visual illusion whereas the other one did not, although they were matched in as many perceptual dimensions as possible. The phenomenal quality of being an illusion had significant effects on “Aesthetic Experience” (fascinating, irresistible, exceptional, etc), “Evaluation” (pleasant, cheerful, clear, bright, etc), “Arousal” (interesting, imaginative, complex, diverse, etc), and “Regularity” (balanced, coherent, clear, realistic, etc). A subsequent multiple regression analysis suggested that Arousal was a better predictor of Aesthetic Experience than Evaluation. The findings of this study demonstrate that illusion is a phenomenal quality of the percept which has measurable aesthetic and affective valence. PMID:23145272

  19. Audio-visual simultaneity judgments.

    PubMed

    Zampini, Massimiliano; Guest, Steve; Shore, David I; Spence, Charles

    2005-04-01

    The relative spatiotemporal correspondence between sensory events affects multisensory integration across a variety of species; integration is maximal when stimuli in different sensory modalities are presented from approximately the same position at about the same time. In the present study, we investigated the influence of spatial and temporal factors on audio-visual simultaneity perception in humans. Participants made unspeeded simultaneous versus successive discrimination responses to pairs of auditory and visual stimuli presented at varying stimulus onset asynchronies from either the same or different spatial positions using either the method of constant stimuli (Experiments 1 and 2) or psychophysical staircases (Experiment 3). The participants in all three experiments were more likely to report the stimuli as being simultaneous when they originated from the same spatial position than when they came from different positions, demonstrating that the apparent perception of multisensory simultaneity is dependent on the relative spatial position from which stimuli are presented.
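
    A common way to summarize method-of-constant-stimuli data like those described above (offered only as a generic example, not the authors' analysis) is to fit a peaked curve to the proportion of "simultaneous" responses across stimulus onset asynchronies (SOAs). The Python sketch below fits a Gaussian-shaped simultaneity function with scipy to invented data, yielding an estimate of the point of subjective simultaneity (PSS) and the width of the simultaneity window.

      import numpy as np
      from scipy.optimize import curve_fit

      # hypothetical data: audio-visual SOAs in ms (negative = sound first)
      # and the proportion of "simultaneous" responses at each SOA
      soa = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)
      p_simult = np.array([0.05, 0.20, 0.60, 0.85, 0.90, 0.80, 0.55, 0.15, 0.05])

      def simultaneity_curve(t, amp, pss, sigma):
          """Gaussian-shaped simultaneity function: the peak sits near the point
          of subjective simultaneity (PSS); sigma indexes temporal tolerance."""
          return amp * np.exp(-0.5 * ((t - pss) / sigma) ** 2)

      params, _ = curve_fit(simultaneity_curve, soa, p_simult, p0=[1.0, 0.0, 100.0])
      amp, pss, sigma = params
      print(f"PSS = {pss:.1f} ms, temporal window (sigma) = {sigma:.1f} ms")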

  20. Early development of visual recognition.

    PubMed

    Plebe, Alessio; Domenella, Rosaria Grazia

    2006-01-01

    The most important ability of human vision is object recognition, yet it is the least understood aspect of the visual system. Computational models have been helpful in progressing towards an explanation of this obscure cognitive ability, and today it is possible to conceive of more refined models thanks to the new availability of neuroscientific data about the human visual cortex. This work proposes a model of the development of the object recognition capability, taking a different perspective from the most common approaches and adopting a precise theoretical epistemology. It is assumed that the main processing functions involved in recognition are not genetically determined and hardwired in the neural circuits, but are the result of interactions between epigenetic influences and the basic neural plasticity mechanisms. The model is organized into modules related to the main biological visual areas and is implemented mainly using the LISSOM architecture, a recent self-organizing algorithm that closely reflects the essential behavior of cortical circuits.