ERIC Educational Resources Information Center
Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Friederici, Angela D.
2016-01-01
Successful communication in everyday life crucially involves the processing of auditory and visual components of speech. Viewing our interlocutor and processing visual components of speech facilitates speech processing by triggering auditory processing. Auditory phoneme processing, analyzed by event-related brain potentials (ERP), has been shown…
Foerster, Rebecca M.; Poth, Christian H.; Behler, Christian; Botsch, Mario; Schneider, Werner X.
2016-01-01
Neuropsychological assessment of human visual processing capabilities strongly depends on visual testing conditions, including room lighting, stimuli, and viewing distance. This limits standardization, threatens reliability, and prevents the assessment of core visual functions such as visual processing speed. Increasingly available virtual reality devices make it possible to address these problems. One such device is the portable, light-weight, and easy-to-use Oculus Rift. It is head-mounted and covers the entire visual field, thereby shielding and standardizing the visual stimulation. A fundamental prerequisite for using the Oculus Rift in neuropsychological assessment is sufficient test-retest reliability. Here, we compare the test-retest reliabilities of Bundesen’s visual processing components (visual processing speed, threshold of conscious perception, capacity of visual working memory) as measured with the Oculus Rift and a standard CRT computer screen. Our results show that the Oculus Rift measures these processing components as reliably as the standard CRT. This means that the Oculus Rift is suitable for standardized and reliable assessment and diagnosis of elementary cognitive functions in laboratory and clinical settings. The Oculus Rift thus provides the opportunity to compare visual processing components between individuals and institutions and to establish statistical norm distributions. PMID:27869220
Visualization of Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Gerald-Yamasaki, Michael; Hultquist, Jeff; Bryson, Steve; Kenwright, David; Lane, David; Walatka, Pamela; Clucas, Jean; Watson, Velvin; Lasinski, T. A. (Technical Monitor)
1995-01-01
Scientific visualization serves the dual purpose of exploration and exposition of the results of numerical simulations of fluid flow. Along with the basic visualization process, which transforms source data into images, there are four additional components to a complete visualization system: Source Data Processing, User Interface and Control, Presentation, and Information Management. The requirements imposed by the desired mode of operation (i.e., real-time, interactive, or batch) and by the source data affect each of these visualization system components. The special requirements imposed by the wide variety and size of the source data provided by the numerical simulation of fluid flow present an enormous challenge to the visualization system designer. We describe the visualization system components, including specific visualization techniques, and how the mode of operation and source data requirements affect the construction of computational fluid dynamics visualization systems.
Neuropsychological Components of Imagery Processing, Final Technical Report.
ERIC Educational Resources Information Center
Kosslyn, Stephen M.
High-level visual processes make use of stored information, and are invoked during object identification, navigation, tracking, and visual mental imagery. The work presented in this document has resulted in a theory of the component "processing subsystems" used in high-level vision. This theory was developed by considering…
Li, W; Lai, T M; Bohon, C; Loo, S K; McCurdy, D; Strober, M; Bookheimer, S; Feusner, J
2015-07-01
Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are characterized by distorted body image and are frequently co-morbid with each other, although their relationship remains little studied. While there is evidence of abnormalities in visual and visuospatial processing in both disorders, no study has directly compared the two. We used two complementary modalities--event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI)--to test for abnormal activity associated with early visual signaling. We acquired fMRI and ERP data in separate sessions from 15 unmedicated individuals in each of three groups (weight-restored AN, BDD, and healthy controls) while they viewed images of faces and houses of different spatial frequencies. We used joint independent component analyses to compare activity in visual systems. AN and BDD groups demonstrated similar hypoactivity in early secondary visual processing regions and the dorsal visual stream when viewing low spatial frequency faces, linked to the N170 component, as well as in early secondary visual processing regions when viewing low spatial frequency houses, linked to the P100 component. Additionally, the BDD group exhibited hyperactivity in fusiform cortex when viewing high spatial frequency houses, linked to the N170 component. Greater activity in this component was associated with lower attractiveness ratings of faces. Results provide preliminary evidence of similar abnormal spatiotemporal activation in AN and BDD for configural/holistic information for appearance- and non-appearance-related stimuli. This suggests a common phenotype of abnormal early visual system functioning, which may contribute to perceptual distortions.
Segalowitz, Sidney J; Sternin, Avital; Lewis, Terri L; Dywan, Jane; Maurer, Daphne
2017-04-01
We examined the role of early visual input in visual system development by testing adults who had been born with dense bilateral cataracts that blocked all patterned visual input during infancy until the cataractous lenses were removed surgically and the eyes fitted with compensatory contact lenses. Patients viewed checkerboards and textures to explore early processing regions (V1, V2), Glass patterns to examine global form processing (V4), and moving stimuli to explore global motion processing (V5). Patients' ERPs differed from those of controls in that (1) the V1 component was much smaller for all but the simplest stimuli and (2) extrastriate components did not differentiate amongst texture stimuli, Glass patterns, or motion stimuli. The results indicate that early visual deprivation contributes to permanent abnormalities at early and mid levels of visual processing, consistent with enduring behavioral deficits in the ability to process complex textures, global form, and global motion. © 2017 Wiley Periodicals, Inc.
Shen, Mowei; Xu, Haokui; Zhang, Haihang; Shui, Rende; Zhang, Meng; Zhou, Jifan
2015-08-01
Visual working memory (VWM) has been traditionally viewed as a mental structure subsequent to visual perception that stores the final output of perceptual processing. However, VWM has recently been emphasized as a critical component of online perception, providing storage for the intermediate perceptual representations produced during visual processing. This interactive view holds the core assumption that VWM is not the terminus of perceptual processing; rather, the stored visual information continues to undergo perceptual processing if necessary. The current study tests this assumption, demonstrating an example of involuntary integration of VWM content by creating the Ponzo illusion in VWM: when the Ponzo illusion figure was divided into its individual components and sequentially encoded into VWM, the temporally separated components were involuntarily integrated, leading to distorted length perception of the two horizontal lines. This VWM Ponzo illusion was replicated when the figure components were presented in different combinations and presentation orders. The magnitude of the illusion was significantly correlated between the VWM and perceptual versions of the Ponzo illusion. These results suggest that the information integration underlying the VWM Ponzo illusion is constrained by the laws of visual perception and similarly affected by the common individual factors that govern its perception. Thus, our findings provide compelling evidence that VWM functions as a buffer serving perceptual processes at early stages. Copyright © 2015 Elsevier B.V. All rights reserved.
NMRPro: an integrated web component for interactive processing and visualization of NMR spectra.
Mohamed, Ahmed; Nguyen, Canh Hao; Mamitsuka, Hiroshi
2016-07-01
The popularity of using NMR spectroscopy in metabolomics and natural products has driven the development of an array of NMR spectral analysis tools and databases. Web applications in particular have become popular because they are platform-independent and easy to extend through reusable web components. Currently available web applications provide analysis of NMR spectra but still lack the necessary processing and interactive visualization functionalities. To overcome these limitations, we present NMRPro, a web component that can be easily incorporated into current web applications, enabling easy-to-use online interactive processing and visualization. NMRPro integrates server-side processing with client-side interactive visualization through three parts: a Python package to efficiently process large NMR datasets on the server side, a Django app managing server-client interaction, and SpecdrawJS for client-side interactive visualization. Demo and installation instructions are available at http://mamitsukalab.org/tools/nmrpro/. Contact: mohamed@kuicr.kyoto-u.ac.jp. Supplementary data are available at Bioinformatics online.
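The server-side processing / client-side visualization split described above can be illustrated with a minimal Django sketch. This is not NMRPro's actual API: process_spectrum() and the JSON field names are hypothetical placeholders; only numpy and Django's JsonResponse/path are real library calls.

```python
# Minimal sketch of a server-side processing step handing data to a client-side
# visualizer as JSON. Hypothetical placeholder code, NOT NMRPro's actual API.
import numpy as np
from django.http import JsonResponse
from django.urls import path


def process_spectrum(spectrum_id: int) -> dict:
    """Placeholder for the server-side Python package: synthesize a 1D spectrum
    (chemical-shift axis plus intensities) instead of reading real data."""
    ppm = np.linspace(10.0, 0.0, 1024)
    intensity = np.exp(-((ppm - 3.5) ** 2) / 0.01) + 0.5 * np.exp(-((ppm - 7.2) ** 2) / 0.02)
    return {"ppm": ppm.tolist(), "intensity": intensity.tolist()}


def spectrum_view(request, spectrum_id: int):
    """Django view: run server-side processing and return the result to a
    client-side drawing library (e.g., SpecdrawJS) as JSON."""
    return JsonResponse(process_spectrum(spectrum_id))


urlpatterns = [path("spectra/<int:spectrum_id>/", spectrum_view)]
```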
Clark, Kait; Appelbaum, L Gregory; van den Berg, Berry; Mitroff, Stephen R; Woldorff, Marty G
2015-04-01
Practice can improve performance on visual search tasks; the neural mechanisms underlying such improvements, however, are not clear. Response time typically shortens with practice, but which components of the stimulus-response processing chain facilitate this behavioral change? Improved search performance could result from enhancements in various cognitive processing stages, including (1) sensory processing, (2) attentional allocation, (3) target discrimination, (4) motor-response preparation, and/or (5) response execution. We measured event-related potentials (ERPs) as human participants completed a five-day visual-search protocol in which they reported the orientation of a color popout target within an array of ellipses. We assessed changes in behavioral performance and in ERP components associated with various stages of processing. After practice, response time decreased in all participants (while accuracy remained consistent), and electrophysiological measures revealed modulation of several ERP components. First, amplitudes of the early sensory-evoked N1 component at 150 ms increased bilaterally, indicating enhanced visual sensory processing of the array. Second, the negative-polarity posterior-contralateral component (N2pc, 170-250 ms) was earlier and larger, demonstrating enhanced attentional orienting. Third, the amplitude of the sustained posterior contralateral negativity component (SPCN, 300-400 ms) decreased, indicating facilitated target discrimination. Finally, faster motor-response preparation and execution were observed after practice, as indicated by latency changes in both the stimulus-locked and response-locked lateralized readiness potentials (LRPs). These electrophysiological results delineate the functional plasticity in key mechanisms underlying visual search with high temporal resolution and illustrate how practice influences various cognitive and neural processing stages leading to enhanced behavioral performance. Copyright © 2015 the authors 0270-6474/15/355351-09$15.00/0.
Object-Based Visual Attention in 8-Month-Old Infants: Evidence from an Eye-Tracking Study
ERIC Educational Resources Information Center
Bulf, Hermann; Valenza, Eloisa
2013-01-01
Visual attention is one of the infant's primary tools for gathering relevant information from the environment for further processing and learning. The space-based component of visual attention in infants has been widely investigated; however, the object-based component of visual attention has received scarce interest. This scarcity is…
Kamitani, Toshiaki; Kuroiwa, Yoshiyuki
2009-01-01
Recent studies demonstrated an altered P3 component and prolonged reaction time during the visual discrimination tasks in multiple system atrophy (MSA). In MSA, however, little is known about the N2 component which is known to be closely related to the visual discrimination process. We therefore compared the N2 component as well as the N1 and P3 components in 17 MSA patients with these components in 10 normal controls, by using a visual selective attention task to color or to shape. While the P3 in MSA was significantly delayed in selective attention to shape, the N2 in MSA was significantly delayed in selective attention to color. N1 was normally preserved both in attention to color and in attention to shape. Our electrophysiological results indicate that the color discrimination process during selective attention is impaired in MSA.
Fuggetta, Giorgio; Duke, Philip A
2017-05-01
The operation of attention on visible objects involves a sequence of cognitive processes. The current study firstly aimed to elucidate the effects of practice on neural mechanisms underlying attentional processes as measured with both behavioural and electrophysiological measures. Secondly, it aimed to identify any pattern in the relationship between Event-Related Potential (ERP) components which play a role in the operation of attention in vision. Twenty-seven participants took part in two recording sessions one week apart, performing an experimental paradigm which combined a match-to-sample task with a memory-guided efficient visual-search task within one trial sequence. Overall, practice decreased behavioural response times, increased accuracy, and modulated several ERP components that represent cognitive and neural processing stages. This neuromodulation through practice was also associated with an enhanced link between behavioural measures and ERP components and with an enhanced cortico-cortical interaction of functionally interconnected ERP components. Principal component analysis (PCA) of the ERP amplitude data revealed three components, having different rostro-caudal topographic representations. The first component included both the centro-parietal and parieto-occipital mismatch triggered negativity - involved in integration of visual representations of the target with current task-relevant representations stored in visual working memory - loaded with second negative posterior-bilateral (N2pb) component, involved in categorising specific pop-out target features. The second component comprised the amplitude of bilateral anterior P2 - related to detection of a specific pop-out feature - loaded with bilateral anterior N2, related to detection of conflicting features, and fronto-central mismatch triggered negativity. The third component included the parieto-occipital N1 - related to early neural responses to the stimulus array - which loaded with the second negative posterior-contralateral (N2pc) component, mediating the process of orienting and focusing covert attention on peripheral target features. We discussed these three components as representing different neurocognitive systems modulated with practice within which the input selection process operates. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
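As a rough illustration of the kind of analysis reported above (not the authors' pipeline or data), a principal component analysis over standardized per-participant ERP amplitude measures can be run as in the sketch below; the measure labels and data are fabricated placeholders.

```python
# Illustrative sketch only: PCA over per-participant ERP amplitude measures,
# loosely mirroring the analysis described above. Data and labels are fabricated.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
measures = ["MTN_centroparietal", "MTN_parietooccipital", "N2pb",
            "P2_anterior", "N2_anterior", "MTN_frontocentral",
            "N1_parietooccipital", "N2pc"]           # hypothetical column labels
amplitudes = rng.normal(size=(27, len(measures)))     # 27 participants x 8 ERP measures

z = StandardScaler().fit_transform(amplitudes)         # standardize each measure
pca = PCA(n_components=3).fit(z)                       # extract three components

for i, loadings in enumerate(pca.components_, start=1):
    top = [m for m, w in zip(measures, loadings) if abs(w) > 0.3]
    print(f"component {i}: variance explained = {pca.explained_variance_ratio_[i - 1]:.2f}, "
          f"high-loading measures = {top}")
```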
Determining the Requisite Components of Visual Threat Detection to Improve Operational Performance
2014-04-01
cognitive processes, and may be enhanced by focusing training development on the principal components such as causal reasoning. The second report will...discuss the development and evaluation of a research-based training exemplar. Visual threat detection pervades many military contexts, but is also... developing computer-controlled exercises to study the primary components of visual threat detection. Similarly, civilian law enforcement officers were
Vergauwe, Evie; Barrouillet, Pierre; Camos, Valérie
2009-07-01
Examinations of interference between visual and spatial materials in working memory have suggested domain- and process-based fractionations of visuo-spatial working memory. The present study examined the role of central time-based resource sharing in visuo-spatial working memory and assessed its role in obtained interference patterns. Visual and spatial storage were combined with both visual and spatial on-line processing components in computer-paced working memory span tasks (Experiment 1) and in a selective interference paradigm (Experiment 2). The cognitive load of the processing components was manipulated to investigate its impact on concurrent maintenance for both within-domain and between-domain combinations of processing and storage components. In contrast to both domain- and process-based fractionations of visuo-spatial working memory, the results revealed that recall performance was determined by the cognitive load induced by the processing of items, rather than by the domain to which those items pertained. These findings are interpreted as evidence for a time-based resource-sharing mechanism in visuo-spatial working memory.
Horowitz-Kraus, Tzipi; DiFrancesco, Mark; Kay, Benjamin; Wang, Yingying; Holland, Scott K.
2015-01-01
The Reading Acceleration Program, a computerized reading-training program, increases activation in neural circuits related to reading. We examined the effect of the training on the functional connectivity between independent components related to visual processing, executive functions, attention, memory, and language during rest after the training. Children 8–12 years old with reading difficulties and typical readers participated in the study. Behavioral testing and functional magnetic resonance imaging were performed before and after the training. Imaging data were analyzed using an independent component analysis approach. After training, both reading groups showed increased single-word contextual reading and reading comprehension scores. Greater positive correlations between the visual-processing component and the executive functions, attention, memory, or language components were found after training in children with reading difficulties. Training-related increases in connectivity between the visual and attention components and between the visual and executive function components were positively correlated with increased word reading and reading comprehension, respectively. Our findings suggest that the effect of the Reading Acceleration Program on basic cognitive domains can be detected even in the absence of an ongoing reading task. PMID:26199874
Alerts Analysis and Visualization in Network-based Intrusion Detection Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Dr. Li
2010-08-01
The alerts produced by network-based intrusion detection systems, e.g. Snort, can be difficult for network administrators to efficiently review and respond to due to the enormous number of alerts generated in a short time frame. This work describes how the visualization of raw IDS alert data assists network administrators in understanding the current state of a network and quickens the process of reviewing and responding to intrusion attempts. The project presented in this work consists of three primary components. The first component provides a visual mapping of the network topology that allows the end-user to easily browse clustered alerts. The second component is based on the flocking behavior of birds, such that birds tend to follow other birds with similar behaviors. This component allows the end-user to see the clustering process and provides an efficient means for reviewing alert data. The third component discovers and visualizes patterns of multistage attacks by profiling the attacker's behaviors.
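The flocking idea in the second component — alerts drifting toward other alerts with similar behavior so that clusters form visibly over time — can be sketched roughly as below. This is an illustrative toy, not the project's implementation; the alert features and update rule are assumptions.

```python
# Toy sketch of flocking-style clustering of IDS alerts: each alert is a point whose
# position is nudged toward nearby alerts with the same signature, so similar alerts
# gradually form visible clusters. Illustrative only; not the system described above.
import numpy as np

rng = np.random.default_rng(1)
n_alerts = 200
features = rng.integers(0, 4, size=n_alerts)          # 4 synthetic alert signatures
positions = rng.uniform(0, 1, size=(n_alerts, 2))     # initial 2D layout for display

for _ in range(100):                                   # iterate the flocking update
    for i in range(n_alerts):
        # "birds follow birds with similar behaviors": same-signature alerts attract
        same = (features == features[i]) & (np.arange(n_alerts) != i)
        if same.any():
            centroid = positions[same].mean(axis=0)
            positions[i] += 0.05 * (centroid - positions[i])    # drift toward the flock
        positions[i] += rng.normal(scale=0.002, size=2)         # jitter keeps motion visible

# Plotting `positions` colored by `features` after the loop would show one cluster per signature.
```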
Unsupervised Neural Network Quantifies the Cost of Visual Information Processing.
Orbán, Levente L; Chartier, Sylvain
2015-01-01
Untrained, "flower-naïve" bumblebees display behavioural preferences when presented with visual properties such as colour, symmetry, spatial frequency and others. Two unsupervised neural networks were implemented to understand the extent to which these models capture elements of bumblebees' unlearned visual preferences towards flower-like visual properties. The computational models, which are variants of Independent Component Analysis and Feature-Extracting Bidirectional Associative Memory, use images of test-patterns that are identical to ones used in behavioural studies. Each model works by decomposing images of floral patterns into meaningful underlying factors. We reconstruct the original floral image using the components and compare the quality of the reconstructed image to the original image. Independent Component Analysis matches behavioural results substantially better across several visual properties. These results are interpreted to support a hypothesis that the temporal and energetic costs of information processing by pollinators served as a selective pressure on floral displays: flowers adapted to pollinators' cognitive constraints.
Visual Aspects of Written Composition.
ERIC Educational Resources Information Center
Autrey, Ken
While attempting to refine and redefine the composing process, rhetoric teachers have overlooked research showing how the brain's visual and verbal components interrelate. Recognition of the brain's visual potential can mean more than the use of media with the written word--it also has implications for the writing process itself. For example,…
Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2008-09-16
Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.
Künstler, E C S; Finke, K; Günther, A; Klingner, C; Witte, O; Bublak, P
2018-01-01
Dual tasking, or the simultaneous execution of two continuous tasks, is frequently associated with a performance decline that can be explained within a capacity sharing framework. In this study, we assessed the effects of a concurrent motor task on the efficiency of visual information uptake based on the 'theory of visual attention' (TVA). TVA provides parameter estimates reflecting distinct components of visual processing capacity: perceptual threshold, visual processing speed, and visual short-term memory (VSTM) storage capacity. Moreover, goodness-of-fit values and bootstrapping estimates were derived to test whether the TVA-model is validly applicable also under dual task conditions, and whether the robustness of parameter estimates is comparable in single- and dual-task conditions. 24 subjects of middle to higher age performed a continuous tapping task, and a visual processing task (whole report of briefly presented letter arrays) under both single- and dual-task conditions. Results suggest a decline of both visual processing capacity and VSTM storage capacity under dual-task conditions, while the perceptual threshold remained unaffected by a concurrent motor task. In addition, goodness-of-fit values and bootstrapping estimates support the notion that participants processed the visual task in a qualitatively comparable, although quantitatively less efficient way under dual-task conditions. The results support a capacity sharing account of motor-cognitive dual tasking and suggest that even performing a relatively simple motor task relies on central attentional capacity that is necessary for efficient visual information uptake.
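For readers unfamiliar with the TVA parameters mentioned above, the sketch below simulates the standard whole-report model: items race exponentially once exposure exceeds the perceptual threshold t0, share a total processing rate C, and at most K items fit into VSTM. The parameter values are arbitrary illustrations, and this is not the fitting procedure used in the study.

```python
# Simulation sketch of Bundesen's TVA whole-report model underlying the parameters above.
# Arbitrary parameter values for illustration; not the study's fitting code.
import numpy as np

rng = np.random.default_rng(3)

def simulate_whole_report(exposure_ms, n_items=6, C=40.0, t0=20.0, K=3.5, n_trials=5000):
    """Return the mean number of letters reported at a given exposure duration."""
    rate_per_item = (C / n_items) / 1000.0          # items share the total rate C (per ms)
    effective = max(exposure_ms - t0, 0.0)          # time available above the threshold
    reported = []
    for _ in range(n_trials):
        finish = rng.exponential(1.0 / rate_per_item, size=n_items)
        encoded = np.sum(finish <= effective)               # items processed in time
        k_trial = int(K) + (rng.random() < (K - int(K)))    # probabilistic non-integer capacity
        reported.append(min(encoded, k_trial))              # VSTM capacity caps the report
    return float(np.mean(reported))

for exposure in [20, 50, 100, 200, 400]:
    print(f"{exposure:4d} ms -> mean report {simulate_whole_report(exposure):.2f}")
```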
Schmetz, Emilie; Magis, David; Detraux, Jean-Jacques; Barisnikov, Koviljka; Rousselle, Laurence
2018-03-02
The present study aims to assess how the processing of basic visual perceptual (VP) components (length, surface, orientation, and position) develops in typically developing (TD) children (n = 215, 4-14 years old) and adults (n = 20, 20-25 years old), and in children with cerebral palsy (CP) (n = 86, 5-14 years old) using the first four subtests of the Battery for the Evaluation of Visual Perceptual and Spatial processing in children. Experiment 1 showed that these four basic VP processes follow distinct developmental trajectories in typical development. Experiment 2 revealed that children with CP present global and persistent deficits for the processing of basic VP components when compared with TD children matched on chronological age and nonverbal reasoning abilities.
Graphic Design in Libraries: A Conceptual Process
ERIC Educational Resources Information Center
Ruiz, Miguel
2014-01-01
Providing successful library services requires efficient and effective communication with users; therefore, it is important that content creators who develop visual materials understand key components of design and, specifically, develop a holistic graphic design process. Graphic design, as a form of visual communication, is the process of…
Bender, Stephan; Rellum, Thomas; Freitag, Christine; Resch, Franz; Rietschel, Marcella; Treutlein, Jens; Jennen-Steinmetz, Christine; Brandeis, Daniel; Banaschewski, Tobias; Laucht, Manfred
2012-01-01
Background: Dopamine plays an important role in orienting and the regulation of selective attention to relevant stimulus characteristics. Thus, we examined the influences of functional variants related to dopamine inactivation in the dopamine transporter (DAT1) and catechol-O-methyltransferase (COMT) genes on the time-course of visual processing in a contingent negative variation (CNV) task. Methods: 64-channel EEG recordings were obtained from 195 healthy adolescents of a community-based sample during a continuous performance task (A-X version). Early and late CNV as well as preceding visual evoked potential components were assessed. Results: Significant additive main effects of DAT1 and COMT on the occipito-temporal early CNV were observed. In addition, there was a trend towards an interaction between the two polymorphisms. Source analysis showed early CNV generators in the ventral visual stream and in frontal regions. There was a strong negative correlation between occipito-temporal visual post-processing and the frontal early CNV component. The early CNV time interval 500–1000 ms after the visual cue was specifically affected, while the preceding visual perception stages were not influenced. Conclusions: Late visual potentials allow the genomic imaging of dopamine inactivation effects on visual post-processing. The same specific time interval has been found to be affected by DAT1 and COMT during motor post-processing but not motor preparation. We propose the hypothesis that similar dopaminergic mechanisms modulate working memory encoding in the visual, motor, and perhaps other systems. PMID:22844499
Component processes underlying future thinking.
D'Argembeau, Arnaud; Ortoleva, Claudia; Jumentier, Sabrina; Van der Linden, Martial
2010-09-01
This study sought to investigate the component processes underlying the ability to imagine future events, using an individual-differences approach. Participants completed several tasks assessing different aspects of future thinking (i.e., fluency, specificity, amount of episodic details, phenomenology) and were also assessed with tasks and questionnaires measuring various component processes that have been hypothesized to support future thinking (i.e., executive processes, visual-spatial processing, relational memory processing, self-consciousness, and time perspective). The main results showed that executive processes were correlated with various measures of future thinking, whereas visual-spatial processing abilities and time perspective were specifically related to the number of sensory descriptions reported when specific future events were imagined. Furthermore, individual differences in self-consciousness predicted the subjective feeling of experiencing the imagined future events. These results suggest that future thinking involves a collection of processes that are related to different facets of future-event representation.
Jolij, Jacob; Scholte, H Steven; van Gaal, Simon; Hodgson, Timothy L; Lamme, Victor A F
2011-12-01
Humans largely guide their behavior by their visual representation of the world. Recent studies have shown that visual information can trigger behavior within 150 msec, suggesting that visually guided responses to external events, in fact, precede conscious awareness of those events. However, is such a view correct? By using a texture discrimination task, we show that the brain relies on long-latency visual processing in order to guide perceptual decisions. Decreasing stimulus saliency leads to selective changes in long-latency visually evoked potential components reflecting scene segmentation. These latency changes are accompanied by almost equal changes in simple RTs and points of subjective simultaneity. Furthermore, we find a strong correlation between individual RTs and the latencies of scene segmentation related components in the visually evoked potentials, showing that the processes underlying these late brain potentials are critical in triggering a response. However, using the same texture stimuli in an antisaccade task, we found that reflexive, but erroneous, prosaccades, but not antisaccades, can be triggered by earlier visual processes. In other words: The brain can act quickly, but decides late. Differences between our study and earlier findings suggesting that action precedes conscious awareness can be explained by assuming that task demands determine whether a fast and unconscious, or a slower and conscious, representation is used to initiate a visually guided response.
Xiao, Jianbo
2015-01-01
Segmenting visual scenes into distinct objects and surfaces is a fundamental visual function. To better understand the underlying neural mechanism, we investigated how neurons in the middle temporal cortex (MT) of macaque monkeys represent overlapping random-dot stimuli moving transparently in slightly different directions. It has been shown that the neuronal response elicited by two stimuli approximately follows the average of the responses elicited by the constituent stimulus components presented alone. In this scheme of response pooling, the ability to segment two simultaneously presented motion directions is limited by the width of the tuning curve to motion in a single direction. We found that, although the population-averaged neuronal tuning showed response averaging, subgroups of neurons showed distinct patterns of response tuning and were capable of representing component directions that were separated by a small angle—less than the tuning width to unidirectional stimuli. One group of neurons preferentially represented the component direction at a specific side of the bidirectional stimuli, weighting one stimulus component more strongly than the other. Another group of neurons pooled the component responses nonlinearly and showed two separate peaks in their tuning curves even when the average of the component responses was unimodal. We also show for the first time that the direction tuning of MT neurons evolved from initially representing the vector-averaged direction of slightly different stimuli to gradually representing the component directions. Our results reveal important neural processes underlying image segmentation and suggest that information about slightly different stimulus components is computed dynamically and distributed across neurons. SIGNIFICANCE STATEMENT Natural scenes often contain multiple entities. The ability to segment visual scenes into distinct objects and surfaces is fundamental to sensory processing and is crucial for generating the perception of our environment. Because cortical neurons are broadly tuned to a given visual feature, segmenting two stimuli that differ only slightly is a challenge for the visual system. In this study, we discovered that many neurons in the visual cortex are capable of representing individual components of slightly different stimuli by selectively and nonlinearly pooling the responses elicited by the stimulus components. We also show for the first time that the neural representation of individual stimulus components developed over a period of ∼70–100 ms, revealing a dynamic process of image segmentation. PMID:26658869
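The point that response averaging limits segmentation can be illustrated with a toy computation (not the recorded MT data or the authors' model): averaging two Gaussian tuning curves whose preferred directions differ by less than the tuning width yields a single peak, whereas a nonlinear pooling rule (here, simply taking the stronger component response) keeps two separate peaks.

```python
# Illustrative sketch: linear averaging of two component responses separated by less
# than the tuning width is unimodal, while a simple nonlinear pooling rule is bimodal.
import numpy as np

directions = np.arange(-90, 91)                  # probe directions (deg)
sigma = 30.0                                     # assumed tuning width (deg)
separation = 40.0                                # two components, < 2*sigma apart

def tuning(center):
    return np.exp(-0.5 * ((directions - center) / sigma) ** 2)

r1, r2 = tuning(-separation / 2), tuning(+separation / 2)

average_pool = 0.5 * (r1 + r2)                   # linear averaging of component responses
max_pool = np.maximum(r1, r2)                    # a simple nonlinear pooling rule

def n_peaks(curve):
    return int(np.sum((curve[1:-1] > curve[:-2]) & (curve[1:-1] > curve[2:])))

print("peaks, averaging pooling:", n_peaks(average_pool))   # 1 -> components not resolved
print("peaks, nonlinear pooling:", n_peaks(max_pool))        # 2 -> components resolved
```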
Pietrzak, Robert H; Scott, James Cobb; Harel, Brian T; Lim, Yen Ying; Snyder, Peter J; Maruff, Paul
2012-11-01
Alprazolam is a benzodiazepine that, when administered acutely, results in impairments in several aspects of cognition, including attention, learning, and memory. However, the profile (i.e., component processes) that underlie alprazolam-related decrements in visual paired associate learning has not been fully explored. In this double-blind, placebo-controlled, randomized cross-over study of healthy older adults, we used a novel, "process-based" computerized measure of visual paired associate learning to examine the effect of a single, acute 1-mg dose of alprazolam on component processes of visual paired associate learning and memory. Acute alprazolam challenge was associated with a large magnitude reduction in visual paired associate learning and memory performance (d = 1.05). Process-based analyses revealed significant increases in distractor, exploratory, between-search, and within-search error types. Analyses of percentages of each error type suggested that, relative to placebo, alprazolam challenge resulted in a decrease in the percentage of exploratory errors and an increase in the percentage of distractor errors, both of which reflect memory processes. Results of this study suggest that acute alprazolam challenge decreases visual paired associate learning and memory performance by reducing the strength of the association between pattern and location, which may reflect a general breakdown in memory consolidation, with less evidence of reductions in executive processes (e.g., working memory) that facilitate visual paired associate learning and memory. Copyright © 2012 John Wiley & Sons, Ltd.
Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion
Fajen, Brett R.; Matthis, Jonathan S.
2013-01-01
Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983
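The "factoring out" idea can be made concrete with a toy optic-flow computation: under a standard pinhole model with pure observer translation, the flow that a static point would produce is subtracted from the observed image motion to recover the object-motion component. This is a textbook illustration, not the authors' model; all numbers are made up.

```python
# Toy illustration of recovering the object-motion component by removing the
# self-motion flow, using standard pinhole optic-flow equations for pure translation.
# All quantities (focal length, depth, velocities) are made-up values.
import numpy as np

f = 1.0                                   # focal length (image units)
x, y = 0.2, -0.1                          # image position of the object
Z = 5.0                                   # object depth (m)
T = np.array([0.0, 0.0, 1.5])             # observer translation velocity (m/s), forward

# Optic flow that a STATIC point at (x, y, Z) would produce under this self-motion.
self_flow = np.array([(-f * T[0] + x * T[2]) / Z,
                      (-f * T[1] + y * T[2]) / Z])

observed_flow = np.array([0.10, -0.02])   # measured image motion of the object (made up)

# Object-motion component: what remains after the self-motion contribution is removed.
object_flow = observed_flow - self_flow
print("self-motion component  :", self_flow)     # [0.06, -0.03]
print("object-motion component:", object_flow)   # [0.04,  0.01]
```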
NCWin — A Component Object Model (COM) for processing and visualizing NetCDF data
Liu, Jinxun; Chen, J.M.; Price, D.T.; Liu, S.
2005-01-01
NetCDF (Network Common Data Form) is a data sharing protocol and library that is commonly used in large-scale atmospheric and environmental data archiving and modeling. The NetCDF tool described here, named NCWin and coded with Borland C++ Builder, was built as a standard executable as well as a COM (component object model) for the Microsoft Windows environment. COM is a powerful technology that enhances the reuse of applications (as components). Environmental model developers from different modeling environments, such as Python, JAVA, VISUAL FORTRAN, VISUAL BASIC, VISUAL C++, and DELPHI, can reuse NCWin in their models to read, write and visualize NetCDF data. Some Windows applications, such as ArcGIS and Microsoft PowerPoint, can also call NCWin within the application. NCWin has three major components: 1) The data conversion part is designed to convert binary raw data to and from NetCDF data. It can process six data types (unsigned char, signed char, short, int, float, double) and three spatial data formats (BIP, BIL, BSQ); 2) The visualization part is designed for displaying grid map series (playing forward or backward) with a simple map legend, and displaying temporal trend curves for data on individual map pixels; and 3) The modeling interface is designed for environmental model development, by which a set of integrated NetCDF functions is provided for processing NetCDF data. To demonstrate that NCWin can easily extend the functions of some current GIS software and Office applications, examples of calling NCWin within ArcGIS and MS PowerPoint for showing NetCDF map animations are given.
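To make the kind of operations listed above concrete, here is a small Python sketch that writes a toy gridded time series to NetCDF, reads it back, and displays one time step. It uses the netCDF4 and matplotlib packages as stand-ins; it is not NCWin, which is a Windows COM component, and the variable name and grid sizes are arbitrary.

```python
# Small sketch of typical NetCDF operations (write a gridded time series, read it back,
# display one map of the series). Uses netCDF4 + matplotlib; not NCWin itself.
import numpy as np
from netCDF4 import Dataset
import matplotlib.pyplot as plt

# --- write a toy NetCDF file: 12 monthly 10x10 grids --------------------------------
with Dataset("toy_grids.nc", "w") as nc:
    nc.createDimension("time", 12)
    nc.createDimension("y", 10)
    nc.createDimension("x", 10)
    var = nc.createVariable("ndvi", "f4", ("time", "y", "x"))
    var[:] = np.random.default_rng(4).random((12, 10, 10)).astype("f4")

# --- read it back and visualize one grid of the series ------------------------------
with Dataset("toy_grids.nc") as nc:
    grids = nc.variables["ndvi"][:]            # shape (12, 10, 10)

plt.imshow(grids[0], origin="lower")            # first "map" of the series
plt.colorbar(label="ndvi (toy data)")
plt.title("time step 0")
plt.show()
```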
Differential Gaze Patterns on Eyes and Mouth During Audiovisual Speech Segmentation
Lusk, Laina G.; Mitchel, Aaron D.
2016-01-01
Speech is inextricably multisensory: both auditory and visual components provide critical information for all aspects of speech processing, including speech segmentation, the visual components of which have been the target of a growing number of studies. In particular, a recent study (Mitchel and Weiss, 2014) established that adults can utilize facial cues (i.e., visual prosody) to identify word boundaries in fluent speech. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2014). Subjects spent the most time watching the eyes and mouth. A significant trend in gaze durations was found with the longest gaze duration on the mouth, followed by the eyes and then the nose. In addition, eye-gaze patterns changed across familiarization as subjects learned the word boundaries, showing decreased attention to the mouth in later blocks while attention on other facial features remained consistent. These findings highlight the importance of the visual component of speech processing and suggest that the mouth may play a critical role in visual speech segmentation. PMID:26869959
Visualization techniques to aid in the analysis of multi-spectral astrophysical data sets
NASA Technical Reports Server (NTRS)
Brugel, Edward W.; Domik, Gitta O.; Ayres, Thomas R.
1993-01-01
The goal of this project was to support the scientific analysis of multi-spectral astrophysical data by means of scientific visualization. Scientific visualization offers its greatest value if it is not used as a method separate or alternative to other data analysis methods but rather in addition to these methods. Together with quantitative analysis of data, such as offered by statistical analysis, image or signal processing, visualization attempts to explore all information inherent in astrophysical data in the most effective way. Data visualization is one aspect of data analysis. Our taxonomy as developed in Section 2 includes identification and access to existing information, preprocessing and quantitative analysis of data, visual representation and the user interface as major components to the software environment of astrophysical data analysis. In pursuing our goal to provide methods and tools for scientific visualization of multi-spectral astrophysical data, we therefore looked at scientific data analysis as one whole process, adding visualization tools to an already existing environment and integrating the various components that define a scientific data analysis environment. As long as the software development process of each component is separate from all other components, users of data analysis software are constantly interrupted in their scientific work in order to convert from one data format to another, or to move from one storage medium to another, or to switch from one user interface to another. We also took an in-depth look at scientific visualization and its underlying concepts, current visualization systems, their contributions, and their shortcomings. The role of data visualization is to stimulate mental processes different from quantitative data analysis, such as the perception of spatial relationships or the discovery of patterns or anomalies while browsing through large data sets. Visualization often leads to an intuitive understanding of the meaning of data values and their relationships by sacrificing accuracy in interpreting the data values. In order to be accurate in the interpretation, data values need to be measured, computed on, and compared to theoretical or empirical models (quantitative analysis). If visualization software hampers quantitative analysis (which happens with some commercial visualization products), its use is greatly diminished for astrophysical data analysis. The software system STAR (Scientific Toolkit for Astrophysical Research) was developed as a prototype during the course of the project to better understand the pragmatic concerns raised in the project. STAR led to a better understanding on the importance of collaboration between astrophysicists and computer scientists.
NASA Technical Reports Server (NTRS)
Matthews, Christine G.; Posenau, Mary-Anne; Leonard, Desiree M.; Avis, Elizabeth L.; Debure, Kelly R.; Stacy, Kathryn; Vonofenheim, Bill
1992-01-01
The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described. Information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.
Jin, Hua; Xu, Guiping; Zhang, John X; Ye, Zuoer; Wang, Shufang; Zhao, Lun; Lin, Chong-De; Mo, Lei
2010-12-01
One basic question in brain plasticity research is whether individual life experience in the normal population can affect very early sensory-perceptual processing. Athletes provide a possible model to explore plasticity of the visual cortex, as athletic training in confrontational ball games is quite often accompanied by training of the visual system. We asked professional badminton players to watch video clips related to their training experience and predict where the ball would land, and examined whether they differed from non-player controls in the elicited C1, a visual evoked potential indexing V1 activity. Compared with controls, the players made judgments significantly more accurately, albeit not faster. An early ERP component peaking around 65 ms post-stimulus with a scalp topography centering at the occipital pole (electrode Oz) was observed in both groups and interpreted as the C1 component. With comparable latency, amplitudes of this component were significantly larger in the players than in the non-players, suggesting that it can be modulated by long-term physical training. The results present a clear case of experience-induced brain plasticity in primary visual cortex for very early sensory processing. Copyright © 2010 Elsevier B.V. All rights reserved.
Kometer, Michael; Cahn, B Rael; Andel, David; Carter, Olivia L; Vollenweider, Franz X
2011-03-01
Recent findings suggest that the serotonergic system and particularly the 5-HT2A/1A receptors are implicated in visual processing and possibly the pathophysiology of visual disturbances including hallucinations in schizophrenia and Parkinson's disease. To investigate the role of 5-HT2A/1A receptors in visual processing the effect of the hallucinogenic 5-HT2A/1A agonist psilocybin (125 and 250 μg/kg vs. placebo) on the spatiotemporal dynamics of modal object completion was assessed in normal volunteers (n = 17) using visual evoked potential recordings in conjunction with topographic-mapping and source analysis. These effects were then considered in relation to the subjective intensity of psilocybin-induced visual hallucinations quantified by psychometric measurement. Psilocybin dose-dependently decreased the N170 and, in contrast, slightly enhanced the P1 component selectively over occipital electrode sites. The decrease of the N170 was most apparent during the processing of incomplete object figures. Moreover, during the time period of the N170, the overall reduction of the activation in the right extrastriate and posterior parietal areas correlated positively with the intensity of visual hallucinations. These results suggest a central role of the 5-HT2A/1A-receptors in the modulation of visual processing. Specifically, a reduced N170 component was identified as potentially reflecting a key process of 5-HT2A/1A receptor-mediated visual hallucinations and aberrant modal object completion potential. Copyright © 2011 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Li, Sara Tze Kwan; Hsiao, Janet Hui-Wen
2018-07-01
Music notation and English word reading both involve mapping horizontally arranged visual components to components in sound, in contrast to reading in logographic languages such as Chinese. Accordingly, music-reading expertise may influence English word processing more than Chinese character processing. Here we showed that musicians named English words significantly faster than non-musicians when words were presented in the left visual field/right hemisphere (RH) or the center position, suggesting an advantage of RH processing due to music reading experience. This effect was not observed in Chinese character naming. A follow-up ERP study showed that in a sequential matching task, musicians had reduced RH N170 responses to English non-words under the processing of musical segments as compared with non-musicians, suggesting a shared visual processing mechanism in the RH between music notation and English non-word reading. This shared mechanism may be related to the letter-by-letter, serial visual processing that characterizes RH English word recognition (e.g., Lavidor & Ellis, 2001), which may consequently facilitate English word processing in the RH in musicians. Thus, music reading experience may have differential influences on the processing of different languages, depending on their similarities in the cognitive processes involved. Copyright © 2018 Elsevier B.V. All rights reserved.
Normal aging delays and compromises early multifocal visual attention during object tracking.
Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman
2013-02-01
Declines in selective attention are one of the sources contributing to age-related impairments in a broad range of cognitive functions. Most previous research on mechanisms underlying older adults' selection deficits has studied the deployment of visual attention to static objects and features. Here we investigate neural correlates of age-related differences in spatial attention to multiple objects as they move. We used a multiple object tracking task, in which younger and older adults were asked to keep track of moving target objects that moved randomly in the visual field among irrelevant distractor objects. By recording the brain's electrophysiological responses during the tracking period, we were able to delineate neural processing for targets and distractors at early stages of visual processing (~100-300 msec). Older adults showed less selective attentional modulation in the early phase of the visual P1 component (100-125 msec) than younger adults, indicating that early selection is compromised in old age. However, with a 25-msec delay relative to younger adults, older adults showed distinct processing of targets (125-150 msec), that is, a delayed yet intact attentional modulation. The magnitude of this delayed attentional modulation was related to tracking performance in older adults. The amplitude of the N1 component (175-210 msec) was smaller in older adults than in younger adults, and the target amplification effect of this component was also smaller in older relative to younger adults. Overall, these results indicate that normal aging affects the efficiency and timing of early visual processing during multiple object tracking.
Procedures for precap visual inspection
NASA Technical Reports Server (NTRS)
1984-01-01
Screening procedures for the final precap visual inspection of microcircuits used in electronic system components are described as an aid in training personnel unfamiliar with microcircuits. Processing techniques used in industry for the manufacture of monolithic and hybrid components are presented and imperfections that may be encountered during this inspection are discussed. Problem areas such as scratches, voids, adhesions, and wire bonding are illustrated by photomicrographs. This guide can serve as an effective tool in training personnel to perform precap visual inspections efficiently and reliably.
The Role of Attention in Item-Item Binding in Visual Working Memory
ERIC Educational Resources Information Center
Peterson, Dwight J.; Naveh-Benjamin, Moshe
2017-01-01
An important yet unresolved question regarding visual working memory (VWM) relates to whether or not binding processes within VWM require additional attentional resources compared with processing solely the individual components comprising these bindings. Previous findings indicate that binding of surface features (e.g., colored shapes) within VWM…
Does bimodal stimulus presentation increase ERP components usable in BCIs?
NASA Astrophysics Data System (ADS)
Thurlings, Marieke E.; Brouwer, Anne-Marie; Van Erp, Jan B. F.; Blankertz, Benjamin; Werkhoven, Peter J.
2012-08-01
Event-related potential (ERP)-based brain-computer interfaces (BCIs) employ differences in brain responses to attended and ignored stimuli. Typically, visual stimuli are used. Tactile stimuli have recently been suggested as a gaze-independent alternative. Bimodal stimuli could evoke additional brain activity due to multisensory integration which may be of use in BCIs. We investigated the effect of visual-tactile stimulus presentation on the chain of ERP components, BCI performance (classification accuracies and bitrates) and participants’ task performance (counting of targets). Ten participants were instructed to navigate a visual display by attending (spatially) to targets in sequences of either visual, tactile or visual-tactile stimuli. We observe that attending to visual-tactile (compared to either visual or tactile) stimuli results in an enhanced early ERP component (N1). This bimodal N1 may enhance BCI performance, as suggested by a nonsignificant positive trend in offline classification accuracies. A late ERP component (P300) is reduced when attending to visual-tactile compared to visual stimuli, which is consistent with the nonsignificant negative trend of participants’ task performance. We discuss these findings in the light of affected spatial attention at high-level compared to low-level stimulus processing. Furthermore, we evaluate bimodal BCIs from a practical perspective and for future applications.
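The abstract reports classification accuracies and bitrates; a common convention for the bitrate of an ERP-based BCI is Wolpaw's information-transfer-rate formula. The sketch below assumes that convention (the abstract does not state which formula was used), and the example numbers are arbitrary.

```python
# Wolpaw's information-transfer-rate formula, a common way to express the "bitrate" of an
# ERP-based BCI (assumed here for illustration; not necessarily the formula the authors used).
# N = number of selectable targets, p = classification accuracy.
import math

def wolpaw_bits_per_selection(n_targets: int, accuracy: float) -> float:
    if accuracy <= 0.0 or accuracy >= 1.0:
        # endpoints where the log terms are undefined
        return math.log2(n_targets) if accuracy >= 1.0 else 0.0
    return (math.log2(n_targets)
            + accuracy * math.log2(accuracy)
            + (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1)))

def bitrate_per_minute(n_targets: int, accuracy: float, selections_per_minute: float) -> float:
    return wolpaw_bits_per_selection(n_targets, accuracy) * selections_per_minute

# Example: 4 navigation targets, 80% accuracy, 10 selections per minute -> ~9.6 bits/min.
print(f"{bitrate_per_minute(4, 0.80, 10):.2f} bits/min")
```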
Marini, Francesco; Marzi, Carlo A.
2016-01-01
The visual system leverages organizational regularities of perceptual elements to create meaningful representations of the world. One clear example of such a function, which has been formalized in the Gestalt psychology principles, is the perceptual grouping of simple visual elements (e.g., lines and arcs) into unitary objects (e.g., forms and shapes). The present study sought to characterize automatic attentional capture and related cognitive processing of Gestalt-like visual stimuli at the psychophysiological level by using event-related potentials (ERPs). We measured ERPs during a simple visual reaction time task with bilateral presentations of physically matched elements with or without a Gestalt organization. Results showed that Gestalt (vs. non-Gestalt) stimuli are characterized by a larger N2pc together with enhanced ERP amplitudes of non-lateralized components (N1, N2, P3) starting around 150 ms post-stimulus onset. Thus, we conclude that Gestalt stimuli capture attention automatically and entail characteristic psychophysiological signatures at both early and late processing stages. Highlights: We studied the neural signatures of the automatic processes of visual attention elicited by Gestalt stimuli. We found that a reliable early correlate of attentional capture was the N2pc component. Perceptual and cognitive processing of Gestalt stimuli is associated with larger N1, N2, and P3 components. PMID:27630555
A Neural Marker of Medical Visual Expertise: Implications for Training
ERIC Educational Resources Information Center
Rourke, Liam; Cruikshank, Leanna C.; Shapke, Larissa; Singhal, Anthony
2016-01-01
Researchers have identified a component of the EEG that discriminates visual experts from novices. The marker indexes a comprehensive model of visual processing, and if it is apparent in physicians, it could be used to investigate the development and training of their visual expertise. The purpose of this study was to determine whether a neural…
Improving Scores on Computerized Reading Assessments: The Effects of Colored Overlay Use
ERIC Educational Resources Information Center
Adams, Tracy A.
2012-01-01
Visual stress is a perceptual dysfunction that appears to affect how information is processed as it passes from the eyes to the brain. Photophobia, visual resolution, restricted focus, sustaining focus, and depth perception are all components of visual stress. Because visual stress affects what is perceived by the eye, students with this disorder…
Dual processing of visual rotation for bipedal stance control.
Day, Brian L; Muller, Timothy; Offord, Joanna; Di Giulio, Irene
2016-10-01
When standing, the gain of the body-movement response to a sinusoidally moving visual scene has been shown to get smaller with faster stimuli, possibly through changes in the apportioning of visual flow to self-motion or environment motion. We investigated whether visual-flow speed similarly influences the postural response to a discrete, unidirectional rotation of the visual scene in the frontal plane. Contrary to expectation, the evoked postural response consisted of two sequential components with opposite relationships to visual motion speed. With faster visual rotation the early component became smaller, not through a change in gain but by changes in its temporal structure, while the later component grew larger. We propose that the early component arises from the balance control system minimising apparent self-motion, while the later component stems from the postural system realigning the body with gravity. The source of visual motion is inherently ambiguous such that movement of objects in the environment can evoke self-motion illusions and postural adjustments. Theoretically, the brain can mitigate this problem by combining visual signals with other types of information. A Bayesian model that achieves this was previously proposed and predicts a decreasing gain of postural response with increasing visual motion speed. Here we test this prediction for discrete, unidirectional, full-field visual rotations in the frontal plane of standing subjects. The speed (0.75-48 deg/s) and direction of visual rotation were pseudo-randomly varied and mediolateral responses were measured from displacements of the trunk and horizontal ground reaction forces. The behaviour evoked by this visual rotation was more complex than has hitherto been reported, consisting broadly of two consecutive components with respective latencies of ∼190 ms and >0.7 s. Both components were sensitive to visual rotation speed, but with diametrically opposite relationships. Thus, the early component decreased with faster visual rotation, while the later component increased. Furthermore, the decrease in size of the early component was not achieved by a simple attenuation of gain, but by a change in its temporal structure. We conclude that the two components represent expressions of different motor functions, both pertinent to the control of bipedal stance. We propose that the early response stems from the balance control system attempting to minimise unintended body motion, while the later response arises from the postural control system attempting to align the body with gravity. © 2016 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.
Stone, David B.; Urrea, Laura J.; Aine, Cheryl J.; Bustillo, Juan R.; Clark, Vincent P.; Stephen, Julia M.
2011-01-01
In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. PMID:21807011
Visualization techniques to aid in the analysis of multispectral astrophysical data sets
NASA Technical Reports Server (NTRS)
Brugel, E. W.; Domik, Gitta O.; Ayres, T. R.
1993-01-01
The goal of this project was to support the scientific analysis of multi-spectral astrophysical data by means of scientific visualization. Scientific visualization offers its greatest value if it is not used as a method separate or alternative to other data analysis methods but rather in addition to these methods. Together with quantitative analysis of data, such as offered by statistical analysis, image or signal processing, visualization attempts to explore all information inherent in astrophysical data in the most effective way. Data visualization is one aspect of data analysis. Our taxonomy as developed in Section 2 includes identification and access to existing information, preprocessing and quantitative analysis of data, visual representation and the user interface as major components to the software environment of astrophysical data analysis. In pursuing our goal to provide methods and tools for scientific visualization of multi-spectral astrophysical data, we therefore looked at scientific data analysis as one whole process, adding visualization tools to an already existing environment and integrating the various components that define a scientific data analysis environment. As long as the software development process of each component is separate from all other components, users of data analysis software are constantly interrupted in their scientific work in order to convert from one data format to another, or to move from one storage medium to another, or to switch from one user interface to another. We also took an in-depth look at scientific visualization and its underlying concepts, current visualization systems, their contributions and their shortcomings. The role of data visualization is to stimulate mental processes different from quantitative data analysis, such as the perception of spatial relationships or the discovery of patterns or anomalies while browsing through large data sets. Visualization often leads to an intuitive understanding of the meaning of data values and their relationships by sacrificing accuracy in interpreting the data values. In order to be accurate in the interpretation, data values need to be measured, computed on, and compared to theoretical or empirical models (quantitative analysis). If visualization software hampers quantitative analysis (which happens with some commercial visualization products), its use is greatly diminished for astrophysical data analysis. The software system STAR (Scientific Toolkit for Astrophysical Research) was developed as a prototype during the course of the project to better understand the pragmatic concerns raised in the project. STAR led to a better understanding on the importance of collaboration between astrophysicists and computer scientists. Twenty-one examples of the use of visualization for astrophysical data are included with this report. Sixteen publications related to efforts performed during or initiated through work on this project are listed at the end of this report.
AstroVis: Visualizing astronomical data cubes
NASA Astrophysics Data System (ADS)
Finniss, Stephen; Tyler, Robin; Questiaux, Jacques
2016-08-01
AstroVis enables rapid visualization of large data files on platforms supporting the OpenGL rendering library. Radio astronomical observations are typically three dimensional and stored as data cubes. AstroVis implements a scalable approach to accessing these files using three components: a File Access Component (FAC) that reduces the impact of reading time and thereby speeds up access to the data; the Image Processing Component (IPC), which breaks the data cube into smaller pieces that can be processed locally and provides a representation of the whole file; and Data Visualization, which implements an Overview + Detail approach to reduce the dimensions of the data being worked with and the amount of memory required to store it. The result is a 3D display paired with a 2D detail display that contains a small subsection of the original file in full resolution without reducing the data in any way.
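The Overview + Detail idea described above can be illustrated with a small, library-agnostic sketch: build a block-averaged overview of a cube and pull a full-resolution subcube on demand. This is not AstroVis code; the array shape, downsampling factor, and subcube size are made up, and a real data cube would be memory-mapped (for example a FITS file opened with memmap) rather than generated in memory.

    import numpy as np

    # Synthetic stand-in for a radio data cube (channels, y, x); a real file would be
    # memory-mapped so that only the pieces being viewed are actually read from disk.
    cube = np.random.rand(64, 256, 256).astype(np.float32)

    def overview(data, factor=4):
        """Coarse 'overview' volume: block-average each axis by `factor`."""
        c, y, x = (s - s % factor for s in data.shape)
        trimmed = data[:c, :y, :x]
        return trimmed.reshape(c // factor, factor,
                               y // factor, factor,
                               x // factor, factor).mean(axis=(1, 3, 5))

    def detail(data, center, half_size=16):
        """Full-resolution 'detail' subcube around a voxel chosen in the overview."""
        slices = tuple(slice(max(0, c - half_size), min(s, c + half_size))
                       for c, s in zip(center, data.shape))
        return data[slices]

    small = overview(cube)                # small enough for an interactive 3D display
    patch = detail(cube, (32, 128, 128))  # full-resolution view of a small region
    print(small.shape, patch.shape)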
Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel
2012-01-01
Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200–250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components. PMID:22363479
Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel
2012-01-01
Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200-250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components.
Al-Marri, Faraj; Reza, Faruque; Begum, Tahamina; Hitam, Wan Hazabbah Wan; Jin, Goh Khean; Xiang, Jing
2017-10-25
Visual cognitive function is important for building up executive function in daily life. Perception of visual Number form (e.g., an Arabic digit) and numerosity (the magnitude of the Number) is of interest to cognitive neuroscientists. Neural correlates and functional measurements of Number representations become complex when their semantic categories are combined with other concepts such as shape and colour. Colour perception can further modulate visual cognition. The Ishihara pseudoisochromatic plates are among the best and most common screening tools for basic red-green colour vision testing, yet there has been little study of visual cognitive function assessed with these plates. We recruited 25 healthy normal trichromat volunteers and extended these studies using a 128-sensor net to record event-related EEG. Subjects were asked to respond by pressing numbered buttons when they saw the Number and Non-number plates of the Ishihara colour vision test. Amplitudes and latencies of the N100 and P300 event-related potential (ERP) components were analysed from 19 electrode sites in the international 10-20 system. Brain topographic maps, cortical activation patterns and Granger causality (effective connectivity) were analysed from 128 electrode sites. The absence of major differences in the N100 ERP component between the two stimulus types indicates that early selective attention processing was similar for Number and Non-number plates, whereas the significantly higher amplitudes and longer latencies of the P300 component, together with slower reaction times for Non-number plates, imply that a greater attentional load was allocated during Non-number plate processing. An asymmetric scalp voltage map was observed for the P300 component, with higher intensity in the left hemisphere for Number plate tasks and higher intensity in the right hemisphere for Non-number plate tasks. Asymmetric cortical activation and connectivity patterns revealed that Number recognition engaged the occipital and left frontal areas, whereas activation during Non-number plate processing was limited to the occipital area. Finally, the results showed that the visual recognition of Numbers dissociates from the recognition of Non-numbers at the level of defined neural networks: Number recognition was not only a process of visual perception and attention but was also related to a higher level of cognitive function, that of language.
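For readers unfamiliar with how component amplitudes and latencies such as those above are quantified, the sketch below shows one common approach: search a predefined time window of the averaged waveform for the most negative (N100) or most positive (P300) point. The sampling rate, windows, and waveform here are placeholders, not parameters from the study, and dedicated EEG packages (e.g. MNE-Python) provide far more complete tooling.

    import numpy as np

    def erp_peak(evoked, times, window, polarity):
        """Peak amplitude and latency of an ERP component.

        evoked   : 1-D averaged waveform for one electrode (microvolts)
        times    : matching time vector in seconds, 0 = stimulus onset
        window   : (start, end) search window in seconds
        polarity : 'neg' for N100-like troughs, 'pos' for P300-like peaks
        """
        mask = (times >= window[0]) & (times <= window[1])
        segment, seg_times = evoked[mask], times[mask]
        idx = segment.argmin() if polarity == "neg" else segment.argmax()
        return segment[idx], seg_times[idx]

    # Hypothetical averaged waveform sampled at 250 Hz from -0.1 to 0.8 s.
    fs = 250
    times = np.arange(-0.1, 0.8, 1 / fs)
    evoked = np.random.randn(times.size)   # stand-in for a real average

    n100 = erp_peak(evoked, times, (0.08, 0.15), "neg")
    p300 = erp_peak(evoked, times, (0.25, 0.50), "pos")
    print("N100 amp/lat:", n100, "P300 amp/lat:", p300)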
Components of Attention in Grapheme-Color Synesthesia: A Modeling Approach.
Ásgeirsson, Árni Gunnar; Nordfang, Maria; Sørensen, Thomas Alrik
2015-01-01
Grapheme-color synesthesia is a condition where the perception of graphemes consistently and automatically evokes an experience of non-physical color. Many have studied how synesthesia affects the processing of achromatic graphemes, but less is known about the synesthetic processing of physically colored graphemes. Here, we investigated how the visual processing of colored letters is affected by the congruence or incongruence of synesthetic grapheme-color associations. We briefly presented graphemes (10-150 ms) to 9 grapheme-color synesthetes and to 9 control observers. Their task was to report as many letters (targets) as possible, while ignoring digit (distractors). Graphemes were either congruently or incongruently colored with the synesthetes' reported grapheme-color association. A mathematical model, based on Bundesen's (1990) Theory of Visual Attention (TVA), was fitted to each observer's data, allowing us to estimate discrete components of visual attention. The models suggested that the synesthetes processed congruent letters faster than incongruent ones, and that they were able to retain more congruent letters in visual short-term memory, while the control group's model parameters were not significantly affected by congruence. The increase in processing speed, when synesthetes process congruent letters, suggests that synesthesia affects the processing of letters at a perceptual level. To account for the benefit in processing speed, we propose that synesthetic associations become integrated into the categories of graphemes, and that letter colors are considered as evidence for making certain perceptual categorizations in the visual system. We also propose that enhanced visual short-term memory capacity for congruently colored graphemes can be explained by the synesthetes' expertise regarding their specific grapheme-color associations.
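The full TVA whole-report model estimates processing speed, perceptual threshold, and visual short-term memory capacity jointly; the sketch below fits only a drastically simplified exponential-accrual stand-in to hypothetical accuracy-by-exposure-duration data, to give a feel for how a processing-rate advantage for congruent letters would show up in fitted parameters. The accuracies, rates, and thresholds are invented, and this function is not the model actually used in the study.

    import numpy as np
    from scipy.optimize import curve_fit

    def accrual(t, v, t0):
        """Simplified TVA-style accrual: probability that one letter finishes
        encoding by exposure duration t, given processing rate v (items/s)
        and perceptual threshold t0 (s)."""
        return np.where(t > t0, 1.0 - np.exp(-v * (t - t0)), 0.0)

    # Hypothetical report accuracies at exposure durations in the 10-150 ms range.
    durations   = np.array([0.010, 0.030, 0.050, 0.080, 0.110, 0.150])
    congruent   = np.array([0.05, 0.35, 0.55, 0.75, 0.85, 0.92])
    incongruent = np.array([0.02, 0.20, 0.40, 0.60, 0.75, 0.85])

    for label, acc in [("congruent", congruent), ("incongruent", incongruent)]:
        (v, t0), _ = curve_fit(accrual, durations, acc, p0=(20.0, 0.01),
                               bounds=([0.0, 0.0], [200.0, 0.2]))
        print(f"{label}: rate v = {v:.1f}/s, threshold t0 = {t0 * 1000:.0f} ms")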
ERIC Educational Resources Information Center
Bogon, Johanna; Finke, Kathrin; Schulte-Körne, Gerd; Müller, Hermann J.; Schneider, Werner X.; Stenneken, Prisca
2014-01-01
People with developmental dyslexia (DD) have been shown to be impaired in tasks that require the processing of multiple visual elements in parallel. It has been suggested that this deficit originates from disturbed visual attentional functions. The parameter-based assessment of visual attention based on Bundesen's (1990) theory of visual…
Amsel, Ben D; Kutas, Marta; Coulson, Seana
2017-10-01
In grapheme-color synesthesia, seeing particular letters or numbers evokes the experience of specific colors. We investigate the brain's real-time processing of words in this population by recording event-related brain potentials (ERPs) from 15 grapheme-color synesthetes and 15 controls as they judged the validity of word pairs ('yellow banana' vs. 'blue banana') presented under high and low visual contrast. Low contrast words elicited delayed P1/N170 visual ERP components in both groups, relative to high contrast. When color concepts were conveyed to synesthetes by individually tailored achromatic grapheme strings ('55555 banana'), visual contrast effects were like those in color words: P1/N170 components were delayed but unchanged in amplitude. When controls saw equivalent colored grapheme strings, visual contrast modulated P1/N170 amplitude but not latency. Color induction in synesthetes thus differs from color perception in controls. Independent from experimental effects, all orthographic stimuli elicited larger N170 and P2 in synesthetes than controls. While P2 (150-250ms) enhancement was similar in all synesthetes, N170 (130-210ms) amplitude varied with individual differences in synesthesia and visual imagery. Results suggest immediate cross-activation in visual areas processing color and shape is most pronounced in so-called projector synesthetes whose concurrent colors are experienced as originating in external space.
Neuropsychological Component of Imagery Processing
1991-01-25
Prefrontal Neuronal Responses during Audiovisual Mnemonic Processing
Hwang, Jaewon
2015-01-01
During communication we combine auditory and visual information. Neurophysiological research in nonhuman primates has shown that single neurons in ventrolateral prefrontal cortex (VLPFC) exhibit multisensory responses to faces and vocalizations presented simultaneously. However, whether VLPFC is also involved in maintaining those communication stimuli in working memory or combining stored information across different modalities is unknown, although its human homolog, the inferior frontal gyrus, is known to be important in integrating verbal information from auditory and visual working memory. To address this question, we recorded from VLPFC while rhesus macaques (Macaca mulatta) performed an audiovisual working memory task. Unlike traditional match-to-sample/nonmatch-to-sample paradigms, which use unimodal memoranda, our nonmatch-to-sample task used dynamic movies consisting of both facial gestures and the accompanying vocalizations. For the nonmatch conditions, a change in the auditory component (vocalization), the visual component (face), or both components was detected. Our results show that VLPFC neurons are activated by stimulus and task factors: while some neurons simply responded to a particular face or a vocalization regardless of the task period, others exhibited activity patterns typically related to working memory such as sustained delay activity and match enhancement/suppression. In addition, we found neurons that detected the component change during the nonmatch period. Interestingly, some of these neurons were sensitive to the change of both components and therefore combined information from auditory and visual working memory. These results suggest that VLPFC is not only involved in the perceptual processing of faces and vocalizations but also in their mnemonic processing. PMID:25609614
Evolutionary relevance facilitates visual information processing.
Jackson, Russell E; Calvillo, Dusti P
2013-11-03
Visual search of the environment is a fundamental human behavior that perceptual load affects powerfully. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component to evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.
Lau, Johnny King L; Humphreys, Glyn W; Douis, Hassan; Balani, Alex; Bickerton, Wai-Ling; Rotshtein, Pia
2015-01-01
We report a lesion-symptom mapping analysis of visual speech production deficits in a large group (280) of stroke patients at the sub-acute stage (<120 days post-stroke). Performance on object naming was evaluated alongside three other tests of visual speech production, namely sentence production to a picture, sentence reading and nonword reading. A principal component analysis was performed on all these tests' scores and revealed a 'shared' component that loaded across all the visual speech production tasks and a 'unique' component that isolated object naming from the other three tasks. Regions for the shared component were observed in the left fronto-temporal cortices, fusiform gyrus and bilateral visual cortices. Lesions in these regions linked to both poor object naming and impairment in general visual-speech production. On the other hand, the unique naming component was potentially associated with the bilateral anterior temporal poles, hippocampus and cerebellar areas. This is in line with the models proposing that object naming relies on a left-lateralised language dominant system that interacts with a bilateral anterior temporal network. Neuropsychological deficits in object naming can reflect both the increased demands specific to the task and the more general difficulties in language processing.
Korinth, Sebastian Peter; Sommer, Werner; Breznitz, Zvia
2012-01-01
Little is known about the relationship of reading speed and early visual processes in normal readers. Here we examined the association of the early P1, N170 and late N1 component in visual event-related potentials (ERPs) with silent reading speed and a number of additional cognitive skills in a sample of 52 adult German readers utilizing a Lexical Decision Task (LDT) and a Face Decision Task (FDT). Amplitudes of the N170 component in the LDT but, interestingly, also in the FDT correlated with behavioral tests measuring silent reading speed. We suggest that reading speed performance can be at least partially accounted for by the extraction of essential structural information from visual stimuli, consisting of a domain-general and a domain-specific expertise-based portion. © 2011 Elsevier Inc. All rights reserved.
Honeybees in a virtual reality environment learn unique combinations of colour and shape.
Rusch, Claire; Roth, Eatai; Vinauger, Clément; Riffell, Jeffrey A
2017-10-01
Honeybees are well-known models for the study of visual learning and memory. Whereas most of our knowledge of learned responses comes from experiments using free-flying bees, a tethered preparation would allow fine-scale control of the visual stimuli as well as accurate characterization of the learned responses. Unfortunately, conditioning procedures using visual stimuli in tethered bees have been limited in their efficacy. In this study, using a novel virtual reality environment and a differential training protocol in tethered walking bees, we show that the majority of honeybees learn visual stimuli, and need only six paired training trials to learn the stimulus. We found that bees readily learn visual stimuli that differ in both shape and colour. However, bees learn certain components over others (colour versus shape), and visual stimuli are learned in a non-additive manner with the interaction of specific colour and shape combinations being crucial for learned responses. To better understand which components of the visual stimuli the bees learned, the shape-colour association of the stimuli was reversed either during or after training. Results showed that maintaining the visual stimuli in training and testing phases was necessary to elicit visual learning, suggesting that bees learn multiple components of the visual stimuli. Together, our results demonstrate a protocol for visual learning in restrained bees that provides a powerful tool for understanding how components of a visual stimulus elicit learned responses as well as elucidating how visual information is processed in the honeybee brain. © 2017. Published by The Company of Biologists Ltd.
Weng, Xiaoqian; Li, Guangze; Li, Rongbao
2016-08-01
This study examined the mediating role of working memory (WM) in the relation between rapid automatized naming (RAN) and Chinese reading comprehension. Three tasks, programmed in E-prime 2.0, differentially assessed the visual and verbal components of WM. Data collected from 55 Chinese college students were analyzed using correlations and hierarchical regression to determine the connections among RAN, reading comprehension, and the WM components. Results showed that WM played a significant mediating role in the RAN-reading relation and that auditory WM made stronger contributions than visual WM. Taking into account the multi-component nature of WM and the specificity of Chinese reading processing, the study discusses the mediating power of the WM components, particularly auditory WM, further clarifying the components involved in the RAN-reading relation and thus providing some insight into the complicated process of reading Chinese.
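As a sketch of the mediation logic referenced above (total, direct, and indirect effects), the following code estimates the classic a, b, c, and c' paths with ordinary least squares on synthetic data. The generated variables and effect sizes are invented for illustration; the study's own analysis used correlations and hierarchical regression on real participant data.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 55                                   # sample size matching the study's N
    ran = rng.normal(size=n)                 # rapid automatized naming (standardized)
    wm = 0.5 * ran + rng.normal(scale=0.8, size=n)              # hypothetical mediator
    reading = 0.3 * ran + 0.4 * wm + rng.normal(scale=0.8, size=n)

    def slopes(y, *predictors):
        """OLS regression coefficients for y ~ predictors (intercept omitted from output)."""
        X = np.column_stack([np.ones(len(y)), *predictors])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta[1:]

    c_total = slopes(reading, ran)[0]          # total effect of RAN on reading
    a = slopes(wm, ran)[0]                     # RAN -> WM path
    c_prime, b = slopes(reading, ran, wm)      # direct effect and WM -> reading path
    print(f"total c = {c_total:.2f}, direct c' = {c_prime:.2f}, indirect a*b = {a * b:.2f}")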
Toward a New Theory for Selecting Instructional Visuals.
ERIC Educational Resources Information Center
Croft, Richard S.; Burton, John K.
This paper provides a rationale for the selection of illustrations and visual aids for the classroom. The theories that describe the processing of visuals are dual coding theory and cue summation theory. Concept attainment theory offers a basis for selecting which cues are relevant for any learning task which includes a component of identification…
Evidence for Two Attentional Components in Visual Working Memory
ERIC Educational Resources Information Center
Allen, Richard J.; Baddeley, Alan D.; Hitch, Graham J.
2014-01-01
How does executive attentional control contribute to memory for sequences of visual objects, and what does this reveal about storage and processing in working memory? Three experiments examined the impact of a concurrent executive load (backward counting) on memory for sequences of individually presented visual objects. Experiments 1 and 2 found…
Using a virtual world for robot planning
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Monaco, John V.; Lin, Yixia; Funk, Christopher; Lyons, Damian
2012-06-01
We are building a robot cognitive architecture that constructs a real-time virtual copy of itself and its environment, including people, and uses the model to process perceptual information and to plan its movements. This paper describes the structure of this architecture. The software components of this architecture include PhysX for the virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture that controls the perceptual processing and task planning. The RS (Robot Schemas) language is implemented in Soar, providing the ability to reason about concurrency and time. This Soar/RS component controls visual processing, deciding which objects and dynamics to render into PhysX, and the degree of detail required for the task. As the robot runs, its virtual model diverges from physical reality, and errors grow. The Match-Mediated Difference component monitors these errors by comparing the visual data with corresponding data from virtual cameras, and notifies Soar/RS of significant differences, e.g. a new object that appears, or an object that changes direction unexpectedly. Soar/RS can then run PhysX much faster than real-time and search among possible future world paths to plan the robot's actions. We report experimental results in indoor environments.
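The Match-Mediated Difference idea, comparing camera images against renders from matched virtual cameras, can be sketched as a simple image-differencing step. The code below is an illustrative stand-in rather than the architecture's actual component: it flags regions where a (synthetic) real frame and virtual frame disagree, the kind of event that would be reported to Soar/RS.

    import cv2
    import numpy as np

    def significant_differences(real_frame, virtual_frame,
                                blur=5, thresh=40, min_area=200):
        """Return bounding boxes where the physical camera image and the virtual
        camera render disagree enough to be worth reporting to the planner."""
        real = cv2.GaussianBlur(cv2.cvtColor(real_frame, cv2.COLOR_BGR2GRAY), (blur, blur), 0)
        virt = cv2.GaussianBlur(cv2.cvtColor(virtual_frame, cv2.COLOR_BGR2GRAY), (blur, blur), 0)
        diff = cv2.absdiff(real, virt)
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

    # Hypothetical frames of the same size; in the architecture described above these
    # would come from the robot's camera and from a virtual camera at the same pose
    # inside the PhysX world.
    real = np.zeros((240, 320, 3), np.uint8)
    virtual = real.copy()
    cv2.rectangle(real, (100, 80), (160, 140), (255, 255, 255), -1)  # a "new object"
    print(significant_differences(real, virtual))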
Visual and Auditory Components in the Perception of Asynchronous Audiovisual Speech
Alcalá-Quintana, Rocío
2015-01-01
Research on asynchronous audiovisual speech perception manipulates experimental conditions to observe their effects on synchrony judgments. Probabilistic models establish a link between the sensory and decisional processes underlying such judgments and the observed data, via interpretable parameters that allow testing hypotheses and making inferences about how experimental manipulations affect such processes. Two models of this type have recently been proposed, one based on independent channels and the other using a Bayesian approach. Both models are fitted here to a common data set, with a subsequent analysis of the interpretation they provide about how experimental manipulations affected the processes underlying perceived synchrony. The data consist of synchrony judgments as a function of audiovisual offset in a speech stimulus, under four within-subjects manipulations of the quality of the visual component. The Bayesian model could not accommodate asymmetric data, was rejected by goodness-of-fit statistics for 8/16 observers, and was found to be nonidentifiable, which renders uninterpretable parameter estimates. The independent-channels model captured asymmetric data, was rejected for only 1/16 observers, and identified how sensory and decisional processes mediating asynchronous audiovisual speech perception are affected by manipulations that only alter the quality of the visual component of the speech signal. PMID:27551361
Mo, Lei; Xu, Guiping; Kay, Paul; Tan, Li-Hai
2011-01-01
Previous studies have shown that the effect of language on categorical perception of color is stronger when stimuli are presented in the right visual field than in the left. To examine whether this lateralized effect occurs preattentively at an early stage of processing, we monitored the visual mismatch negativity, which is a component of the event-related potential of the brain to an unfamiliar stimulus among a temporally presented series of stimuli. In the oddball paradigm we used, the deviant stimuli were unrelated to the explicit task. A significant interaction between color-pair type (within-category vs. between-category) and visual field (left vs. right) was found. The amplitude of the visual mismatch negativity component evoked by the within-category deviant was significantly smaller than that evoked by the between-category deviant when displayed in the right visual field, but no such difference was observed for the left visual field. This result constitutes electroencephalographic evidence that the lateralized Whorf effect per se occurs out of awareness and at an early stage of processing. PMID:21844340
Stone, David B; Urrea, Laura J; Aine, Cheryl J; Bustillo, Juan R; Clark, Vincent P; Stephen, Julia M
2011-10-01
In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. Copyright © 2011 Elsevier Ltd. All rights reserved.
Perceived Synchrony of Frog Multimodal Signal Components Is Influenced by Content and Order.
Taylor, Ryan C; Page, Rachel A; Klein, Barrett A; Ryan, Michael J; Hunter, Kimberly L
2017-10-01
Multimodal signaling is common in communication systems. Depending on the species, individual signal components may be produced synchronously as a result of physiological constraint (fixed) or each component may be produced independently (fluid) in time. For animals that rely on fixed signals, a basic prediction is that asynchrony between the components should degrade the perception of signal salience, reducing receiver response. Male túngara frogs, Physalaemus pustulosus, produce a fixed multisensory courtship signal by vocalizing with two call components (whines and chucks) and inflating a vocal sac (visual component). Using a robotic frog, we tested female responses to variation in the temporal arrangement between acoustic and visual components. When the visual component lagged a complex call (whine + chuck), females largely rejected this asynchronous multisensory signal in favor of the complex call absent the visual cue. When the chuck component was removed from one call, but the robofrog inflation lagged the complex call, females responded strongly to the asynchronous multimodal signal. When the chuck component was removed from both calls, females reversed preference and responded positively to the asynchronous multisensory signal. When the visual component preceded the call, females responded as often to the multimodal signal as to the call alone. These data show that asynchrony of a normally fixed signal does reduce receiver responsiveness. The magnitude and overall response, however, depend on specific temporal interactions between the acoustic and visual components. The sensitivity of túngara frogs to lagging visual cues, but not leading ones, and the influence of acoustic signal content on the perception of visual asynchrony is similar to those reported in human psychophysics literature. Virtually all acoustically communicating animals must conduct auditory scene analyses and identify the source of signals. Our data suggest that some basic audiovisual neural integration processes may be at work in the vertebrate brain. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology 2017. This work is written by US Government employees and is in the public domain in the US.
Components of Attention in Grapheme-Color Synesthesia: A Modeling Approach
Ásgeirsson, Árni Gunnar; Nordfang, Maria; Sørensen, Thomas Alrik
2015-01-01
Grapheme-color synesthesia is a condition where the perception of graphemes consistently and automatically evokes an experience of non-physical color. Many have studied how synesthesia affects the processing of achromatic graphemes, but less is known about the synesthetic processing of physically colored graphemes. Here, we investigated how the visual processing of colored letters is affected by the congruence or incongruence of synesthetic grapheme-color associations. We briefly presented graphemes (10–150 ms) to 9 grapheme-color synesthetes and to 9 control observers. Their task was to report as many letters (targets) as possible, while ignoring digit (distractors). Graphemes were either congruently or incongruently colored with the synesthetes’ reported grapheme-color association. A mathematical model, based on Bundesen’s (1990) Theory of Visual Attention (TVA), was fitted to each observer’s data, allowing us to estimate discrete components of visual attention. The models suggested that the synesthetes processed congruent letters faster than incongruent ones, and that they were able to retain more congruent letters in visual short-term memory, while the control group’s model parameters were not significantly affected by congruence. The increase in processing speed, when synesthetes process congruent letters, suggests that synesthesia affects the processing of letters at a perceptual level. To account for the benefit in processing speed, we propose that synesthetic associations become integrated into the categories of graphemes, and that letter colors are considered as evidence for making certain perceptual categorizations in the visual system. We also propose that enhanced visual short-term memory capacity for congruently colored graphemes can be explained by the synesthetes’ expertise regarding their specific grapheme-color associations. PMID:26252019
An ERP investigation of visual word recognition in syllabary scripts.
Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J
2013-06-01
The bimodal interactive-activation model has been successfully applied to understanding the neurocognitive processes involved in reading words in alphabetic scripts, as reflected in the modulation of ERP components in masked repetition priming. In order to test the generalizability of this approach, in the present study we examined word recognition in a different writing system, the Japanese syllabary scripts hiragana and katakana. Native Japanese participants were presented with repeated or unrelated pairs of Japanese words in which the prime and target words were both in the same script (within-script priming, Exp. 1) or were in the opposite script (cross-script priming, Exp. 2). As in previous studies with alphabetic scripts, in both experiments the N250 (sublexical processing) and N400 (lexical-semantic processing) components were modulated by priming, although the time course was somewhat delayed. The earlier N/P150 effect (visual feature processing) was present only in "Experiment 1: Within-script priming", in which the prime and target words shared visual features. Overall, the results provide support for the hypothesis that visual word recognition involves a generalizable set of neurocognitive processes that operate in similar manners across different writing systems and languages, as well as pointing to the viability of the bimodal interactive-activation framework for modeling such processes.
An ERP Investigation of Visual Word Recognition in Syllabary Scripts
Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J.
2013-01-01
The bi-modal interactive-activation model has been successfully applied to understanding the neuro-cognitive processes involved in reading words in alphabetic scripts, as reflected in the modulation of ERP components in masked repetition priming. In order to test the generalizability of this approach, the current study examined word recognition in a different writing system, the Japanese syllabary scripts Hiragana and Katakana. Native Japanese participants were presented with repeated or unrelated pairs of Japanese words where the prime and target words were both in the same script (within-script priming, Experiment 1) or were in the opposite script (cross-script priming, Experiment 2). As in previous studies with alphabetic scripts, in both experiments the N250 (sub-lexical processing) and N400 (lexical-semantic processing) components were modulated by priming, although the time-course was somewhat delayed. The earlier N/P150 effect (visual feature processing) was present only in Experiment 1 where prime and target words shared visual features. Overall, the results provide support for the hypothesis that visual word recognition involves a generalizable set of neuro-cognitive processes that operate in a similar manner across different writing systems and languages, as well as pointing to the viability of the bi-modal interactive activation framework for modeling such processes. PMID:23378278
[Working memory and work with memory: visual-spatial and further components of processing].
Velichkovsky, B M; Challis, B H; Pomplun, M
1995-01-01
Empirical and theoretical evidence for the concept of working memory is considered. We argue that the major weakness of this concept is its loose connection with the knowledge about background perceptive and cognitive processes. Results of two relevant experiments are provided. The first study demonstrated the classical chunking effect in a speeded visual search and comparison task, the proper domain of a large-capacity very short term sensory store. Our second study was a kind of extended levels-of-processing experiment. We attempted to manipulate visual, phonological, and (different) executive components of long-term memory in the hope of finding some systematic relationships between these forms of processing. Indeed, the results demonstrated a high degree of systematicity without any apparent need for a concept such as working memory for the explanation. Accordingly, the place for working memory is at all the interfaces where our metacognitive strategies interfere with mostly domain-specific cognitive mechanisms. Working memory is simply our work with memory.
ERIC Educational Resources Information Center
Welcome, Suzanne E.; Joanisse, Marc F.
2012-01-01
We used fMRI to examine patterns of brain activity associated with component processes of visual word recognition and their relationships to individual differences in reading skill. We manipulated both the judgments adults made on written stimuli and the characteristics of the stimuli. Phonological processing led to activation in left inferior…
Integrating visual learning within a model-based ATR system
NASA Astrophysics Data System (ADS)
Carlotto, Mark; Nebrich, Mark
2017-05-01
Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery such as target direction, and to assess the performance of the visual learning process itself.
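The chip extraction step described above (cutting, rotating, and scaling candidate regions into a canonical form before classification) can be sketched with standard OpenCV calls. This is not the IDC implementation; the chip size, angle, and scale below are illustrative, and in practice the rotation and scale would come from the model-based detection stage.

    import cv2
    import numpy as np

    def extract_chip(image, center, angle_deg, scale, chip_size=64):
        """Cut a rotation- and scale-normalized chip around a candidate detection,
        so a downstream classifier sees objects in a canonical orientation and size."""
        m = cv2.getRotationMatrix2D(center, angle_deg, scale)
        rotated = cv2.warpAffine(image, m, (image.shape[1], image.shape[0]))
        x, y = int(center[0]), int(center[1])
        half = chip_size // 2
        chip = rotated[max(0, y - half):y + half, max(0, x - half):x + half]
        return cv2.resize(chip, (chip_size, chip_size))

    # Hypothetical aerial frame with a bright blob standing in for a candidate target.
    frame = np.zeros((480, 640), np.uint8)
    cv2.circle(frame, (320, 240), 10, 255, -1)
    chip = extract_chip(frame, (320.0, 240.0), angle_deg=30.0, scale=1.5)
    print(chip.shape)   # (64, 64) chip ready for the visual-cortex-model classifier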
Cross-modal orienting of visual attention.
Hillyard, Steven A; Störmer, Viola S; Feng, Wenfeng; Martinez, Antigona; McDonald, John J
2016-03-01
This article reviews a series of experiments that combined behavioral and electrophysiological recording techniques to explore the hypothesis that salient sounds attract attention automatically and facilitate the processing of visual stimuli at the sound's location. This cross-modal capture of visual attention was found to occur even when the attracting sound was irrelevant to the ongoing task and was non-predictive of subsequent events. A slow positive component in the event-related potential (ERP) that was localized to the visual cortex was found to be closely coupled with the orienting of visual attention to a sound's location. This neural sign of visual cortex activation was predictive of enhanced perceptual processing and was paralleled by a desynchronization (blocking) of the ongoing occipital alpha rhythm. Further research is needed to determine the nature of the relationship between the slow positive ERP evoked by the sound and the alpha desynchronization and to understand how these electrophysiological processes contribute to improved visual-perceptual processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
Visualization study of counterflow in superfluid 4He using metastable helium molecules.
Guo, W; Cahn, S B; Nikkel, J A; Vinen, W F; McKinsey, D N
2010-07-23
Heat is transferred in superfluid 4He via a process known as thermal counterflow. It has been known for many years that above a critical heat current the superfluid component in this counterflow becomes turbulent. It has been suspected that the normal-fluid component may become turbulent as well, but experimental verification is difficult without a technique for visualizing the flow. Here we report a series of visualization studies on the normal-fluid component in a thermal counterflow performed by imaging the motion of seeded metastable helium molecules using a laser-induced-fluorescence technique. We present evidence that the flow of the normal fluid is indeed turbulent at relatively large velocities. Thermal counterflow in which both components are turbulent presents us with a theoretically challenging type of turbulent behavior that is new to physics.
Visual Cortical Entrainment to Motion and Categorical Speech Features during Silent Lipreading
O’Sullivan, Aisling E.; Crosse, Michael J.; Di Liberto, Giovanni M.; Lalor, Edmund C.
2017-01-01
Speech is a multisensory percept, comprising an auditory and visual component. While the content and processing pathways of audio speech have been well characterized, the visual component is less well understood. In this work, we expand current methodologies using system identification to introduce a framework that facilitates the study of visual speech in its natural, continuous form. Specifically, we use models based on the unheard acoustic envelope (E), the motion signal (M) and categorical visual speech features (V) to predict EEG activity during silent lipreading. Our results show that each of these models performs similarly at predicting EEG in visual regions and that respective combinations of the individual models (EV, MV, EM and EMV) provide an improved prediction of the neural activity over their constituent models. In comparing these different combinations, we find that the model incorporating all three types of features (EMV) outperforms the individual models, as well as both the EV and MV models, while it performs similarly to the EM model. Importantly, EM does not outperform EV and MV, which, considering the higher dimensionality of the V model, suggests that more data is needed to clarify this finding. Nevertheless, the performance of EMV, and comparisons of the subject performances for the three individual models, provides further evidence to suggest that visual regions are involved in both low-level processing of stimulus dynamics and categorical speech perception. This framework may prove useful for investigating modality-specific processing of visual speech under naturalistic conditions. PMID:28123363
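The system-identification models above (E, M, V and their combinations) are linear mappings from time-lagged stimulus features to EEG. The sketch below shows the core of such a model, a ridge-regression temporal response function fit to synthetic data; the lag count, regularization, and signals are placeholders rather than the study's settings, and dedicated system-identification toolboxes add cross-validation and multichannel support.

    import numpy as np

    def lagged_design(stimulus, n_lags):
        """Time-lagged design matrix: each row holds the current and previous
        n_lags - 1 samples of the stimulus feature(s)."""
        n, k = stimulus.shape
        X = np.zeros((n, k * n_lags))
        for lag in range(n_lags):
            X[lag:, lag * k:(lag + 1) * k] = stimulus[:n - lag]
        return X

    def fit_trf(stimulus, eeg, n_lags=32, ridge=1.0):
        """Ridge-regression mapping from stimulus features to one EEG channel."""
        X = lagged_design(stimulus, n_lags)
        w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ eeg)
        predicted = X @ w
        r = np.corrcoef(predicted, eeg)[0, 1]
        return w, r

    rng = np.random.default_rng(1)
    envelope = rng.random((2000, 1))               # stand-in for an E / M / V feature
    eeg = np.convolve(envelope[:, 0], rng.normal(size=16), mode="same")
    eeg += rng.normal(scale=0.5, size=eeg.size)    # additive noise
    _, r = fit_trf(envelope, eeg)
    print(f"prediction accuracy r = {r:.2f}")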
A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Kwan-Liu
Most of today’s visualization libraries and applications are based on what is known today as the visualization pipeline. In the visualization pipeline model, algorithms are encapsulated as “filtering” components with inputs and outputs. These components can be combined by connecting the outputs of one filter to the inputs of another filter. The visualization pipeline model is popular because it provides a convenient abstraction that allows users to combine algorithms in powerful ways. Unfortunately, the visualization pipeline cannot run effectively on exascale computers. Experts agree that the exascale machine will comprise processors that contain many cores. Furthermore, physical limitations will prevent data movement in and out of the chip (that is, between main memory and the processing cores) from keeping pace with improvements in overall compute performance. To use these processors to their fullest capability, it is essential to carefully consider memory access. This is where the visualization pipeline fails. Each filtering component in the visualization library is expected to take a data set in its entirety, perform some computation across all of the elements, and output the complete results. The process of iterating over all elements must be repeated in each filter, which is one of the worst possible ways to traverse memory when trying to maximize the number of executions per memory access. This project investigates a new type of visualization framework that exhibits the pervasive parallelism necessary to run on exascale machines. Our framework achieves this by defining algorithms in terms of functors, which are localized, stateless operations. Functors can be composited in much the same way as filters in the visualization pipeline. But the design of functors allows them to run concurrently on massive numbers of lightweight threads. Only with such fine-grained parallelism can we hope to fill the billions of threads we expect will be necessary for efficient computation on an exascale computer. This project concludes with a functional prototype containing pervasively parallel algorithms that perform demonstrably well on many-core processors. These algorithms are fundamental for performing data analysis and visualization at extreme scale.
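The contrast between pipeline filters and composable functors can be made concrete with a tiny sketch: each functor below is a stateless per-element operation, and their composition is again a per-element operation, so a many-core back end could execute it as one lightweight thread per element without materializing intermediate arrays. Python is used purely to illustrate the semantics; the names and operations are invented and carry none of the project's actual kernels.

    # Two stateless "functors": each maps a single value to a new value, with no
    # internal state and no knowledge of the rest of the data set.
    def magnitude(x):
        return abs(x)

    def clamp01(x):
        return min(max(x, 0.0), 1.0)

    def compose(*functors):
        """Fuse functors into one per-element operation; a data-parallel back end
        could launch the fused kernel as one lightweight thread per element."""
        def fused(x):
            for f in functors:
                x = f(x)
            return x
        return fused

    fused = compose(magnitude, clamp01)
    field = [-1.5, -0.25, 0.4, 2.0]           # stand-in for a large field of values
    print([fused(x) for x in field])          # one traversal, no intermediate arrays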
ERIC Educational Resources Information Center
Nobre, Alexandre de Pontes; de Salles, Jerusa Fumagalli
2016-01-01
The aim of this study was to investigate relations between lexical-semantic processing and two components of reading: visual word recognition and reading comprehension. Sixty-eight children from private schools in Porto Alegre, Brazil, from 7 to 12 years, were evaluated. Reading was assessed with a word/nonword reading task and a reading…
Drummond, Sean P A; Anderson, Dane E; Straus, Laura D; Vogel, Edward K; Perez, Veronica B
2012-01-01
Sleep deprivation has adverse consequences for a variety of cognitive functions. The exact effects of sleep deprivation, though, are dependent upon the cognitive process examined. Within working memory, for example, some component processes are more vulnerable to sleep deprivation than others. Additionally, the differential impacts on cognition of different types of sleep deprivation have not been well studied. The aim of this study was to examine the effects of one night of total sleep deprivation and 4 nights of partial sleep deprivation (4 hours in bed/night) on two components of visual working memory: capacity and filtering efficiency. Forty-four healthy young adults were randomly assigned to one of the two sleep deprivation conditions. All participants were studied: 1) in a well-rested condition (following 6 nights of 9 hours in bed/night); and 2) following sleep deprivation, in a counter-balanced order. Visual working memory testing consisted of two related tasks. The first measured visual working memory capacity and the second measured the ability to ignore distractor stimuli in a visual scene (filtering efficiency). Results showed neither type of sleep deprivation reduced visual working memory capacity. Partial sleep deprivation also generally did not change filtering efficiency. Total sleep deprivation, on the other hand, did impair performance in the filtering task. These results suggest components of visual working memory are differentially vulnerable to the effects of sleep deprivation, and different types of sleep deprivation impact visual working memory to different degrees. Such findings have implications for operational settings where individuals may need to perform with inadequate sleep and whose jobs involve receiving an array of visual information and discriminating the relevant from the irrelevant prior to making decisions or taking actions (e.g., baggage screeners, air traffic controllers, military personnel, health care providers).
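The abstract does not spell out how capacity and filtering efficiency were computed. One widely used capacity estimate for change-detection designs is Cowan's K, and a simple behavioral filtering index compares capacity with and without distractors; the sketch below shows those conventions with invented numbers and should not be read as the study's actual measures.

    def cowan_k(hit_rate, false_alarm_rate, set_size):
        """Cowan's K estimate of visual working memory capacity for a
        change-detection task with `set_size` relevant items."""
        return set_size * (hit_rate - false_alarm_rate)

    # Hypothetical rates (not values from the study).
    k_targets_only = cowan_k(hit_rate=0.85, false_alarm_rate=0.10, set_size=4)
    k_with_distractors = cowan_k(hit_rate=0.78, false_alarm_rate=0.15, set_size=4)

    # One simple behavioral filtering index: how much of the distractor-free
    # capacity survives when irrelevant items must be ignored.
    filtering_efficiency = k_with_distractors / k_targets_only
    print(k_targets_only, k_with_distractors, round(filtering_efficiency, 2))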
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dowson, Scott T.; Bruce, Joseph R.; Best, Daniel M.
2009-04-14
This paper presents key components of the Law Enforcement Information Framework (LEIF) that provides communications, situational awareness, and visual analytics tools in a service-oriented architecture supporting web-based desktop and handheld device users. LEIF simplifies interfaces and visualizations of well-established visual analytical techniques to improve usability. Advanced analytics capability is maintained by enhancing the underlying processing to support the new interface. LEIF development is driven by real-world user feedback gathered through deployments at three operational law enforcement organizations in the US. LEIF incorporates a robust information ingest pipeline supporting a wide variety of information formats. LEIF also insulates interface and analytical components from information sources, making it easier to adapt the framework for many different data repositories.
Temporal resolution of orientation-defined texture segregation: a VEP study.
Lachapelle, Julie; McKerral, Michelle; Jauffret, Colin; Bach, Michael
2008-09-01
Orientation is one of the visual dimensions that subserve figure-ground discrimination. A spatial gradient in orientation leads to "texture segregation", which is thought to reflect concurrent parallel processing across the visual field, without scanning. In the visual-evoked potential (VEP), a component related to texture segregation ("tsVEP") can be isolated. Our objective was to evaluate the temporal frequency dependence of the tsVEP in order to compare the processing speed of texture segregation with that of low-level features (e.g., orientation; the corresponding response is here denoted llVEP), addressing a recent controversy in the literature. Visual-evoked potentials (VEPs) were recorded in seven normal adults. Oriented line segments of 0.1 degrees x 0.8 degrees at 100% contrast were presented in four different arrangements: either oriented in parallel for two homogeneous stimuli (from which the low-level VEP (llVEP) was obtained) or with a 90 degrees orientation gradient for two textured ones (from which the texture VEP was obtained). The orientation texture condition was presented at eight different temporal frequencies ranging from 7.5 to 45 Hz. Fourier analysis was used to isolate low-level components at the pattern-change frequency and texture-segregation components at half that frequency. For all subjects, the high-cutoff frequency was lower for the tsVEP than for the llVEP, on average 12 Hz vs. 17 Hz (P = 0.017). The results suggest that the processing of feature gradients to extract texture segregation requires additional processing time, resulting in a lower fusion frequency.
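A rough sketch of the Fourier step described above, isolating responses at the pattern-change frequency and at half that frequency. All parameters here, including the sampling rate, pattern-change frequency, and the synthetic trace itself, are assumptions for illustration, not values from the study.

```python
import numpy as np

fs = 1000.0                       # assumed sampling rate (Hz)
pattern_freq = 15.0               # assumed pattern-change frequency (Hz)
t = np.arange(0, 2.0, 1.0 / fs)   # 2 s of signal

# Synthetic VEP-like trace: a low-level response at the pattern-change frequency
# plus a texture-segregation response at half that frequency, plus noise.
vep = (1.0 * np.sin(2 * np.pi * pattern_freq * t)
       + 0.5 * np.sin(2 * np.pi * (pattern_freq / 2) * t)
       + 0.2 * np.random.randn(t.size))

spectrum = np.fft.rfft(vep)
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

def amplitude_at(f):
    idx = np.argmin(np.abs(freqs - f))
    return 2.0 * np.abs(spectrum[idx]) / t.size

print("llVEP amplitude :", amplitude_at(pattern_freq))       # low-level component
print("tsVEP amplitude :", amplitude_at(pattern_freq / 2))   # texture-segregation component
```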
Attention distributed across sensory modalities enhances perceptual performance
Mishra, Jyoti; Gazzaley, Adam
2012-01-01
This study investigated the interaction between top-down attentional control and multisensory processing in humans. Using semantically congruent and incongruent audiovisual stimulus streams, we found target detection to be consistently improved in the setting of distributed audiovisual attention versus focused visual attention. This performance benefit was manifested as faster reaction times for congruent audiovisual stimuli, and as accuracy improvements for incongruent stimuli, resulting in a resolution of stimulus interference. Electrophysiological recordings revealed that these behavioral enhancements were associated with reduced neural processing of both auditory and visual components of the audiovisual stimuli under distributed vs. focused visual attention. These neural changes were observed at early processing latencies, within 100–300 ms post-stimulus onset, and localized to auditory, visual, and polysensory temporal cortices. These results highlight a novel neural mechanism for top-down driven performance benefits via enhanced efficacy of sensory neural processing during distributed audiovisual attention relative to focused visual attention. PMID:22933811
Frontal–Occipital Connectivity During Visual Search
Pantazatos, Spiro P.; Yanagihara, Ted K.; Zhang, Xian; Meitzler, Thomas
2012-01-01
Although expectation- and attention-related interactions between ventral and medial prefrontal cortex and stimulus category-selective visual regions have been identified during visual detection and discrimination, it is not known if similar neural mechanisms apply to other tasks such as visual search. The current work tested the hypothesis that high-level frontal regions, previously implicated in expectation and visual imagery of object categories, interact with visual regions associated with object recognition during visual search. Using functional magnetic resonance imaging, subjects searched for a specific object that varied in size and location within a complex natural scene. A model-free, spatial-independent component analysis isolated multiple task-related components, one of which included visual cortex, as well as a cluster within ventromedial prefrontal cortex (vmPFC), consistent with the engagement of both top-down and bottom-up processes. Analyses of psychophysiological interactions showed increased functional connectivity between vmPFC and object-sensitive lateral occipital cortex (LOC), and results from dynamic causal modeling and Bayesian Model Selection suggested bidirectional connections between vmPFC and LOC that were positively modulated by the task. Using image-guided diffusion-tensor imaging, functionally seeded, probabilistic white-matter tracts between vmPFC and LOC, which presumably underlie this effective interconnectivity, were also observed. These connectivity findings extend previous models of visual search processes to include specific frontal–occipital neuronal interactions during a natural and complex search task. PMID:22708993
Examining the cognitive demands of analogy instructions compared to explicit instructions.
Tse, Choi Yeung Andy; Wong, Andus; Whitehill, Tara; Ma, Estella; Masters, Rich
2016-10-01
In many learning domains, instructions are presented explicitly despite high cognitive demands associated with their processing. This study examined cognitive demands imposed on working memory by different types of instruction to speak with maximum pitch variation: visual analogy, verbal analogy and explicit verbal instruction. Forty participants were asked to memorise a set of 16 visual and verbal stimuli while reading aloud a Cantonese paragraph with maximum pitch variation. Instructions about how to achieve maximum pitch variation were presented via visual analogy, verbal analogy, explicit rules or no instruction. Pitch variation was assessed off-line, using standard deviation of fundamental frequency. Immediately after reading, participants recalled as many stimuli as possible. Analogy instructions resulted in significantly increased pitch variation compared to explicit instructions or no instructions. Explicit instructions resulted in poorest recall of stimuli. Visual analogy instructions resulted in significantly poorer recall of visual stimuli than verbal stimuli. The findings suggest that non-propositional instructions presented via analogy may be less cognitively demanding than instructions that are presented explicitly. Processing analogy instructions that are presented as a visual representation is likely to load primarily visuospatial components of working memory rather than phonological components. The findings are discussed with reference to speech therapy and human cognition.
Porcu, Emanuele; Keitel, Christian; Müller, Matthias M
2013-11-27
We investigated effects of inter-modal attention on concurrent visual and tactile stimulus processing by means of stimulus-driven oscillatory brain responses, so-called steady-state evoked potentials (SSEPs). To this end, we frequency-tagged a visual (7.5Hz) and a tactile stimulus (20Hz) and participants were cued, on a trial-by-trial basis, to attend to either vision or touch to perform a detection task in the cued modality. SSEPs driven by the stimulation comprised stimulus frequency-following (i.e. fundamental frequency) as well as frequency-doubling (i.e. second harmonic) responses. We observed that inter-modal attention to vision increased amplitude and phase synchrony of the fundamental frequency component of the visual SSEP while the second harmonic component showed an increase in phase synchrony, only. In contrast, inter-modal attention to touch increased SSEP amplitude of the second harmonic but not of the fundamental frequency, while leaving phase synchrony unaffected in both responses. Our results show that inter-modal attention generally influences concurrent stimulus processing in vision and touch, thus, extending earlier audio-visual findings to a visuo-tactile stimulus situation. The pattern of results, however, suggests differences in the neural implementation of inter-modal attentional influences on visual vs. tactile stimulus processing. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
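A minimal Python sketch of the two SSEP measures used here, amplitude and inter-trial phase synchrony at a tagging frequency and its second harmonic. The 7.5 Hz visual tag follows the abstract; the recording parameters and the single-channel data are invented.

```python
import numpy as np

fs, dur, n_trials = 500.0, 2.0, 40            # assumed recording parameters
t = np.arange(0, dur, 1.0 / fs)
f_visual = 7.5                                 # visual tagging frequency (abstract)

# Invented trials containing a fundamental and a second-harmonic response plus noise.
trials = (np.sin(2 * np.pi * f_visual * t + 0.2)
          + 0.4 * np.sin(2 * np.pi * 2 * f_visual * t)
          + 0.5 * np.random.randn(n_trials, t.size))

freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
spectra = np.fft.rfft(trials, axis=1)

def ssep_measures(f):
    idx = np.argmin(np.abs(freqs - f))
    coeffs = spectra[:, idx]
    amplitude = 2.0 * np.abs(coeffs).mean() / t.size
    # Inter-trial phase coherence: length of the mean unit phase vector (0..1).
    phase_sync = np.abs(np.exp(1j * np.angle(coeffs)).mean())
    return amplitude, phase_sync

for f in (f_visual, 2 * f_visual):
    amp, itc = ssep_measures(f)
    print(f"{f:5.1f} Hz  amplitude={amp:.3f}  phase synchrony={itc:.3f}")
```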
Schindler, Sebastian; Kissler, Johanna
2016-10-01
Human brains spontaneously differentiate between various emotional and neutral stimuli, including written words whose emotional quality is symbolic. In the electroencephalogram (EEG), emotional-neutral processing differences are typically reflected in the early posterior negativity (EPN, 200-300 ms) and the late positive potential (LPP, 400-700 ms). These components are also enlarged by task-driven visual attention, supporting the assumption that emotional content naturally drives attention. Still, the spatio-temporal dynamics of interactions between emotional stimulus content and task-driven attention remain to be specified. Here, we examine this issue in visual word processing. Participants attended to negative, neutral, or positive nouns while high-density EEG was recorded. Emotional content and top-down attention both amplified the EPN component in parallel. On the LPP, by contrast, emotion and attention interacted: Explicit attention to emotional words led to a substantially larger amplitude increase than did explicit attention to neutral words. Source analysis revealed early parallel effects of emotion and attention in bilateral visual cortex and a later interaction of both in right visual cortex. Distinct effects of attention were found in inferior, middle and superior frontal, paracentral, and parietal areas, as well as in the anterior cingulate cortex (ACC). Results specify separate and shared mechanisms of emotion and attention at distinct processing stages. Hum Brain Mapp 37:3575-3587, 2016. © 2016 Wiley Periodicals, Inc.
Tanaka, Hideaki
2016-01-01
Cosmetic makeup significantly influences facial perception. Because faces consist of similar physical structures, cosmetic makeup is typically used to highlight individual features, particularly those of the eyes (i.e., eye shadow) and mouth (i.e., lipstick). Though event-related potentials have been utilized to study various aspects of facial processing, the influence of cosmetics on specific ERP components remains unclear. The present study aimed to investigate the relationship between the application of cosmetic makeup and the amplitudes of the P1 and N170 event-related potential components during facial perception tasks. Moreover, the influence of visual perception on N170 amplitude was evaluated under three makeup conditions: Eye Shadow, Lipstick, and No Makeup. Electroencephalography was used to monitor 17 participants who were exposed to visual stimuli under each of these three makeup conditions. The results of the present study subsequently demonstrated that the Lipstick condition elicited a significantly greater N170 amplitude than the No Makeup condition, while P1 amplitude was unaffected by any of the conditions. Such findings indicate that the application of cosmetic makeup alters general facial perception but exerts no influence on the perception of low-level visual features. Collectively, these results support the notion that the application of makeup induces subtle alterations in the processing of facial stimuli, with a particular effect on the processing of specific facial components (i.e., the mouth), as reflected by changes in N170 amplitude. PMID:27656161
Effects of symbol type and numerical distance on the human event-related potential.
Jiang, Ting; Qiao, Sibing; Li, Jin; Cao, Zhongyu; Gao, Xuefei; Song, Yan; Xue, Gui; Dong, Qi; Chen, Chuansheng
2010-01-01
This study investigated the influence of the symbol type and numerical distance of numbers on the amplitudes and peak latencies of event-related potentials (ERPs). Our aim was to (1) determine the point in time of magnitude information access in visual number processing; and (2) identify at what stage the advantage of Arabic digits over Chinese verbal numbers occurs. ERPs were recorded from 64 scalp sites while subjects (n=26) performed a classification task. Results showed that larger VPP amplitudes over centro-frontal sites were elicited by numbers in the distance-close condition than in the distance-far condition. Furthermore, the VPP latency varied as a function of the symbol type, but the N170 did not. Such results demonstrate that magnitude information access takes place as early as 150 ms after onset of visual number stimuli and that the advantage of Arabic digits over verbal numbers should be localized to the VPP component. We establish the VPP component as a critical ERP component to report in studies of numerical cognition, and our results call into question the N170/VPP association hypothesis and the serial-stage model of visual number comparison processing.
Zachau, Swantje; Korpilahti, Pirjo; Hämäläinen, Jarmo A; Ervast, Leena; Heinänen, Kaisu; Suominen, Kalervo; Lehtihalmes, Matti; Leppänen, Paavo H T
2014-07-01
We explored semantic integration mechanisms in native and non-native hearing users of sign language and non-signing controls. Event-related brain potentials (ERPs) were recorded while participants performed a semantic decision task for priming lexeme pairs. Pairs were presented either within speech or across speech and sign language. Target-related ERP responses were subjected to principal component analyses (PCA), and the neurocognitive basis of semantic integration processes was assessed by analyzing the N400 and the late positive complex (LPC) components in response to spoken (auditory) and signed (visual) antonymic and unrelated targets. Semantically related effects triggered across modalities would indicate a tight interconnection between the signers' two languages similar to that described for spoken language bilinguals. Remarkable structural similarity of the N400 and LPC components, with varying group differences between the spoken and signed targets, was found. The LPC was the dominant response. The controls' LPC differed from the LPC of the two signing groups: it was reduced to the auditory unrelated targets and was less frontal for all the visual targets. The visual LPC was more broadly distributed in native than non-native signers and was left-lateralized for the unrelated targets in the native hearing signers only. Semantic priming effects were found for the auditory N400 in all groups, but only native hearing signers revealed a clear N400 effect to the visual targets. Surprisingly, the non-native signers revealed no semantically related processing effect to the visual targets in either the N400 or the LPC; instead they appeared to rely more on visual post-lexical analyzing stages than native signers. We conclude that native and non-native signers employed different processing strategies to integrate signed and spoken semantic content. It appeared that the signers' semantic processing system was affected by group-specific factors like language background and/or usage. Copyright © 2014 Elsevier Ltd. All rights reserved.
Tsotsos, John K.
2017-01-01
Much has been written about how the biological brain might represent and process visual information, and how this might inspire and inform machine vision systems. Indeed, tremendous progress has been made, especially during the last decade in the latter area. However, a key question seems too often, if not mostly, to be ignored. This question is simply: do proposed solutions scale with the reality of the brain's resources? This scaling question applies equally to brain and to machine solutions. A number of papers have examined the inherent computational difficulty of visual information processing using theoretical and empirical methods. The main goal of this activity had three components: to understand the deep nature of the computational problem of visual information processing; to discover how well the computational difficulty of vision matches the fixed resources of biological seeing systems; and to abstract from the matching exercise the key principles that lead to the observed characteristics of biological visual performance. This set of components was termed complexity level analysis in Tsotsos (1987) and was proposed as an important complement to Marr's three levels of analysis. This paper revisits that work with the advantage that decades of hindsight can provide. PMID:28848458
Visual form predictions facilitate auditory processing at the N1.
Paris, Tim; Kim, Jeesun; Davis, Chris
2017-02-20
Auditory-visual (AV) events often involve a leading visual cue (e.g. auditory-visual speech) that allows the perceiver to generate predictions about the upcoming auditory event. Electrophysiological evidence suggests that when an auditory event is predicted, processing is sped up, i.e., the N1 component of the ERP occurs earlier (N1 facilitation). However, it is not clear (1) whether N1 facilitation is based specifically on predictive rather than multisensory integration and (2) which particular properties of the visual cue it is based on. The current experiment used artificial AV stimuli in which visual cues predicted but did not co-occur with auditory cues. Visual form cues (high and low salience) and the auditory-visual pairing were manipulated so that auditory predictions could be based on form and timing or on timing only. The results showed that N1 facilitation occurred only for combined form and temporal predictions. These results suggest that faster auditory processing (as indicated by N1 facilitation) is based on predictive processing generated by a visual cue that clearly predicts both what and when the auditory stimulus will occur. Copyright © 2016. Published by Elsevier Ltd.
Joo, Sung Jun; White, Alex L; Strodtman, Douglas J; Yeatman, Jason D
2018-06-01
Reading is a complex process that involves low-level visual processing, phonological processing, and higher-level semantic processing. Given that skilled reading requires integrating information among these different systems, it is likely that reading difficulty (known as dyslexia) can emerge from impairments at any stage of the reading circuitry. To understand contributing factors to reading difficulties within individuals, it is necessary to diagnose the function of each component of the reading circuitry. Here, we investigated whether adults with dyslexia who have impairments in visual processing respond to a visual manipulation specifically targeting their impairment. We collected psychophysical measures of visual crowding and tested how each individual's reading performance was affected by increased text spacing, a manipulation designed to alleviate severe crowding. Critically, we identified a sub-group of individuals with dyslexia showing elevated crowding and found that these individuals read faster when text was rendered with increased letter-, word- and line-spacing. Our findings point to a subtype of dyslexia involving elevated crowding and demonstrate that individuals benefit from interventions personalized to their specific impairments. Copyright © 2018 Elsevier Ltd. All rights reserved.
Proof of concept for using unmanned aerial vehicles for high mast pole and bridge inspections.
DOT National Transportation Integrated Search
2015-06-01
Bridges and high mast luminaires (HMLs) are key components of transportation infrastructures. Effective inspection processes are crucial to maintain the structural integrity of these components. The most common approach for inspections is visual ...
Stevens, W Dale; Kahn, Itamar; Wig, Gagan S; Schacter, Daniel L
2012-08-01
Asymmetrical specialization of cognitive processes across the cerebral hemispheres is a hallmark of healthy brain development and an important evolutionary trait underlying higher cognition in humans. While previous research, including studies of priming, divided visual field presentation, and split-brain patients, demonstrates a general pattern of right/left asymmetry of form-specific versus form-abstract visual processing, little is known about brain organization underlying this dissociation. Here, using repetition priming of complex visual scenes and high-resolution functional magnetic resonance imaging (MRI), we demonstrate asymmetrical form specificity of visual processing between the right and left hemispheres within a region known to be critical for processing of visual spatial scenes (parahippocampal place area [PPA]). Next, we use resting-state functional connectivity MRI analyses to demonstrate that this functional asymmetry is associated with differential intrinsic activity correlations of the right versus left PPA with regions critically involved in perceptual versus conceptual processing, respectively. Our results demonstrate that the PPA comprises lateralized subregions across the cerebral hemispheres that are engaged in functionally dissociable yet complementary components of visual scene analysis. Furthermore, this functional asymmetry is associated with differential intrinsic functional connectivity of the PPA with distinct brain areas known to mediate dissociable cognitive processes.
Componential distribution analysis of food using near infrared ray image
NASA Astrophysics Data System (ADS)
Yamauchi, Hiroki; Kato, Kunihito; Yamamoto, Kazuhiko; Ogawa, Noriko; Ohba, Kimie
2008-11-01
The components of a food related to its "deliciousness" are usually evaluated by componential analysis. The component content and the types of components in the food are determined by this analysis. However, componential analysis cannot resolve where components are distributed within the sample, and the measurement is time consuming. We propose a method to measure the two-dimensional distribution of a component in food using a near infrared (IR) image. The advantage of our method is the ability to visualize otherwise invisible components. Many components in food have characteristics such as absorption and reflection of light in the IR range. The component content is measured using subtraction between images taken at two wavelengths of near IR light. In this paper, we describe a method to measure the components of food using near IR image processing, and we show an application that visualizes saccharose in a pumpkin.
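A minimal sketch of the two-wavelength subtraction idea, with synthetic arrays standing in for the near-IR frames. The wavelength choice, image size, and threshold are illustrative assumptions, not the paper's actual values.

```python
import numpy as np

# Synthetic stand-ins for two near-IR images of the same food sample:
# one at a reference wavelength and one at a wavelength absorbed by the component.
rng = np.random.default_rng(1)
reference_band = rng.uniform(0.6, 0.9, size=(64, 64))       # little absorption
absorption_band = reference_band.copy()
absorption_band[20:40, 20:40] -= 0.3                         # component absorbs here

# Component distribution map: subtraction between the two wavelength images.
component_map = reference_band - absorption_band

# Simple visualization of the (otherwise invisible) distribution as a binary mask.
mask = component_map > 0.1
print("estimated component area (pixels):", int(mask.sum()))
```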
Qiao, Hong; Li, Yinlin; Li, Fengfu; Xi, Xuanyang; Wu, Wei
2016-10-01
Recently, many biologically inspired visual computational models have been proposed. The design of these models follows the related biological mechanisms and structures, and these models provide new solutions for visual recognition tasks. In this paper, based on recent biological evidence, we propose a framework to mimic the active and dynamic learning and recognition process of the primate visual cortex. From the standpoint of principles, the main contributions are that the framework can achieve unsupervised learning of episodic features (including key components and their spatial relations) and semantic features (semantic descriptions of the key components), which support higher-level cognition of an object. From the standpoint of performance, the advantages of the framework are as follows: 1) learning episodic features without supervision: for a class of objects without prior knowledge, the key components, their spatial relations and cover regions can be learned automatically through a deep neural network (DNN); 2) learning semantic features based on episodic features: within the cover regions of the key components, the semantic geometrical values of these components can be computed based on contour detection; 3) forming the general knowledge of a class of objects: the general knowledge of a class of objects can be formed, mainly including the key components, their spatial relations and average semantic values, which is a concise description of the class; and 4) achieving higher-level cognition and dynamic updating: for a test image, the model can achieve classification and subclass semantic descriptions, and test samples with high confidence are selected to dynamically update the whole model. Experiments are conducted on face images, and good performance is achieved in each layer of the DNN and in the semantic description learning process. Furthermore, the model can be generalized to recognition tasks of other objects with learning ability.
Mental visualization of objects from cross-sectional images
Wu, Bing; Klatzky, Roberta L.; Stetten, George D.
2011-01-01
We extended the classic anorthoscopic viewing procedure to test a model of visualization of 3D structures from 2D cross-sections. Four experiments were conducted to examine key processes described in the model, localizing cross-sections within a common frame of reference and spatiotemporal integration of cross sections into a hierarchical object representation. Participants used a hand-held device to reveal a hidden object as a sequence of cross-sectional images. The process of localization was manipulated by contrasting two displays, in-situ vs. ex-situ, which differed in whether cross sections were presented at their source locations or displaced to a remote screen. The process of integration was manipulated by varying the structural complexity of target objects and their components. Experiments 1 and 2 demonstrated visualization of 2D and 3D line-segment objects and verified predictions about display and complexity effects. In Experiments 3 and 4, the visualized forms were familiar letters and numbers. Errors and orientation effects showed that displacing cross-sectional images to a remote display (ex-situ viewing) impeded the ability to determine spatial relationships among pattern components, a failure of integration at the object level. PMID:22217386
Touch to see: neuropsychological evidence of a sensory mirror system for touch.
Bolognini, Nadia; Olgiati, Elena; Xaiz, Annalisa; Posteraro, Lucio; Ferraro, Francesco; Maravita, Angelo
2012-09-01
The observation of touch can be grounded in the activation of brain areas underpinning direct tactile experience, namely the somatosensory cortices. What is the behavioral impact of such mirror sensory activity on visual perception? To address this issue, we investigated the causal interplay between observed and felt touch in right brain-damaged patients, as a function of their damaged visual and/or tactile modalities. Patients and healthy controls underwent a detection task comprising visual stimuli that either depicted touches or lacked a tactile component. Touch and No-touch stimuli were presented in egocentric or allocentric perspectives. Seeing touch, regardless of the viewing perspective, affects visual perception differently depending on which sensory modality is damaged: in patients with a selective visual deficit, but without any tactile defect, the sight of touch improves the visual impairment; this effect is associated with a lesion to the supramarginal gyrus. In patients with a tactile deficit, but intact visual perception, the sight of touch disrupts visual processing, inducing a visual extinction-like phenomenon. This disruptive effect is associated with damage to the postcentral gyrus. Hence, damage to the somatosensory system can lead to dysfunctional visual processing, and intact somatosensory processing can aid visual perception.
Electrophysiological evidence for parts and wholes in visual face memory.
Towler, John; Eimer, Martin
2016-10-01
It is often assumed that upright faces are represented in a holistic fashion, while representations of inverted faces are essentially part-based. To assess this hypothesis, we recorded event-related potentials (ERPs) during a sequential face identity matching task where successively presented pairs of upright or inverted faces were either identical or differed with respect to their internal features, their external features, or both. Participants' task was to report on each trial whether the face pair was identical or different. To track the activation of visual face memory representations, we measured N250r components that emerge over posterior face-selective regions during the activation of visual face memory representations by a successful identity match. N250r components to full identity repetitions were smaller and emerged later for inverted as compared to upright faces, demonstrating that image inversion impairs face identity matching processes. For upright faces, N250r components were also elicited by partial repetitions of external or internal features, which suggest that the underlying identity matching processes are not exclusively based on non-decomposable holistic representations. However, the N250r to full identity repetitions was super-additive (i.e., larger than the sum of the two N250r components to partial repetitions of external or internal features) for upright faces, demonstrating that holistic representations were involved in identity matching processes. For inverted faces, N250r components to full and partial identity repetitions were strictly additive, indicating that the identity matching of external and internal features operated in an entirely part-based fashion. These results provide new electrophysiological evidence for qualitative differences between representations of upright and inverted faces in the occipital-temporal face processing system. Copyright © 2016 Elsevier Ltd. All rights reserved.
Information Processing in Auditory-Visual Conflict.
ERIC Educational Resources Information Center
Henker, Barbara A.; Whalen, Carol K.
1972-01-01
The present study used a set of bimodal (auditory-visual) conflict designed specifically for the preschool child. The basic component was a match-to-sample sequence designed to reduce the often-found contaminating factors in studies with young children: failure to understand or remember instructions, inability to perform the indicator response, or…
The Contribution of Visualization to Learning Computer Architecture
ERIC Educational Resources Information Center
Yehezkel, Cecile; Ben-Ari, Mordechai; Dreyfus, Tommy
2007-01-01
This paper describes a visualization environment and associated learning activities designed to improve learning of computer architecture. The environment, EasyCPU, displays a model of the components of a computer and the dynamic processes involved in program execution. We present the results of a research program that analysed the contribution of…
Yang, Yan-Li; Deng, Hong-Xia; Xing, Gui-Yang; Xia, Xiao-Luan; Li, Hai-Fang
2015-02-01
It is not clear whether the method used in functional brain-network related research can be applied to explore the feature binding mechanism of visual perception. In this study, we investigated feature binding of color and shape in visual perception. Functional magnetic resonance imaging data were collected from 38 healthy volunteers at rest and while performing a visual perception task to construct brain networks active during resting and task states. Results showed that brain regions involved in visual information processing were obviously activated during the task. The components were partitioned using a greedy algorithm, indicating the visual network existed during the resting state. Z-values in the vision-related brain regions were calculated, confirming the dynamic balance of the brain network. Connectivity between brain regions was determined, and the result showed that occipital and lingual gyri were stable brain regions in the visual system network, the parietal lobe played a very important role in the binding process of color features and shape features, and the fusiform and inferior temporal gyri were crucial for processing color and shape information. Experimental findings indicate that understanding visual feature binding and cognitive processes will help establish computational models of vision, improve image recognition technology, and provide a new theoretical mechanism for feature binding in visual perception.
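One way to partition a thresholded functional network into components with a greedy, modularity-based algorithm is sketched below using networkx on a toy correlation matrix. The threshold, node names, and data are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy "functional connectivity" matrix between a handful of regions.
regions = ["occipital", "lingual", "fusiform", "inf_temporal", "parietal", "frontal"]
rng = np.random.default_rng(2)
corr = rng.uniform(0.0, 1.0, size=(len(regions), len(regions)))
corr = (corr + corr.T) / 2            # make it symmetric
np.fill_diagonal(corr, 1.0)

# Build a graph keeping only edges above an assumed correlation threshold.
G = nx.Graph()
G.add_nodes_from(regions)
threshold = 0.5
for i in range(len(regions)):
    for j in range(i + 1, len(regions)):
        if corr[i, j] > threshold:
            G.add_edge(regions[i], regions[j], weight=corr[i, j])

# Greedy modularity maximization partitions the network into communities.
for k, community in enumerate(greedy_modularity_communities(G, weight="weight")):
    print(f"community {k}: {sorted(community)}")
```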
Pavan, Andrea; Ghin, Filippo; Donato, Rita; Campana, Gianluca; Mather, George
2017-08-15
A long-held view of the visual system is that form and motion are independently analysed. However, there is physiological and psychophysical evidence of early interaction in the processing of form and motion. In this study, we used a combination of Glass patterns (GPs) and repetitive Transcranial Magnetic Stimulation (rTMS) to investigate in human observers the neural mechanisms underlying form-motion integration. GPs consist of randomly distributed dot pairs (dipoles) that induce the percept of an oriented stimulus. GPs can be either static or dynamic. Dynamic GPs have both a form component (i.e., orientation) and a non-directional motion component along the orientation axis. GPs were presented in two temporal intervals and observers were asked to discriminate the temporal interval containing the most coherent GP. rTMS was delivered over early visual areas (V1/V2) and over area V5/MT shortly after the presentation of the GP in each interval. The results showed that rTMS applied over early visual areas affected the perception of static GPs, but the stimulation of area V5/MT did not affect observers' performance. On the other hand, rTMS delivered over either V1/V2 or V5/MT strongly impaired the perception of dynamic GPs. These results suggest that early visual areas seem to be involved in the processing of the spatial structure of GPs, and interfering with the extraction of the global spatial structure also affects the extraction of the motion component, possibly interfering with early form-motion integration. However, visual area V5/MT is likely to be involved only in the processing of the motion component of dynamic GPs. These results suggest that motion and form cues may interact as early as V1/V2. Copyright © 2017 Elsevier Inc. All rights reserved.
Evidence for an attentional component of inhibition of return in visual search.
Pierce, Allison M; Crouse, Monique D; Green, Jessica J
2017-11-01
Inhibition of return (IOR) is typically described as an inhibitory bias against returning attention to a recently attended location as a means of promoting efficient visual search. Most studies examining IOR, however, either do not use visual search paradigms or do not effectively isolate attentional processes, making it difficult to conclusively link IOR to a bias in attention. Here, we recorded ERPs during a simple visual search task designed to isolate the attentional component of IOR to examine whether an inhibitory bias of attention is observed and, if so, how it influences visual search behavior. Across successive visual search displays, we found evidence of both a broad, hemisphere-wide inhibitory bias of attention along with a focal, target location-specific facilitation. When the target appeared in the same visual hemifield in successive searches, responses were slower and the N2pc component was reduced, reflecting a bias of attention away from the previously attended side of space. When the target occurred at the same location in successive searches, responses were facilitated and the P1 component was enhanced, likely reflecting spatial priming of the target. These two effects are combined in the response times, leading to a reduction in the IOR effect for repeated target locations. Using ERPs, however, these two opposing effects can be isolated in time, demonstrating that the inhibitory biasing of attention still occurs even when response-time slowing is ameliorated by spatial priming. © 2017 Society for Psychophysiological Research.
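The N2pc referred to here is conventionally quantified as a contralateral-minus-ipsilateral difference wave at posterior electrodes. A minimal sketch with invented data follows; the electrode names, time window, and values are assumptions, not the study's recordings.

```python
import numpy as np

fs = 500.0                                    # assumed sampling rate (Hz)
times = np.arange(-0.1, 0.5, 1.0 / fs)        # epoch from -100 to 500 ms

# Invented averaged waveforms at left/right posterior sites (e.g. PO7/PO8)
# for targets presented in the left and right visual fields.
rng = np.random.default_rng(0)
po7_target_right = rng.normal(0, 0.2, times.size)   # contralateral to right target
po8_target_right = rng.normal(0, 0.2, times.size)   # ipsilateral
po8_target_left  = rng.normal(0, 0.2, times.size)   # contralateral to left target
po7_target_left  = rng.normal(0, 0.2, times.size)   # ipsilateral

contra = (po7_target_right + po8_target_left) / 2
ipsi   = (po8_target_right + po7_target_left) / 2
n2pc_wave = contra - ipsi

# Mean amplitude in an assumed 200-300 ms window, a common N2pc measure.
win = (times >= 0.2) & (times <= 0.3)
print("N2pc mean amplitude (µV):", n2pc_wave[win].mean())
```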
Intermodal Attention Shifts in Multimodal Working Memory.
Katus, Tobias; Grubert, Anna; Eimer, Martin
2017-04-01
Attention maintains task-relevant information in working memory (WM) in an active state. We investigated whether the attention-based maintenance of stimulus representations that were encoded through different modalities is flexibly controlled by top-down mechanisms that depend on behavioral goals. Distinct components of the ERP reflect the maintenance of tactile and visual information in WM. We concurrently measured tactile (tCDA) and visual contralateral delay activity (CDA) to track the attentional activation of tactile and visual information during multimodal WM. Participants simultaneously received tactile and visual sample stimuli on the left and right sides and memorized all stimuli on one task-relevant side. After 500 msec, an auditory retrocue indicated whether the sample set's tactile or visual content had to be compared with a subsequent test stimulus set. tCDA and CDA components that emerged simultaneously during the encoding phase were consistently reduced after retrocues that marked the corresponding (tactile or visual) modality as task-irrelevant. The absolute size of cue-dependent modulations was similar for the tCDA/CDA components and did not depend on the number of tactile/visual stimuli that were initially encoded into WM. Our results suggest that modality-specific maintenance processes in sensory brain regions are flexibly modulated by top-down influences that optimize multimodal WM representations for behavioral goals.
DVV: a taxonomy for mixed reality visualization in image guided surgery.
Kersten-Oertel, Marta; Jannin, Pierre; Collins, D Louis
2012-02-01
Mixed reality visualizations are increasingly studied for use in image guided surgery (IGS) systems, yet few mixed reality systems have been introduced for daily use into the operating room (OR). This may be the result of several factors: the systems are developed from a technical perspective, are rarely evaluated in the field, and/or lack consideration of the end user and the constraints of the OR. We introduce the Data, Visualization processing, View (DVV) taxonomy which defines each of the major components required to implement a mixed reality IGS system. We propose that these components be considered and used as validation criteria for introducing a mixed reality IGS system into the OR. A taxonomy of IGS visualization systems is a step toward developing a common language that will help developers and end users discuss and understand the constituents of a mixed reality visualization system, facilitating a greater presence of future systems in the OR. We evaluate the DVV taxonomy based on its goodness of fit and completeness. We demonstrate the utility of the DVV taxonomy by classifying 17 state-of-the-art research papers in the domain of mixed reality visualization IGS systems. Our classification shows that few IGS visualization systems' components have been validated and even fewer are evaluated.
General visual robot controller networks via artificial evolution
NASA Astrophysics Data System (ADS)
Cliff, David; Harvey, Inman; Husbands, Philip
1993-08-01
We discuss recent results from our ongoing research concerning the application of artificial evolution techniques (i.e., an extended form of genetic algorithm) to the problem of developing 'neural' network controllers for visually guided robots. The robot is a small autonomous vehicle with extremely low-resolution vision, employing visual sensors which could readily be constructed from discrete analog components. In addition to visual sensing, the robot is equipped with a small number of mechanical tactile sensors. Activity from the sensors is fed to a recurrent dynamical artificial 'neural' network, which acts as the robot controller, providing signals to motors governing the robot's motion. Prior to presentation of new results, this paper summarizes our rationale and past work, which has demonstrated that visually guided control networks can arise without any explicit specification that visual processing should be employed: the evolutionary process opportunistically makes use of visual information if it is available.
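The evolutionary loop implied above can be sketched in a few lines. Everything in this toy version, including the stand-in fitness function and the mutation-only variation, is illustrative rather than the authors' actual setup.

```python
import numpy as np

rng = np.random.default_rng(5)
n_weights, pop_size, generations = 20, 30, 50

def fitness(weights):
    # Stand-in fitness: in the real setting this would score the robot's behavior
    # when a recurrent network defined by `weights` drives its motors.
    return -np.sum((weights - 0.5) ** 2)

population = rng.uniform(-1.0, 1.0, size=(pop_size, n_weights))
for gen in range(generations):
    scores = np.array([fitness(ind) for ind in population])
    # Keep the better half, refill with mutated copies (simple truncation selection).
    elite = population[np.argsort(scores)[::-1][: pop_size // 2]]
    children = elite + rng.normal(0.0, 0.1, size=elite.shape)
    population = np.vstack([elite, children])

best = population[np.argmax([fitness(ind) for ind in population])]
print("best fitness:", fitness(best))
```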
Harris, Joseph A; Wu, Chien-Te; Woldorff, Marty G
2011-06-07
It is generally agreed that considerable amounts of low-level sensory processing of visual stimuli can occur without conscious awareness. On the other hand, the degree of higher level visual processing that occurs in the absence of awareness is as yet unclear. Here, event-related potential (ERP) measures of brain activity were recorded during a sandwich-masking paradigm, a commonly used approach for attenuating conscious awareness of visual stimulus content. In particular, the present study used a combination of ERP activation contrasts to track both early sensory-processing ERP components and face-specific N170 ERP activations, in trials with versus without awareness. The electrophysiological measures revealed that the sandwich masking abolished the early face-specific N170 neural response (peaking at ~170 ms post-stimulus), an effect that paralleled the abolition of awareness of face versus non-face image content. Furthermore, however, the masking appeared to render a strong attenuation of earlier feedforward visual sensory-processing signals. This early attenuation presumably resulted in insufficient information being fed into the higher level visual system pathways specific to object category processing, thus leading to unawareness of the visual object content. These results support a coupling of visual awareness and neural indices of face processing, while also demonstrating an early low-level mechanism of interference in sandwich masking.
The role of aging in intra-item and item-context binding processes in visual working memory.
Peterson, Dwight J; Naveh-Benjamin, Moshe
2016-11-01
Aging is accompanied by declines in both working memory and long-term episodic memory processes. Specifically, important age-related memory deficits are characterized by performance impairments exhibited by older relative to younger adults when binding distinct components into a single integrated representation, despite relatively intact memory for the individual components. While robust patterns of age-related binding deficits are prevalent in studies of long-term episodic memory, observations of such deficits in visual working memory (VWM) may depend on the specific type of binding process being examined. For instance, a number of studies indicate that processes involved in item-context binding of items to occupied spatial locations within visual working memory are impaired in older relative to younger adults. Other findings suggest that intra-item binding of visual surface features (e.g., color, shape), compared to memory for single features, within visual working memory, remains relatively intact. Here, we examined each of these binding processes in younger and older adults under both optimal conditions (i.e., no concurrent load) and concurrent load (e.g., articulatory suppression, backward counting). Experiment 1 revealed an age-related intra-item binding deficit for surface features under no concurrent load but not when articulatory suppression was required. In contrast, in Experiments 2 and 3, we observed an age-related item-context binding deficit regardless of the level of concurrent load. These findings reveal that the influence of concurrent load on distinct binding processes within VWM, potentially those supported by rehearsal, is an important factor mediating the presence or absence of age-related binding deficits within VWM. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
González-Hernández, J A; Pita-Alcorta, C; Padrón, A; Finalé, A; Galán, L; Martínez, E; Díaz-Comas, L; Samper-González, J A; Lencer, R; Marot, M
2014-10-01
Basic visual dysfunctions are commonly reported in schizophrenia; however their value as diagnostic tools remains uncertain. This study reports a novel electrophysiological approach using checkerboard visual evoked potentials (VEP). Sources of spectral resolution VEP-components C1, P1 and N1 were estimated by LORETA, and the band-effects (BSE) on these estimated sources were explored in each subject. BSEs were Z-transformed for each component and relationships with clinical variables were assessed. Clinical effects were evaluated by ROC-curves and predictive values. Forty-eight patients with schizophrenia (SZ) and 55 healthy controls participated in the study. For each of the 48 patients, the three VEP components were localized to both dorsal and ventral brain areas and also deviated from a normal distribution. P1 and N1 deviations were independent of treatment, illness chronicity or gender. Results from LORETA also suggest that deficits in thalamus, posterior cingulum, precuneus, superior parietal and medial occipitotemporal areas were associated with symptom severity. While positive symptoms were more strongly related to sensory processing deficits (P1), negative symptoms were more strongly related to perceptual processing dysfunction (N1). Clinical validation revealed positive and negative predictive values for correctly classifying SZ of 100% and 77%, respectively. Classification in an additional independent sample of 30 SZ corroborated these results. In summary, this novel approach revealed basic visual dysfunctions in all patients with schizophrenia, suggesting these visual dysfunctions represent a promising candidate as a biomarker for schizophrenia. Copyright © 2014 Elsevier B.V. All rights reserved.
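A minimal sketch of the ROC-curve and predictive-value evaluation mentioned in the abstract, computed on invented deviation scores. Only the group sizes follow the abstract; every score and resulting number here is illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(3)
# Invented deviation scores (e.g., Z-transformed band effects): patients = 1, controls = 0.
y_true = np.r_[np.ones(48), np.zeros(55)]
scores = np.r_[rng.normal(2.0, 1.0, 48), rng.normal(0.0, 1.0, 55)]

auc = roc_auc_score(y_true, scores)
fpr, tpr, thresholds = roc_curve(y_true, scores)

# Pick one operating threshold and compute positive/negative predictive values.
thr = 1.0
pred = (scores >= thr).astype(int)
tp = int(((pred == 1) & (y_true == 1)).sum())
fp = int(((pred == 1) & (y_true == 0)).sum())
tn = int(((pred == 0) & (y_true == 0)).sum())
fn = int(((pred == 0) & (y_true == 1)).sum())
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(f"AUC={auc:.2f}  operating points={len(thresholds)}  PPV={ppv:.2f}  NPV={npv:.2f}")
```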
Visual unit analysis: a descriptive approach to landscape assessment
R. J. Tetlow; S. R. J. Sheppard
1979-01-01
Analysis of the visible attributes of landscapes is an important component of the planning process. When landscapes are at regional scale, economical and effective methodologies are critical. The Visual Unit concept appears to offer a logical and useful framework for description and evaluation. The concept subdivides landscape into coherent, spatially-defined units....
The Role of Visual Learning in Improving Students' High-Order Thinking Skills
ERIC Educational Resources Information Center
Raiyn, Jamal
2016-01-01
Various concepts have been introduced to improve students' analytical thinking skills based on problem based learning (PBL). This paper introduces a new concept to increase students' analytical thinking skills based on a visual learning strategy. Such a strategy has three fundamental components: a teacher, a student, and a learning process. The…
The Impact of New Electronic Imaging Systems on U.S. Air Force Visual Information Professionals.
1993-06-01
modernizing the functions left in their control. This process started by converting combat camera assets from 16mm film to Betacam "camcorder" systems. Combat...upgraded to computer-controlled editing with 1-inch helical machines or component-video Betacam equipment. For the base visual information centers, new
Effects of methylphenidate on working memory components: influence of measurement.
Bedard, Anne-Claude; Jain, Umesh; Johnson, Sheilah Hogg; Tannock, Rosemary
2007-09-01
To investigate the effects of methylphenidate (MPH) on components of working memory (WM) in attention-deficit hyperactivity disorder (ADHD) and determine the responsiveness of WM measures to MPH. Participants were a clinical sample of 50 children and adolescents with ADHD, aged 6 to 16 years, who participated in an acute randomized, double-blind, placebo-controlled, crossover trial with single challenges of three MPH doses. Four components of WM were investigated, which varied in processing demands (storage versus manipulation of information) and modality (auditory-verbal; visual-spatial), each of which was indexed by a minimum of two separate measures. MPH improved the ability to store visual-spatial information irrespective of the instrument used, but had no effect on the storage of auditory-verbal information. By contrast, MPH enhanced the ability to manipulate both auditory-verbal and visual-spatial information, although effects were instrument specific in both cases. MPH effects on WM are selective: they vary as a function of WM component and measurement.
Kim, Heejung; Hahm, Jarang; Lee, Hyekyoung; Kang, Eunjoo; Kang, Hyejin; Lee, Dong Soo
2015-05-01
The human brain naturally integrates audiovisual information to improve speech perception. However, in noisy environments, understanding speech is difficult and may require much effort. Although the brain network is supposed to be engaged in speech perception, it is unclear how speech-related brain regions are connected during natural bimodal audiovisual or unimodal speech perception with counterpart irrelevant noise. To investigate the topological changes of speech-related brain networks at all possible thresholds, we used a persistent homological framework based on hierarchical clustering (single-linkage distance) to analyze the connected components of the functional network during speech perception using functional magnetic resonance imaging. For speech perception, bimodal (audio-visual speech cue) or unimodal speech cues with counterpart irrelevant noise (auditory white noise or visual gum-chewing) were delivered to 15 subjects. In terms of positive relationships, similar connected components were observed in the bimodal and unimodal speech conditions during filtration. However, during speech perception with congruent audiovisual stimuli, tighter couplings of a left anterior temporal gyrus-anterior insula component and of right premotor-visual components were observed than in the auditory-only or visual-only speech cue conditions, respectively. Interestingly, visual speech perceived under white noise was associated with tight negative coupling among the left inferior frontal region, right anterior cingulate, left anterior insula, and bilateral visual regions, including right middle temporal gyrus and right fusiform components. In conclusion, the speech brain network is tightly positively or negatively connected, and can reflect efficient or effortful processes during natural audiovisual integration or lip-reading, respectively, in speech perception.
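A minimal sketch of the single-linkage filtration idea, tracking how many connected components remain as the threshold varies, using scipy on a toy distance matrix. The region names and distances are invented and stand in for the study's connectivity data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

regions = ["L_IFG", "R_ACC", "L_insula", "R_MTG", "R_fusiform"]
rng = np.random.default_rng(4)
dist = rng.uniform(0.2, 1.0, size=(5, 5))
dist = (dist + dist.T) / 2
np.fill_diagonal(dist, 0.0)

# Single-linkage dendrogram: merge heights are the thresholds at which
# connected components fuse during the filtration.
Z = linkage(squareform(dist, checks=False), method="single")

for threshold in np.linspace(0.2, 1.0, 5):
    labels = fcluster(Z, t=threshold, criterion="distance")
    n_components = len(set(labels))
    print(f"threshold {threshold:.2f}: {n_components} connected component(s)")
```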
ATLAS EventIndex monitoring system using the Kibana analytics and visualization platform
NASA Astrophysics Data System (ADS)
Barberis, D.; Cárdenas Zárate, S. E.; Favareto, A.; Fernandez Casani, A.; Gallas, E. J.; Garcia Montoro, C.; Gonzalez de la Hoz, S.; Hrivnac, J.; Malon, D.; Prokoshin, F.; Salt, J.; Sanchez, J.; Toebbicke, R.; Yuan, R.; ATLAS Collaboration
2016-10-01
The ATLAS EventIndex is a data catalogue system that stores event-related metadata for all (real and simulated) ATLAS events, on all processing stages. As it consists of different components that depend on other applications (such as distributed storage, and different sources of information) we need to monitor the conditions of many heterogeneous subsystems, to make sure everything is working correctly. This paper describes how we gather information about the EventIndex components and related subsystems: the Producer-Consumer architecture for data collection, health parameters from the servers that run EventIndex components, EventIndex web interface status, and the Hadoop infrastructure that stores EventIndex data. This information is collected, processed, and then displayed using CERN service monitoring software based on the Kibana analytic and visualization package, provided by CERN IT Department. EventIndex monitoring is used both by the EventIndex team and ATLAS Distributed Computing shifts crew.
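A queue-based producer-consumer pattern of the kind described for data collection can be sketched as follows. The subsystem names and metric fields are assumptions for illustration, not the actual EventIndex implementation.

```python
import queue
import threading
import time

metrics = queue.Queue()

def producer(name):
    """Pretend to poll one subsystem and push health samples onto the queue."""
    for i in range(3):
        metrics.put({"source": name, "sample": i, "healthy": True, "ts": time.time()})
        time.sleep(0.01)

def consumer():
    """Drain samples and hand them to a dashboard/indexing backend (here: print)."""
    while True:
        item = metrics.get()
        if item is None:          # sentinel: stop consuming
            break
        print(f"{item['source']:>10}  sample={item['sample']}  healthy={item['healthy']}")

producers = [threading.Thread(target=producer, args=(s,))
             for s in ("hadoop", "web_ui", "data_collection")]
c = threading.Thread(target=consumer)
c.start()
for t in producers:
    t.start()
for t in producers:
    t.join()
metrics.put(None)
c.join()
```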
Distinct Effects of Trial-Driven and Task Set-Related Control in Primary Visual Cortex
Vaden, Ryan J.; Visscher, Kristina M.
2015-01-01
Task sets are task-specific configurations of cognitive processes that facilitate task-appropriate reactions to stimuli. While it is established that the trial-by-trial deployment of visual attention to expected stimuli influences neural responses in primary visual cortex (V1) in a retinotopically specific manner, it is not clear whether the mechanisms that help maintain a task set over many trials also operate with similar retinotopic specificity. Here, we address this question by using BOLD fMRI to characterize how portions of V1 that are specialized for different eccentricities respond during distinct components of an attention-demanding discrimination task: cue-driven preparation for a trial, trial-driven processing, task-initiation at the beginning of a block of trials, and task-maintenance throughout a block of trials. Tasks required either unimodal attention to an auditory or a visual stimulus or selective intermodal attention to the visual or auditory component of simultaneously presented visual and auditory stimuli. We found that while the retinotopic patterns of trial-driven and cue-driven activity depended on the attended stimulus, the retinotopic patterns of task-initiation and task-maintenance activity did not. Further, only the retinotopic patterns of trial-driven activity were found to depend on the presence of intermodal distraction. Participants who performed well on the intermodal selective attention tasks showed strong task-specific modulations of both trial-driven and task-maintenance activity. Importantly, task-related modulations of trial-driven and task-maintenance activity were in opposite directions. Together, these results confirm that there are (at least) two different processes for top-down control of V1: One, working trial-by-trial, differently modulates activity across different eccentricity sectors—portions of V1 corresponding to different visual eccentricities. The second process works across longer epochs of task performance, and does not differ among eccentricity sectors. These results are discussed in the context of previous literature examining top-down control of visual cortical areas. PMID:26163806
Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman
2013-06-01
The capacity of visual-spatial working memory (WM) declines from early to late adulthood. Recent attempts at identifying neural correlates of WM capacity decline have focused on the maintenance phase of WM. Here, we investigate neural mechanisms during the encoding phase as another potential mechanism contributing to adult age differences in WM capacity. We used electroencephalography to track neural activity during encoding and maintenance on a millisecond timescale in 35 younger and 35 older adults performing a visual-spatial WM task. As predicted, we observed pronounced age differences in ERP indicators of WM encoding: Younger adults showed attentional selection during item encoding (N2pc component), but this selection mechanism was greatly attenuated in older adults. Conversely, older adults showed more pronounced signs of early perceptual stimulus processing (N1 component) than younger adults. The amplitude modulation of the N1 component predicted WM capacity in older adults, whereas the attentional amplitude modulation of the N2pc component predicted WM capacity in younger adults. Our findings suggest that adult age differences in mechanisms of WM encoding contribute to adult age differences in limits of visual-spatial WM capacity. Copyright © 2013 Elsevier Inc. All rights reserved.
Ergonomics improvements of the visual inspection process in a printed circuit assembly factory.
Yeow, Paul H P; Sen, Rabindra Nath
2004-01-01
An ergonomics improvement study was conducted on the visual inspection process of a printed circuit assembly (PCA) factory. The process was studied through subjective assessment and direct observation. Three problems were identified: operators' eye problems, insufficient time for inspection and ineffective visual inspection. These problems caused a yearly rejection cost of US$298,240, poor quality, customer dissatisfaction and poor occupational health and safety. Ergonomics interventions were made to rectify the problems: reduced use of a magnifying glass, the use of less glaring inspection templates, inspection of only electrically non-tested components and the introduction of a visual inspection sequence. The interventions produced savings in rejection cost, reduced operators' eye strain, headaches and watery eyes, lowered the defect percentage at customers' sites and increased the factory's productivity and customer satisfaction.
Real-Time Monitoring of Scada Based Control System for Filling Process
NASA Astrophysics Data System (ADS)
Soe, Aung Kyaw; Myint, Aung Naing; Latt, Maung Maung; Theingi
2008-10-01
This paper presents a design for real-time monitoring of a filling system using Supervisory Control and Data Acquisition (SCADA). The production process is monitored in real time using Visual Basic.NET programming under Visual Studio 2005, without dedicated SCADA software. The software integrators are programmed to obtain the information required for the configuration screens. Component behavior is simulated on the computer screen, with a parallel port linking the computer and the filling devices. Programs for real-time simulation of the filling process from the purified drinking water industry are provided.
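A rough Python sketch of the polling-style monitoring loop such a design implies is given below (the paper itself uses Visual Basic.NET and a parallel port); the read_parallel_port stub, the bit layout, and the printed status are illustrative assumptions, not the authors' implementation.

```python
# Polling loop for real-time monitoring of a filling line; the hardware read is a
# stub standing in for the parallel-port access described in the paper.
import time

def read_parallel_port() -> int:
    """Placeholder for a byte read from the filling hardware (e.g. via a port I/O
    library); here we just return a fixed, simulated status word."""
    return 0b00000101   # example: bit0 = conveyor running, bit2 = bottle present

def decode_status(raw: int) -> dict:
    return {
        "conveyor_running": bool(raw & 0b001),
        "valve_open":       bool(raw & 0b010),
        "bottle_present":   bool(raw & 0b100),
    }

def monitor(poll_interval: float = 0.2, cycles: int = 25):
    for _ in range(cycles):
        status = decode_status(read_parallel_port())
        print(time.strftime("%H:%M:%S"), status)   # a GUI would update widgets here
        time.sleep(poll_interval)

if __name__ == "__main__":
    monitor()
```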
When is the right hemisphere holistic and when is it not? The case of Chinese character recognition.
Chung, Harry K S; Leung, Jacklyn C Y; Wong, Vienne M Y; Hsiao, Janet H
2018-05-15
Holistic processing (HP) has long been considered a characteristic of right hemisphere (RH) processing. Indeed, holistic face processing is typically associated with left visual field (LVF)/RH processing advantages. Nevertheless, expert Chinese character recognition involves reduced HP and increased RH lateralization, presenting a counterexample. Recent modeling research suggests that RH processing may be associated with an increase or decrease in HP, depending on whether spacing or component information was used respectively. Since expert Chinese character recognition involves increasing sensitivity to components while deemphasizing spacing information, RH processing in experts may be associated with weaker HP than novices. Consistent with this hypothesis, in a divided visual field paradigm, novices exhibited HP only in the LVF/RH, whereas experts showed no HP in either visual field. This result suggests that the RH may flexibly switch between part-based and holistic representations, consistent with recent fMRI findings. The RH's advantage in global/low spatial frequency processing is suggested to be relative to the task relevant frequency range. Thus, its use of holistic and part-based representations may depend on how attention is allocated for task relevant information. This study provides the first behavioral evidence showing how type of information used for processing modulates perceptual representations in the RH. Copyright © 2018 Elsevier B.V. All rights reserved.
PROVAT: a tool for Voronoi tessellation analysis of protein structures and complexes.
Gore, Swanand P; Burke, David F; Blundell, Tom L
2005-08-01
Voronoi tessellation has proved to be a useful tool in protein structure analysis. We have developed PROVAT, a versatile public domain software that enables computation and visualization of Voronoi tessellations of proteins and protein complexes. It is a set of Python scripts that integrate freely available specialized software (Qhull, Pymol etc.) into a pipeline. The calculation component of the tool computes Voronoi tessellation of a given protein system in a way described by a user-supplied XML recipe and stores resulting neighbourhood information as text files with various styles. The Python pickle file generated in the process is used by the visualization component, a Pymol plug-in, that offers a GUI to explore the tessellation visually. PROVAT source code can be downloaded from http://raven.bioc.cam.ac.uk/~swanand/Provat1, which also provides a webserver for its calculation component, documentation and examples.
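The tessellation step at the heart of such a tool can be sketched with SciPy's Qhull-based Voronoi routine; this is a generic illustration of computing Voronoi neighbour pairs from atom coordinates, not PROVAT's scripts, XML recipe format, or Pymol plug-in.

```python
# Voronoi neighbourhood extraction for a set of atom coordinates; two atoms are
# neighbours if their Voronoi cells share a facet (i.e. they appear as a ridge pair).
import numpy as np
from scipy.spatial import Voronoi

def voronoi_neighbours(coords: np.ndarray):
    """coords: (n_atoms, 3) array of atomic positions. Returns a set of index pairs."""
    vor = Voronoi(coords)
    return {tuple(sorted(pair)) for pair in vor.ridge_points}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    atoms = rng.uniform(0, 30, size=(50, 3))      # stand-in for parsed PDB coordinates
    pairs = voronoi_neighbours(atoms)
    print(f"{len(pairs)} neighbour contacts among {len(atoms)} atoms")
```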
Stepwise emergence of the face-sensitive N170 event-related potential component.
Jemel, Boutheina; Schuller, Anne-Marie; Cheref-Khan, Yasémine; Goffaux, Valérie; Crommelinck, Marc; Bruyer, Raymond
2003-11-14
The present study used a parametric design to characterize early event-related potentials (ERP) to face stimuli embedded in gradually decreasing random noise levels. For both N170 and the vertex positive potential (VPP) there was a linear increase in amplitude and decrease in latency with decreasing levels of noise. In contrast, the earlier visual P1 component was stable across noise levels. The P1/N170 dissociation suggests not only a functional dissociation between low and high-level visual processing of faces but also that the N170 reflects the integration of sensorial information into a unitary representation. In addition, the N170/VPP association supports the view that they reflect the same processes operating when viewing faces.
Effects of set-size and lateral masking in visual search.
Põder, Endel
2004-01-01
In the present research, the roles of lateral masking and central processing limitations in visual search were studied. Two search conditions were used: (1) target differed from distractors by presence/absence of a simple feature; (2) target differed by relative position of the same components only. The number of displayed stimuli (set-size) and the distance between neighbouring stimuli were varied as independently as possible in order to measure the effect of both. The effect of distance between stimuli (lateral masking) was found to be similar in both conditions. The effect of set-size was much larger for relative position stimuli. The results support the view that perception of relative position of stimulus components is limited mainly by the capacity of central processing.
Multiple foci of spatial attention in multimodal working memory.
Katus, Tobias; Eimer, Martin
2016-11-15
The maintenance of sensory information in working memory (WM) is mediated by the attentional activation of stimulus representations that are stored in perceptual brain regions. Using event-related potentials (ERPs), we measured tactile and visual contralateral delay activity (tCDA/CDA components) in a bimodal WM task to concurrently track the attention-based maintenance of information stored in anatomically segregated (somatosensory and visual) brain areas. Participants received tactile and visual sample stimuli on both sides, and in different blocks, memorized these samples on the same side or on opposite sides. After a retention delay, memory was unpredictably tested for touch or vision. In the same side blocks, tCDA and CDA components simultaneously emerged over the same hemisphere, contralateral to the memorized tactile/visual sample set. In opposite side blocks, these two components emerged over different hemispheres, but had the same sizes and onset latencies as in the same side condition. Our results reveal distinct foci of tactile and visual spatial attention that were concurrently maintained on task-relevant stimulus representations in WM. The independence of spatially-specific biasing mechanisms for tactile and visual WM content suggests that multimodal information is stored in distributed perceptual brain areas that are activated through modality-specific processes that can operate simultaneously and largely independently of each other. Copyright © 2016 Elsevier Inc. All rights reserved.
Components of working memory and visual selective attention.
Burnham, Bryan R; Sabia, Matthew; Langan, Catherine
2014-02-01
Load theory (Lavie, N., Hirst, A., De Fockert, J. W., & Viding, E. [2004]. Load theory of selective attention and cognitive control. Journal of Experimental Psychology: General, 133, 339-354.) proposes that control of attention depends on the amount and type of load that is imposed by current processing. Specifically, perceptual load should lead to efficient distractor rejection, whereas working memory load (dual-task coordination) should hinder distractor rejection. Studies support load theory's prediction that working memory load will lead to larger distractor effects; however, these studies used secondary tasks that required only verbal working memory and the central executive. The present study examined which other working memory components (visual, spatial, and phonological) influence visual selective attention. Subjects completed an attentional capture task alone (single-task) or while engaged in a working memory task (dual-task). Results showed that along with the central executive, visual and spatial working memory influenced selective attention, but phonological working memory did not. Specifically, attentional capture was larger when visual or spatial working memory was loaded, but phonological working memory load did not affect attentional capture. The results are consistent with load theory and suggest specific components of working memory influence visual selective attention. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Schmithorst, Vincent J
2005-04-01
Music perception is a quite complex cognitive task, involving the perception and integration of various elements including melody, harmony, pitch, rhythm, and timbre. A preliminary functional MRI investigation of music perception was performed, using a simplified passive listening task. Group independent component analysis (ICA) was used to separate out various components involved in music processing, as the hemodynamic responses are not known a priori. Various components consistent with auditory processing, expressive language, syntactic processing, and visual association were found. The results are discussed in light of various hypotheses regarding modularity of music processing and its overlap with language processing. The results suggest that, while some networks overlap with ones used for language processing, music processing may involve its own domain-specific processing subsystems.
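A minimal sketch of temporal-concatenation group ICA is shown below, assuming each subject's preprocessed fMRI run is available as a voxels-by-time matrix; the use of scikit-learn's FastICA and the toy dimensions are illustrative assumptions, not the study's pipeline.

```python
# Temporal-concatenation group ICA: stack subjects along time, then unmix with
# FastICA so that the recovered components are spatially independent maps.
import numpy as np
from sklearn.decomposition import FastICA

def group_ica(subject_data, n_components=20):
    """subject_data: list of (n_voxels, n_timepoints) arrays on the same voxel grid.
    Returns (spatial_maps, time_courses): maps are (n_voxels, n_components),
    time courses are (total_timepoints, n_components)."""
    X = np.hstack(subject_data)                      # voxels x concatenated time
    X = X - X.mean(axis=1, keepdims=True)
    ica = FastICA(n_components=n_components, max_iter=1000, random_state=0)
    spatial_maps = ica.fit_transform(X)              # independent spatial maps
    time_courses = ica.mixing_                       # concatenated component time courses
    return spatial_maps, time_courses

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    subjects = [rng.standard_normal((5000, 120)) for _ in range(4)]   # toy data
    maps, tcs = group_ica(subjects, n_components=10)
    print(maps.shape, tcs.shape)   # (5000, 10) (480, 10)
```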
Perceptual deficits of object identification: apperceptive agnosia.
Milner, A David; Cavina-Pratesi, Cristiana
2018-01-01
It is argued here that apperceptive object agnosia (generally now known as visual form agnosia) is in reality not a kind of agnosia, but rather a form of "imperception" (to use the term coined by Hughlings Jackson). We further argue that its proximate cause is a bilateral loss (or functional loss) of the visual form processing systems embodied in the human lateral occipital cortex (area LO). According to the dual-system model of cortical visual processing elaborated by Milner and Goodale (2006), area LO constitutes a crucial component of the ventral stream, and indeed is essential for providing the figural qualities inherent in our normal visual perception of the world. According to this account, the functional loss of area LO would leave only spared visual areas within the occipito-parietal dorsal stream - dedicated to the control of visually-guided actions - potentially able to provide some aspects of visual shape processing in patients with apperceptive agnosia. We review the relevant evidence from such individuals, concentrating particularly on the well-researched patient D.F. We conclude that studies of this kind can provide useful pointers to an understanding of the processing characteristics of parietal-lobe visual mechanisms and their interactions with occipitotemporal perceptual systems in the guidance of action. Copyright © 2018 Elsevier B.V. All rights reserved.
Alderson, R Matt; Kasper, Lisa J; Patros, Connor H G; Hudec, Kristen L; Tarle, Stephanie J; Lea, Sarah E
2015-01-01
The episodic buffer component of working memory was examined in children with attention deficit/hyperactivity disorder (ADHD) and typically developing peers (TD). Thirty-two children (ADHD = 16, TD = 16) completed three versions of a phonological working memory task that varied with regard to stimulus presentation modality (auditory, visual, or dual auditory and visual), as well as a visuospatial task. Children with ADHD experienced the largest magnitude working memory deficits when phonological stimuli were presented via a unimodal, auditory format. Their performance improved during visual and dual modality conditions but remained significantly below the performance of children in the TD group. In contrast, the TD group did not exhibit performance differences between the auditory- and visual-phonological conditions but recalled significantly more stimuli during the dual-phonological condition. Furthermore, relative to TD children, children with ADHD recalled disproportionately fewer phonological stimuli as set sizes increased, regardless of presentation modality. Finally, an examination of working memory components indicated that the largest magnitude between-group difference was associated with the central executive. Collectively, these findings suggest that ADHD-related working memory deficits reflect a combination of impaired central executive and phonological storage/rehearsal processes, as well as an impaired ability to benefit from bound multimodal information processed by the episodic buffer.
Virtual commissioning of automated micro-optical assembly
NASA Astrophysics Data System (ADS)
Schlette, Christian; Losch, Daniel; Haag, Sebastian; Zontar, Daniel; Roßmann, Jürgen; Brecher, Christian
2015-02-01
In this contribution, we present a novel approach to enable virtual commissioning for process developers in micro-optical assembly. Our approach aims at supporting micro-optics experts to effectively develop assisted or fully automated assembly solutions without detailed prior experience in programming, while at the same time enabling them to easily implement their own libraries of expert schemes and algorithms for handling optical components. Virtual commissioning is enabled by a 3D simulation and visualization system in which the functionalities and properties of automated systems are modeled, simulated and controlled on the basis of multi-agent systems. For process development, our approach supports event-, state- and time-based visual programming techniques for the agents and allows for their kinematic motion simulation in combination with looped-in simulation results for the optical components. First results have been achieved by simply switching the agents to command the real hardware setup after successful process implementation and validation in the virtual environment. We evaluated and adapted our system to meet the requirements set by industrial partners: laser manufacturers as well as hardware suppliers of assembly platforms. The concept is applied to the automated assembly of optical components for optically pumped semiconductor lasers and to the positioning of optical components for beam shaping.
Effect of ethanol on the visual-evoked potential in rat: dynamics of ON and OFF responses.
Dulinskas, Redas; Buisas, Rokas; Vengeliene, Valentina; Ruksenas, Osvaldas
2017-01-01
The effect of acute ethanol administration on the flash visual-evoked potential (VEP) has been investigated in numerous studies. However, it is still unclear which brain structures are responsible for the differences observed in stimulus onset (ON) and offset (OFF) responses and how these responses are modulated by ethanol. The aim of our study was to investigate the pattern of ON and OFF responses in the visual system, measured as the amplitude and latency of each VEP component following acute administration of ethanol. VEPs were recorded at the onset and offset of a 500 ms visual stimulus in anesthetized male Wistar rats. The effect of alcohol on VEP latency and amplitude was measured for one hour after injection of a 2 g/kg ethanol dose. Three VEP components - N63, P89 and N143 - were analyzed. Our results showed that, except for component N143, ethanol increased the latency of both ON and OFF responses in a similar manner. The latency of N143 during the OFF response was not affected by ethanol, but its amplitude was reduced. Our study demonstrated that the activation of the visual system during the ON response to a 500 ms visual stimulus is qualitatively different from that during the OFF response. Ethanol interfered with processing of the stimulus duration at the level of the visual cortex and reduced the activation of cortical regions.
Fisher, Katie; Towler, John; Eimer, Martin
2016-01-08
It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Hauptman, Anna R.
Two experiments involving 42 students from the Model Secondary School for the Deaf investigated both the visual and tactile components in the processing of spatial information. Test measures used were the Figures Rotations Test, Group Embedded Figures Test, and Tactile Rotations Test. The study suggested that spatial reasoning is a determining…
The Influence of Semantic Neighbours on Visual Word Recognition
ERIC Educational Resources Information Center
Yates, Mark
2012-01-01
Although it is assumed that semantics is a critical component of visual word recognition, there is still much that we do not understand. One recent way of studying semantic processing has been in terms of semantic neighbourhood (SN) density, and this research has shown that semantic neighbours facilitate lexical decisions. However, it is not clear…
ERIC Educational Resources Information Center
de la Rosa, Stephan; Choudhery, Rabia N.; Chatziastros, Astros
2011-01-01
Recent evidence suggests that the recognition of an object's presence and its explicit recognition are temporally closely related. Here we re-examined the time course (using a fine and a coarse temporal resolution) and the sensitivity of three possible component processes of visual object recognition. In particular, participants saw briefly…
Visual processing of music notation: a study of event-related potentials.
Lee, Horng-Yih; Wang, Yu-Sin
2011-04-01
In reading music, the acquisition of pitch information depends mostly on the spatial position of notes and hence on spatial processing, whereas the acquisition of temporal information depends mostly on the visual features of notes and on object recognition. This study used both electrophysiological and behavioral methods to compare the processing of pitch and duration in reading single musical notes. It was observed that in the early stage of note reading, identification of pitch elicited greater N1 and N2 amplitudes than identification of duration at parietal lobe electrodes. In the later stages of note reading, identifying pitch elicited a greater negative slow wave at parietal electrodes than did identifying note duration. The sustained contribution of parietal processes to pitch suggests that the dorsal pathway is essential for pitch processing. However, the duration task did not elicit greater amplitude of any early ERP component than the pitch task at temporal electrodes. Accordingly, a double dissociation, with the dorsal visual stream involved in spatial pitch processing and the ventral visual stream in the processing of note durations, was not observed.
An amodal shared resource model of language-mediated visual attention
Smith, Alastair C.; Monaghan, Padraic; Huettig, Falk
2013-01-01
Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze. PMID:23966967
Lahnakoski, Juha M.; Salmi, Juha; Jääskeläinen, Iiro P.; Lampinen, Jouko; Glerean, Enrico; Tikka, Pia; Sams, Mikko
2012-01-01
Understanding how the brain processes stimuli in a rich natural environment is a fundamental goal of neuroscience. Here, we showed a feature film to 10 healthy volunteers during functional magnetic resonance imaging (fMRI) of hemodynamic brain activity. We then annotated auditory and visual features of the motion picture to inform analysis of the hemodynamic data. The annotations were fitted to both voxel-wise data and brain network time courses extracted by independent component analysis (ICA). Auditory annotations correlated with two independent components (IC) disclosing two functional networks, one responding to variety of auditory stimulation and another responding preferentially to speech but parts of the network also responding to non-verbal communication. Visual feature annotations correlated with four ICs delineating visual areas according to their sensitivity to different visual stimulus features. In comparison, a separate voxel-wise general linear model based analysis disclosed brain areas preferentially responding to sound energy, speech, music, visual contrast edges, body motion and hand motion which largely overlapped the results revealed by ICA. Differences between the results of IC- and voxel-based analyses demonstrate that thorough analysis of voxel time courses is important for understanding the activity of specific sub-areas of the functional networks, while ICA is a valuable tool for revealing novel information about functional connectivity which need not be explained by the predefined model. Our results encourage the use of naturalistic stimuli and tasks in cognitive neuroimaging to study how the brain processes stimuli in rich natural environments. PMID:22496909
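A minimal sketch of relating such annotations to IC time courses is given below, assuming the annotation regressors have already been resampled to the fMRI volume rate and convolved with a haemodynamic response function; the simple Pearson-correlation fit is an illustration, not the exact statistics reported in the study.

```python
# Correlate each annotated stimulus feature with each independent-component time
# course to find networks that track that feature.
import numpy as np

def feature_component_correlations(annotations, ic_timecourses):
    """annotations: (n_features, n_volumes) HRF-convolved regressors.
    ic_timecourses: (n_components, n_volumes).
    Returns an (n_features, n_components) matrix of Pearson correlations."""
    a = (annotations - annotations.mean(1, keepdims=True)) / annotations.std(1, keepdims=True)
    c = (ic_timecourses - ic_timecourses.mean(1, keepdims=True)) / ic_timecourses.std(1, keepdims=True)
    return a @ c.T / annotations.shape[1]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    r = feature_component_correlations(rng.standard_normal((6, 300)),
                                       rng.standard_normal((20, 300)))
    print(r.shape)   # (6, 20)
```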
TVA-based assessment of visual attentional functions in developmental dyslexia
Bogon, Johanna; Finke, Kathrin; Stenneken, Prisca
2014-01-01
There is an ongoing debate whether an impairment of visual attentional functions constitutes an additional or even an isolated deficit of developmental dyslexia (DD). Especially performance in tasks that require the processing of multiple visual elements in parallel has been reported to be impaired in DD. We review studies that used parameter-based assessment for identifying and quantifying impaired aspect(s) of visual attention that underlie this multi-element processing deficit in DD. These studies used the mathematical framework provided by the “theory of visual attention” (Bundesen, 1990) to derive quantitative measures of general attentional resources and attentional weighting aspects on the basis of behavioral performance in whole- and partial-report tasks. Based on parameter estimates in children and adults with DD, the reviewed studies support a slowed perceptual processing speed as an underlying primary deficit in DD. Moreover, a reduction in visual short term memory storage capacity seems to present a modulating component, contributing to difficulties in written language processing. Furthermore, comparing the spatial distributions of attentional weights in children and adults suggests that having limited reading and writing skills might impair the development of a slight leftward bias, that is typical for unimpaired adult readers. PMID:25360129
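For reference, the central TVA rate equation that such parameter-based fitting relies on is reproduced below in its standard form (after Bundesen, 1990); the notation is the conventional one and is not specific to the reviewed studies.

```latex
% Rate v(x,i) at which element x is encoded as a member of category i:
% \eta(x,i) = sensory evidence, \beta_i = decision bias, w = attentional weights.
% C is the overall processing speed; VSTM storage capacity K and the perceptual
% threshold t_0 are the remaining parameters estimated from whole/partial report.
v(x,i) \;=\; \eta(x,i)\,\beta_i\,\frac{w_x}{\sum_{z \in S} w_z},
\qquad
C \;=\; \sum_{x \in S}\sum_{i} v(x,i)
```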
Neural basis of hierarchical visual form processing of Japanese Kanji characters.
Higuchi, Hiroki; Moriguchi, Yoshiya; Murakami, Hiroki; Katsunuma, Ruri; Mishima, Kazuo; Uno, Akira
2015-12-01
We investigated the neural processing of reading Japanese Kanji characters, which involves unique hierarchical visual processing, including the recognition of visual components specific to Kanji, such as "radicals." We performed functional MRI to measure brain activity in response to hierarchical visual stimuli containing (1) real Kanji characters (complete structure with semantic information), (2) pseudo Kanji characters (subcomponents without complete character structure), (3) artificial characters (character fragments), and (4) checkerboard (simple photic stimuli). As we expected, the peaks of the activation in response to different stimulus types were aligned within the left occipitotemporal visual region along the posterior-anterior axis in order of the structural complexity of the stimuli, from fragments (3) to complete characters (1). Moreover, only the real Kanji characters produced functional connectivity between the left inferotemporal area and the language area (left inferior frontal triangularis), while pseudo Kanji characters induced connectivity between the left inferotemporal area and the bilateral cerebellum and left putamen. Visual processing of Japanese Kanji takes place in the left occipitotemporal cortex, with a clear hierarchy within the region such that the neural activation differentiates the elements in Kanji characters' fragments, subcomponents, and semantics, with different patterns of connectivity to remote regions among the elements.
Johari, Masoumeh; Abdollahzadeh, Milad; Esmaeili, Farzad; Sakhamanesh, Vahideh
2018-01-01
Background: Dental cone beam computed tomography (CBCT) images suffer from severe metal artifacts. These artifacts degrade the quality of the acquired image and in some cases make it unsuitable for use. Streaking artifacts and cavities around teeth are the main reasons for the degradation. Methods: In this article, we propose a new artifact reduction algorithm that has three parallel components. The first component extracts teeth based on modeling the image histogram with a Gaussian mixture model. The streaking artifact reduction component reduces artifacts by converting the image into the polar domain and applying morphological filtering. The third component fills cavities through a simple but effective morphological filtering operation. Results: Finally, the results of these three components are combined in a fusion step to create a visually good image that is more compatible with the human visual system. Conclusions: Results show that the proposed algorithm reduces artifacts in dental CBCT images and produces clean images. PMID:29535920
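A compact sketch of the three parallel components is given below; the library calls (Gaussian mixture fitting, polar warping, morphological opening/closing, hole filling) are real, but the parameters, the toy input, and the omitted fusion and inverse polar mapping are simplifying assumptions rather than the published algorithm.

```python
# Sketch of the three parallel components on a 2D CBCT slice (float image in [0, 1]).
import numpy as np
from scipy import ndimage
from sklearn.mixture import GaussianMixture
from skimage.morphology import opening, closing, disk
from skimage.transform import warp_polar

def extract_teeth(slice_img, n_modes=3):
    """Component 1: model the intensity histogram with a Gaussian mixture and keep
    the brightest mode, which roughly corresponds to teeth/metal."""
    gmm = GaussianMixture(n_components=n_modes, random_state=0)
    labels = gmm.fit_predict(slice_img.reshape(-1, 1)).reshape(slice_img.shape)
    return labels == np.argmax(gmm.means_.ravel())

def reduce_streaks(slice_img):
    """Component 2: radial streaks become elongated structures in polar coordinates,
    where a grayscale opening suppresses them."""
    center = (slice_img.shape[0] // 2, slice_img.shape[1] // 2)
    polar = warp_polar(slice_img, center=center, radius=min(center))
    return opening(polar, disk(2))          # returned in polar coordinates for brevity

def fill_cavities(teeth_mask):
    """Component 3: close and fill dark cavities around the segmented teeth."""
    return ndimage.binary_fill_holes(closing(teeth_mask, disk(3)))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    slice_img = rng.random((256, 256)).astype(float)   # stand-in for a CBCT slice
    teeth = extract_teeth(slice_img)
    streak_free = reduce_streaks(slice_img)
    filled = fill_cavities(teeth)
    print(teeth.sum(), streak_free.shape, filled.sum())
```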
The effect of phasic auditory alerting on visual perception.
Petersen, Anders; Petersen, Annemarie Hilkjær; Bundesen, Claus; Vangkilde, Signe; Habekost, Thomas
2017-08-01
Phasic alertness refers to a short-lived change in the preparatory state of the cognitive system following an alerting signal. In the present study, we examined the effect of phasic auditory alerting on distinct perceptual processes, unconfounded by motor components. We combined an alerting/no-alerting design with a pure accuracy-based single-letter recognition task. Computational modeling based on Bundesen's Theory of Visual Attention was used to examine the effect of phasic alertness on visual processing speed and threshold of conscious perception. Results show that phasic auditory alertness affects visual perception by increasing the visual processing speed and lowering the threshold of conscious perception (Experiment 1). By manipulating the intensity of the alerting cue, we further observed a positive relationship between alerting intensity and processing speed, which was not seen for the threshold of conscious perception (Experiment 2). This was replicated in a third experiment, in which pupil size was measured as a physiological marker of alertness. Results revealed that the increase in processing speed was accompanied by an increase in pupil size, substantiating the link between alertness and processing speed (Experiment 3). The implications of these results are discussed in relation to a newly developed mathematical model of the relationship between levels of alertness and the speed with which humans process visual information. Copyright © 2017 Elsevier B.V. All rights reserved.
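In TVA-based modeling of this kind, processing speed and the threshold of conscious perception enter the accuracy predictions roughly as in the standard exponential race below; this is the textbook formulation, not the specific model variant fitted in the cited experiments.

```latex
% Probability that a single stimulus is encoded into VSTM given exposure duration t,
% processing rate v (items/s), and perceptual threshold t_0 (s):
P(\mathrm{encoded}\mid t) \;=\;
\begin{cases}
1 - e^{-v\,(t - t_0)}, & t > t_0,\\[4pt]
0, & t \le t_0.
\end{cases}
% Phasic alerting effects are then read off as changes in v (and total capacity C),
% with t_0 estimated as a separate parameter.
```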
Threat as a feature in visual semantic object memory.
Calley, Clifford S; Motes, Michael A; Chiang, H-Sheng; Buhl, Virginia; Spence, Jeffrey S; Abdi, Hervé; Anand, Raksha; Maguire, Mandy; Estevez, Leonardo; Briggs, Richard; Freeman, Thomas; Kraut, Michael A; Hart, John
2013-08-01
Threatening stimuli have been found to modulate visual processes related to perception and attention. The present functional magnetic resonance imaging (fMRI) study investigated whether threat modulates visual object recognition of man-made and naturally occurring categories of stimuli. Compared with nonthreatening pictures, threatening pictures of real items elicited larger fMRI BOLD signal changes in medial visual cortices extending inferiorly into the temporo-occipital (TO) "what" pathways. This region showed greater signal changes for threatening than for nonthreatening items from both the naturally occurring and man-made supraordinate stimulus categories, demonstrating a featural component to these visual processing areas. Two additional loci of signal changes within more lateral inferior TO areas (bilateral BA18 and 19 as well as the right ventral temporal lobe) were detected for a category-feature interaction, with stronger responses to man-made (category) threatening (feature) stimuli than to natural threats. The findings are discussed in terms of the visual recognition system processing efficiently or rapidly those groups of items that confer an advantage for survival. Copyright © 2012 Wiley Periodicals, Inc.
Integration of today's digital state with tomorrow's visual environment
NASA Astrophysics Data System (ADS)
Fritsche, Dennis R.; Liu, Victor; Markandey, Vishal; Heimbuch, Scott
1996-03-01
New developments in visual communication technologies, and the increasingly digital nature of the industry infrastructure as a whole, are converging to enable new visual environments with an enhanced visual component in interaction, entertainment, and education. New applications and markets can be created, but this depends on the ability of the visual communications industry to provide market solutions that are cost effective and user friendly. Industry-wide cooperation in the development of integrated, open architecture applications enables the realization of such market solutions. This paper describes the work being done by Texas Instruments, in the development of its Digital Light Processing™ technology, to support the development of new visual communications technologies and applications.
Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucía; Lewkowicz, David J.
2014-01-01
Speech perception involves the integration of auditory and visual articulatory information and, thus, requires the perception of temporal synchrony between this information. There is evidence that children with Specific Language Impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component preceded the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception. PMID:22874648
Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucía; Lewkowicz, David J
2013-06-01
Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component preceded [corrected] the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception.
Working Memory Components as Predictors of Children's Mathematical Word Problem Solving
ERIC Educational Resources Information Center
Zheng, Xinhua; Swanson, H. Lee; Marcoulides, George A.
2011-01-01
This study determined the working memory (WM) components (executive, phonological loop, and visual-spatial sketchpad) that best predicted mathematical word problem-solving accuracy of elementary school children in Grades 2, 3, and 4 (N = 310). A battery of tests was administered to assess problem-solving accuracy, problem-solving processes, WM,…
Visual processing deficits in 22q11.2 Deletion Syndrome.
Biria, Marjan; Tomescu, Miralena I; Custo, Anna; Cantonas, Lucia M; Song, Kun-Wei; Schneider, Maude; Murray, Micah M; Eliez, Stephan; Michel, Christoph M; Rihs, Tonia A
2018-01-01
Carriers of the rare 22q11.2 microdeletion present with a high percentage of positive and negative symptoms and a high genetic risk for schizophrenia. Visual processing impairments have been characterized in schizophrenia, but less so in 22q11.2 Deletion Syndrome (DS). Here, we focus on visual processing using high-density EEG and source imaging in 22q11.2DS participants (N = 25) and healthy controls (N = 26) with an illusory contour discrimination task. Significant differences between groups emerged at early and late stages of visual processing. In 22q11.2DS, we first observed reduced amplitudes over occipital channels and reduced source activations within dorsal and ventral visual stream areas during the P1 (100-125 ms) and within ventral visual cortex during the N1 (150-170 ms) visual evoked components. During a later window implicated in visual completion (240-285 ms), we observed an increase in global amplitudes in 22q11.2DS. The increased surface amplitudes for illusory contours at this window were inversely correlated with positive subscales of prodromal symptoms in 22q11.2DS. The reduced activity of ventral and dorsal visual areas during early stages points to an impairment in visual processing seen both in schizophrenia and 22q11.2DS. During intervals related to perceptual closure, the inverse correlation of high amplitudes with positive symptoms suggests that participants with 22q11.2DS who show an increased brain response to illusory contours during the relevant window for contour processing have less psychotic symptoms and might thus be at a reduced prodromal risk for schizophrenia.
Multicomponent analysis of a digital Trail Making Test.
Fellows, Robert P; Dahmen, Jessamyn; Cook, Diane; Schmitter-Edgecombe, Maureen
2017-01-01
The purpose of the current study was to use a newly developed digital tablet-based variant of the TMT to isolate component cognitive processes underlying TMT performance. Similar to the paper-based Trail Making Test, this digital variant consists of two conditions, Part A and Part B. However, this digital version automatically collects additional data to create component subtest scores to isolate cognitive abilities. Specifically, in addition to the total time to completion and number of errors, the digital Trail Making Test (dTMT) records several unique components including the number of pauses, pause duration, lifts, lift duration, time inside each circle, and time between circles. Participants were community-dwelling older adults who completed a neuropsychological evaluation including measures of processing speed, inhibitory control, visual working memory/sequencing, and set-switching. The abilities underlying TMT performance were assessed through regression analyses of component scores from the dTMT with traditional neuropsychological measures. Results revealed significant correlations between paper and digital variants of Part A (r_s = .541, p < .001) and paper and digital versions of Part B (r_s = .799, p < .001). Regression analyses with traditional neuropsychological measures revealed that Part A components were best predicted by speeded processing, while inhibitory control and visual/spatial sequencing were predictors of specific components of Part B. Exploratory analyses revealed that specific dTMT-B components were associated with a performance-based medication management task. Taken together, these results elucidate specific cognitive abilities underlying TMT performance, as well as the utility of isolating digital components.
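A minimal sketch of how such component scores could be derived from a tablet touch log is given below; the TouchEvent fields, the pause threshold, and the gap-based pause heuristic are illustrative assumptions, not the dTMT's actual scoring rules.

```python
# Derive dTMT-like component scores from a sequence of touch events.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TouchEvent:
    t: float                 # seconds since trial start
    touching: bool           # stylus/finger in contact with the screen
    circle: Optional[int]    # circle currently under the contact point, if any

def component_scores(events: List[TouchEvent], pause_threshold: float = 0.5):
    """Compute simple components: lifts and lift duration, pauses (approximated here
    as long gaps between successive events while touching), and time inside vs.
    between circles."""
    lifts, lift_time, in_circle, between = 0, 0.0, 0.0, 0.0
    pauses, pause_time = 0, 0.0
    for prev, cur in zip(events, events[1:]):
        dt = cur.t - prev.t
        if prev.touching and not cur.touching:
            lifts += 1
        if not prev.touching:
            lift_time += dt
        elif prev.circle is not None:
            in_circle += dt
        else:
            between += dt
        if prev.touching and cur.touching and dt >= pause_threshold:
            pauses += 1
            pause_time += dt
    total = events[-1].t - events[0].t if events else 0.0
    return {"total_time": total, "lifts": lifts, "lift_duration": lift_time,
            "pauses": pauses, "pause_duration": pause_time,
            "time_in_circles": in_circle, "time_between_circles": between}

if __name__ == "__main__":
    demo = [TouchEvent(0.0, True, 1), TouchEvent(0.4, True, None),
            TouchEvent(1.1, True, 2), TouchEvent(1.2, False, None),
            TouchEvent(1.5, True, 2)]
    print(component_scores(demo))
```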
ERIC Educational Resources Information Center
Yeh, Su-Ling; Li, Jing-Ling
2004-01-01
Repetition blindness (RB) refers to the failure to detect the second occurrence of a repeated item in rapid serial visual presentation (RSVP). In two experiments using RSVP, the ability to report two critical characters was found to be impaired when these two characters were identical (Experiment 1) or similar by sharing one repeated component…
Cognitive processing of visual images in migraine populations in between headache attacks.
Mickleborough, Marla J S; Chapman, Christine M; Toma, Andreea S; Handy, Todd C
2014-09-25
People with migraine headache have altered interictal visual sensory-level processing in between headache attacks. Here we examined the extent to which these migraine abnormalities may extend into higher visual processing such as implicit evaluative analysis of visual images in between migraine events. Specifically, we asked two groups of participants--migraineurs (N=29) and non-migraine controls (N=29)--to view a set of unfamiliar commercial logos in the context of a target identification task as the brain electrical responses to these objects were recorded via event-related potentials (ERPs). Following this task, participants individually identified those logos that they most liked or disliked. We applied a between-groups comparison of how ERP responses to logos varied as a function of hedonic evaluation. Our results suggest migraineurs have abnormal implicit evaluative processing of visual stimuli. Specifically, migraineurs lacked a bias for disliked logos found in control subjects, as measured via a late positive potential (LPP) ERP component. These results suggest post-sensory consequences of migraine in between headache events, specifically abnormal cognitive evaluative processing with a lack of normal categorical hedonic evaluation. Copyright © 2014 Elsevier B.V. All rights reserved.
[Cortical potentials evoked in response to a signal to make a memory-guided saccade].
Slavutskaia, M V; Moiseeva, V V; Shul'govskiĭ, V V
2010-01-01
Differences in the parameters of visually guided and memory-guided saccades were demonstrated. The increased latency of memory-guided saccades relative to visually guided saccades may reflect slower saccade programming based on information retrieval from memory. Comparison of the parameters and topography of the N1 and P1 components evoked by the signal to make a memory- or visually guided saccade suggests that the early stage of saccade programming, associated with spatial information processing, is driven predominantly by top-down attention mechanisms before memory-guided saccades and by bottom-up mechanisms before visually guided saccades. The findings show that the increased latency of memory-guided saccades is connected with decision making at the central stage of saccade programming. We propose that the N2 wave, which develops in the middle of the latent period of memory-guided saccades, is correlated with this process. The topography and spatial dynamics of the N1, P1 and N2 components indicate that memory-guided saccade programming is controlled by the frontal mediothalamic system of selective attention and by left-hemisphere brain mechanisms of motor attention.
Computationally Efficient Clustering of Audio-Visual Meeting Data
NASA Astrophysics Data System (ADS)
Hung, Hayley; Friedland, Gerald; Yeo, Chuohao
This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG4 video by taking advantage of the processing that was already done for video compression. Then, we present a method of associating the audio-visual data together so that the content of each participant can be managed individually. The methods presented in this article can be used as a principal component that enables many higher-level semantic analysis tasks needed in search, retrieval, and navigation.
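A minimal sketch of the audio-visual association step is given below, assuming per-second speaking labels from diarization and per-participant visual activity levels (e.g., average motion-vector magnitude) are already available; the greedy correlation-based matching is an illustration, not the chapter's exact method.

```python
# Associate diarized speakers with video streams by correlating speaking activity
# with per-participant visual activity over time.
import numpy as np

def associate_speakers_to_videos(speaking, visual_activity):
    """speaking: (n_speakers, T) binary array from diarization.
    visual_activity: (n_participants, T) activity levels.
    Returns a speaker -> participant assignment by greedy maximum correlation."""
    n_s = speaking.shape[0]
    n_p = visual_activity.shape[0]
    corr = np.zeros((n_s, n_p))
    for i in range(n_s):
        for j in range(n_p):
            corr[i, j] = np.corrcoef(speaking[i], visual_activity[j])[0, 1]
    assignment, used = {}, set()
    for i in np.argsort(-corr.max(axis=1)):       # most confident speakers first
        j = int(np.argmax([c if k not in used else -np.inf
                           for k, c in enumerate(corr[i])]))
        assignment[int(i)] = j
        used.add(j)
    return assignment

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    visual = rng.random((4, 600))
    speaking = (visual + 0.3 * rng.random((4, 600)) > 0.8).astype(float)  # toy coupling
    print(associate_speakers_to_videos(speaking, visual))
```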
NASA Technical Reports Server (NTRS)
Krauzlis, R. J.; Stone, L. S.
1999-01-01
The two components of voluntary tracking eye-movements in primates, pursuit and saccades, are generally viewed as relatively independent oculomotor subsystems that move the eyes in different ways using independent visual information. Although saccades have long been known to be guided by visual processes related to perception and cognition, only recently have psychophysical and physiological studies provided compelling evidence that pursuit is also guided by such higher-order visual processes, rather than by the raw retinal stimulus. Pursuit and saccades also do not appear to be entirely independent anatomical systems, but involve overlapping neural mechanisms that might be important for coordinating these two types of eye movement during the tracking of a selected visual object. Given that the recovery of objects from real-world images is inherently ambiguous, guiding both pursuit and saccades with perception could represent an explicit strategy for ensuring that these two motor actions are driven by a single visual interpretation.
Sedda, Anna; Petito, Sara; Guarino, Maria; Stracciari, Andrea
2017-07-14
Most studies to date show an impairment in recognizing facial displays of disgust in Parkinson disease. A general impairment in disgust processing in patients with Parkinson disease might adversely affect their social interactions, given the relevance of this emotion for human relations. However, despite the importance of faces, disgust is also expressed through other formats of visual stimuli such as sentences and visual images. The aim of our study was to explore disgust processing in a sample of patients affected by Parkinson disease, by means of various tests tackling not only facial recognition but also other formats of visual stimuli through which disgust can be recognized. Our results confirm that patients are impaired in recognizing facial displays of disgust. Further analyses show that patients are also impaired and slower for other facial expressions, with the only exception of happiness. Notably, however, patients with Parkinson disease processed visual images and sentences as controls did. Our findings show a dissociation between different formats of visual stimuli of disgust, suggesting that Parkinson disease is not characterized by a general compromising of disgust processing, as is often suggested. The involvement of the basal ganglia-frontal cortex system might spare some cognitive components of emotional processing, related to memory and culture, at least for disgust. Copyright © 2017 Elsevier B.V. All rights reserved.
Models of Speed Discrimination
NASA Technical Reports Server (NTRS)
1997-01-01
The prime purpose of this project was to investigate various theoretical issues concerning the integration of information across visual space. To date, most research efforts in the study of the visual system have been focused in two almost non-overlapping directions. One research focus has been low-level perception as studied by psychophysics. The other focus has been the study of high-level vision exemplified by the study of object perception. Most of the effort in psychophysics has been devoted to the search for the fundamental "features" of perception. The general idea is that the most peripheral processes of the visual system decompose the input into features that are then used for classification and recognition. The experimental and theoretical focus has been on finding and describing these analyzers that decompose images into useful components. Various models are then compared to physiological measurements performed on neurons in the sensory systems. In the study of higher-level perception, the work has been focused on the representation of objects and on the connections between various physical effects and object perception. In this category we find the perception of 3D structure from a variety of physical measurements including motion, shading and other physical phenomena. With few exceptions, there has been very limited development of theories describing how the visual system might combine the output of the analyzers to form the representation of visual objects. Therefore, the processes underlying the integration of information over space represent critical aspects of the visual system. Understanding these processes will have implications for our expectations about the underlying physiological mechanisms, as well as for our models of the internal representation of visual percepts. In this project, we explored several mechanisms related to spatial summation, attention, and eye movements. The project comprised three components: 1. Modeling visual search for the detection of speed deviation. 2. Perception of moving objects. 3. Exploring the role of eye movements in various visual tasks.
Bender, Stephan; Behringer, Stephanie; Freitag, Christine M; Resch, Franz; Weisbrod, Matthias
2010-12-01
To elucidate the contributions of modality-dependent post-processing in auditory, motor and visual cortical areas to short-term memory. We compared late negative waves (N700) during the post-processing of single lateralized stimuli which were separated by long intertrial intervals across the auditory, motor and visual modalities. Tasks either required or competed with attention to post-processing of preceding events, i.e. active short-term memory maintenance. N700 indicated that cortical post-processing exceeded short movements as well as short auditory or visual stimuli for over half a second without intentional short-term memory maintenance. Modality-specific topographies pointed towards sensory (respectively motor) generators with comparable time-courses across the different modalities. Lateralization and amplitude of auditory/motor/visual N700 were enhanced by active short-term memory maintenance compared to attention to current perceptions or passive stimulation. The memory-related N700 increase followed the characteristic time-course and modality-specific topography of the N700 without intentional memory-maintenance. Memory-maintenance-related lateralized negative potentials may be related to a less lateralised modality-dependent post-processing N700 component which occurs also without intentional memory maintenance (automatic memory trace or effortless attraction of attention). Encoding to short-term memory may involve controlled attention to modality-dependent post-processing. Similar short-term memory processes may exist in the auditory, motor and visual systems. Copyright © 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Jiang, Yunpeng; Wu, Xia; Saab, Rami; Xiao, Yi; Gao, Xiaorong
2018-05-01
Emotionally affective stimuli have priority in our visual processing even in the absence of conscious processing. However, the influence of unconscious emotional stimuli on our attentional resources remains unclear. Using the continuous flash suppression (CFS) paradigm, we concurrently recorded and analyzed visual event-related potential (ERP) components evoked by the images of suppressed fearful and neutral faces, and the steady-state visual evoked potential (SSVEP) elicited by dynamic Mondrian pictures. Fearful faces, relative to neutral faces, elicited larger late ERP components on parietal electrodes, indicating emotional expression processing without consciousness. More importantly, the presentation of a suppressed fearful face in the CFS resulted in a significantly greater decrease in SSVEP amplitude which started about 1-1.2 s after the face images first appeared. This suggests that the time course of the attentional bias occurs at about 1 s after the appearance of the fearful face and demonstrates that unconscious fearful faces may influence attentional resource allocation. Moreover, we proposed a new method that could eliminate the interaction of ERPs and SSVEPs when recorded concurrently. Copyright © 2018 Elsevier Ltd. All rights reserved.
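A minimal sketch of reading out SSVEP amplitude at the Mondrian tagging frequency from EEG epochs is given below; it is a generic frequency-domain illustration, not the authors' proposed method for separating concurrent ERPs and SSVEPs.

```python
# Estimate SSVEP amplitude at a known tagging frequency from EEG epochs
# shaped (n_trials, n_samples) at sampling rate sfreq.
import numpy as np

def ssvep_amplitude(epochs, sfreq, tag_freq):
    """Average the trials first (favoring the phase-locked SSVEP over noise),
    then read the amplitude spectrum at the tagging frequency."""
    evoked = epochs.mean(axis=0)
    spectrum = np.abs(np.fft.rfft(evoked)) / evoked.size
    freqs = np.fft.rfftfreq(evoked.size, d=1.0 / sfreq)
    return spectrum[np.argmin(np.abs(freqs - tag_freq))]

if __name__ == "__main__":
    sfreq = 250.0
    t = np.arange(0, 2, 1 / sfreq)
    trials = np.array([np.sin(2 * np.pi * 12 * t) + np.random.randn(t.size)
                       for _ in range(30)])
    print(ssvep_amplitude(trials, sfreq, 12.0))   # close to 0.5 for a unit-amplitude sine
```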
Affective ERP Processing in a Visual Oddball Task: Arousal, Valence, and Gender
Rozenkrants, Bella; Polich, John
2008-01-01
Objective To assess affective event-related brain potentials (ERPs) using visual pictures that were highly distinct on arousal level/valence category ratings and a response task. Methods Images from the International Affective Pictures System (IAPS) were selected to obtain distinct affective arousal (low, high) and valence (negative, positive) rating levels. The pictures were used as target stimuli in an oddball paradigm, with a visual pattern as the standard stimulus. Participants were instructed to press a button whenever a picture occurred and to ignore the standard. Task performance and response time did not differ across conditions. Results High-arousal compared to low-arousal stimuli produced larger amplitudes for the N2, P3, early slow wave, and late slow wave components. Valence amplitude effects were weak overall and originated primarily from the later waveform components and interactions with electrode position. Gender differences were negligible. Conclusion The findings suggest that arousal level is the primary determinant of affective oddball processing, and valence minimally influences ERP amplitude. Significance Affective processing engages selective attentional mechanisms that are primarily sensitive to the arousal properties of emotional stimuli. The application and nature of task demands are important considerations for interpreting these effects. PMID:18783987
Visualization and Measurement of Multiple Components of the Autophagy Flux.
Evans, Tracey; Button, Robert; Anichtchik, Oleg; Luo, Shouqing
2018-06-24
Autophagy is an intracellular degradation process that mediates the clearance of cytoplasmic components. As well as being an important function for cellular homeostasis, autophagy also promotes the removal of aberrant protein accumulations, such as those seen in conditions like neurodegeneration. The dynamic nature of autophagy requires precise methods to examine the process at multiple stages. The protocols described herein enable the dissection of the complete autophagy process (the "autophagy flux"). These allow for the elucidation of which stages of autophagy may be altered in response to various diseases and treatments.
Semantics of the visual environment encoded in parahippocampal cortex
Bonner, Michael F.; Price, Amy Rose; Peelle, Jonathan E.; Grossman, Murray
2016-01-01
Semantic representations capture the statistics of experience and store this information in memory. A fundamental component of this memory system is knowledge of the visual environment, including knowledge of objects and their associations. Visual semantic information underlies a range of behaviors, from perceptual categorization to cognitive processes such as language and reasoning. Here we examine the neuroanatomic system that encodes visual semantics. Across three experiments, we found converging evidence indicating that knowledge of verbally mediated visual concepts relies on information encoded in a region of the ventral-medial temporal lobe centered on parahippocampal cortex. In an fMRI study, this region was strongly engaged by the processing of concepts relying on visual knowledge but not by concepts relying on other sensory modalities. In a study of patients with the semantic variant of primary progressive aphasia (semantic dementia), atrophy that encompassed this region was associated with a specific impairment in verbally mediated visual semantic knowledge. Finally, in a structural study of healthy adults from the fMRI experiment, gray matter density in this region related to individual variability in the processing of visual concepts. The anatomic location of these findings aligns with recent work linking the ventral-medial temporal lobe with high-level visual representation, contextual associations, and reasoning through imagination. Together this work suggests a critical role for parahippocampal cortex in linking the visual environment with knowledge systems in the human brain. PMID:26679216
Semantics of the Visual Environment Encoded in Parahippocampal Cortex.
Bonner, Michael F; Price, Amy Rose; Peelle, Jonathan E; Grossman, Murray
2016-03-01
Semantic representations capture the statistics of experience and store this information in memory. A fundamental component of this memory system is knowledge of the visual environment, including knowledge of objects and their associations. Visual semantic information underlies a range of behaviors, from perceptual categorization to cognitive processes such as language and reasoning. Here we examine the neuroanatomic system that encodes visual semantics. Across three experiments, we found converging evidence indicating that knowledge of verbally mediated visual concepts relies on information encoded in a region of the ventral-medial temporal lobe centered on parahippocampal cortex. In an fMRI study, this region was strongly engaged by the processing of concepts relying on visual knowledge but not by concepts relying on other sensory modalities. In a study of patients with the semantic variant of primary progressive aphasia (semantic dementia), atrophy that encompassed this region was associated with a specific impairment in verbally mediated visual semantic knowledge. Finally, in a structural study of healthy adults from the fMRI experiment, gray matter density in this region related to individual variability in the processing of visual concepts. The anatomic location of these findings aligns with recent work linking the ventral-medial temporal lobe with high-level visual representation, contextual associations, and reasoning through imagination. Together, this work suggests a critical role for parahippocampal cortex in linking the visual environment with knowledge systems in the human brain.
Alvarez, George A.; Cavanagh, Patrick
2014-01-01
It is much easier to divide attention across the left and right visual hemifields than within the same visual hemifield. Here we investigate whether this benefit of dividing attention across separate visual fields is evident at early cortical processing stages. We measured the steady-state visual evoked potential, an oscillatory response of the visual cortex elicited by flickering stimuli, of moving targets and distractors while human observers performed a tracking task. The amplitude of responses at the target frequencies was larger than that of the distractor frequencies when participants tracked two targets in separate hemifields, indicating that attention can modulate early visual processing when it is divided across hemifields. However, these attentional modulations disappeared when both targets were tracked within the same hemifield. These effects were not due to differences in task performance, because accuracy was matched across the tracking conditions by adjusting target speed (with control conditions ruling out effects due to speed alone). To investigate later processing stages, we examined the P3 component over central-parietal scalp sites that was elicited by the test probe at the end of the trial. The P3 amplitude was larger for probes on targets than on distractors, regardless of whether attention was divided across or within a hemifield, indicating that these higher-level processes were not constrained by visual hemifield. These results suggest that modulating early processing stages enables more efficient target tracking, and that within-hemifield competition limits the ability to modulate multiple target representations within the hemifield maps of the early visual cortex. PMID:25164651
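The frequency-tagging logic described above (comparing response amplitude at the target versus distractor flicker frequencies) can be illustrated with a minimal sketch. The Python snippet below, with a hypothetical sampling rate, epoch length, and tag frequencies, estimates the amplitude of the windowed Fourier component at each tagged frequency; it illustrates the general SSVEP measure, not the authors' analysis pipeline.

```python
import numpy as np

def ssvep_amplitude(eeg, fs, freq):
    """Amplitude of the SSVEP component at a tagged flicker frequency.

    eeg  : 1-D array, single-electrode epoch
    fs   : sampling rate in Hz
    freq : flicker frequency of interest in Hz
    """
    n = len(eeg)
    window = np.hanning(n)
    spectrum = np.fft.rfft(eeg * window)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # scale so that a pure sinusoid of amplitude A comes out as roughly A
    amps = 2.0 * np.abs(spectrum) / window.sum()
    return amps[np.argmin(np.abs(freqs - freq))]

# Hypothetical 4-s epoch at 500 Hz with a 12 Hz target tag and a 15 Hz
# distractor tag buried in broadband noise.
rng = np.random.default_rng(0)
fs, dur = 500, 4.0
t = np.arange(0, dur, 1.0 / fs)
epoch = (1.5 * np.sin(2 * np.pi * 12 * t)    # attended target frequency
         + 0.8 * np.sin(2 * np.pi * 15 * t)  # ignored distractor frequency
         + rng.normal(size=t.size))          # background EEG noise
print(ssvep_amplitude(epoch, fs, 12), ssvep_amplitude(epoch, fs, 15))
```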
Neural oscillatory deficits in schizophrenia predict behavioral and neurocognitive impairments
Martínez, Antígona; Gaspar, Pablo A.; Hillyard, Steven A.; Bickel, Stephan; Lakatos, Peter; Dias, Elisa C.; Javitt, Daniel C.
2015-01-01
Paying attention to visual stimuli is typically accompanied by event-related desynchronizations (ERD) of ongoing alpha (7–14 Hz) activity in visual cortex. The present study used time-frequency based analyses to investigate the role of impaired alpha ERD in visual processing deficits in schizophrenia (Sz). Subjects viewed sinusoidal gratings of high (HSF) and low (LSF) spatial frequency (SF) designed to test functioning of the parvo- vs. magnocellular pathways, respectively. Patients with Sz and healthy controls paid attention selectively to either the LSF or HSF gratings which were presented in random order. Event-related brain potentials (ERPs) were recorded to all stimuli. As in our previous study, it was found that Sz patients were selectively impaired at detecting LSF target stimuli and that ERP amplitudes to LSF stimuli were diminished, both for the early sensory-evoked components and for the attend minus unattend difference component (the Selection Negativity), which is generally regarded as a specific index of feature-selective attention. In the time-frequency domain, the differential ERP deficits to LSF stimuli were echoed in a virtually absent theta-band phase locked response to both unattended and attended LSF stimuli (along with relatively intact theta-band activity for HSF stimuli). In contrast to the theta-band evoked responses which were tightly stimulus locked, stimulus-induced desynchronizations of ongoing alpha activity were not tightly stimulus locked and were apparent only in induced power analyses. Sz patients were significantly impaired in the attention-related modulation of ongoing alpha activity for both HSF and LSF stimuli. These deficits correlated with patients’ behavioral deficits in visual information processing as well as with visually based neurocognitive deficits. These findings suggest an additional, pathway-independent, mechanism by which deficits in early visual processing contribute to overall cognitive impairment in Sz. PMID:26190988
Fernandez-Ricaud, Luciano; Kourtchenko, Olga; Zackrisson, Martin; Warringer, Jonas; Blomberg, Anders
2016-06-23
Phenomics is a field in functional genomics that records variation in organismal phenotypes in the genetic, epigenetic or environmental context at a massive scale. For microbes, the key phenotype is the growth in population size because it contains information that is directly linked to fitness. Due to technical innovations and extensive automation our capacity to record complex and dynamic microbial growth data is rapidly outpacing our capacity to dissect and visualize this data and extract the fitness components it contains, hampering progress in all fields of microbiology. To automate visualization, analysis and exploration of complex and highly resolved microbial growth data as well as standardized extraction of the fitness components it contains, we developed the software PRECOG (PREsentation and Characterization Of Growth-data). PRECOG allows the user to quality control, interact with and evaluate microbial growth data with ease, speed and accuracy, also in cases of non-standard growth dynamics. Quality indices filter high- from low-quality growth experiments, reducing false positives. The pre-processing filters in PRECOG are computationally inexpensive and yet functionally comparable to more complex neural network procedures. We provide examples where data calibration, project design and feature extraction methodologies have a clear impact on the estimated growth traits, emphasising the need for proper standardization in data analysis. PRECOG is a tool that streamlines growth data pre-processing, phenotypic trait extraction, visualization, distribution and the creation of vast and informative phenomics databases.
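As a rough illustration of the kind of fitness-component extraction such growth curves support (not PRECOG's actual algorithm), the sketch below estimates a lag time, a maximum exponential growth rate, and a yield from a single optical-density time series via a sliding log-linear fit; the example curve, window size, and function names are hypothetical.

```python
import numpy as np

def growth_traits(t, od, window=5):
    """Rough lag, maximum growth rate, and yield from one growth curve.

    t      : time points in hours
    od     : background-corrected optical density (population-size proxy)
    window : number of points in each sliding log-linear fit
    """
    log_od = np.log(np.clip(od, 1e-6, None))
    rates, mids = [], []
    for i in range(len(t) - window + 1):
        # slope of log(OD) over the window = local exponential growth rate
        slope = np.polyfit(t[i:i + window], log_od[i:i + window], 1)[0]
        rates.append(slope)
        mids.append(i + window // 2)
    rates = np.array(rates)
    k = int(np.argmax(rates))
    max_rate = rates[k]
    # lag: where the tangent at the fastest-growing point meets the start level
    lag = t[mids[k]] - (log_od[mids[k]] - log_od[0]) / max_rate
    return max(lag, 0.0), max_rate, od.max() - od[0]

# Hypothetical curve: slow start, exponential phase, saturation near OD 1.0
t = np.linspace(0, 24, 97)
od = 0.05 + 0.95 / (1 + np.exp(-0.6 * (t - 8)))
print(growth_traits(t, od))
```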
Romei, Vincenzo; Thut, Gregor; Mok, Robert M; Schyns, Philippe G; Driver, Jon
2012-03-01
Although oscillatory activity in the alpha band was traditionally associated with lack of alertness, more recent work has linked it to specific cognitive functions, including visual attention. The emerging method of rhythmic transcranial magnetic stimulation (TMS) allows causal interventional tests for the online impact on performance of TMS administered in short bursts at a particular frequency. TMS bursts at 10 Hz have recently been shown to have an impact on spatial visual attention, but any role in featural attention remains unclear. Here we used rhythmic TMS at 10 Hz to assess the impact on attending to global or local components of a hierarchical Navon-like stimulus (D. Navon (1977) Forest before trees: The precedence of global features in visual perception. Cognit. Psychol., 9, 353), in a paradigm recently used with TMS at other frequencies (V. Romei, J. Driver, P.G. Schyns & G. Thut. (2011) Rhythmic TMS over parietal cortex links distinct brain frequencies to global versus local visual processing. Curr. Biol., 2, 334-337). In separate groups, left or right posterior parietal sites were stimulated at 10 Hz just before presentation of the hierarchical stimulus. Participants had to identify either the local or global component in separate blocks. Right parietal 10 Hz stimulation (vs. sham) significantly impaired global processing without affecting local processing, while left parietal 10 Hz stimulation vs. sham impaired local processing with a minor trend to enhance global processing. These 10 Hz outcomes differed significantly from stimulation at other frequencies (i.e. 5 or 20 Hz) over the same site in other recent work with the same paradigm. These dissociations confirm differential roles of the two hemispheres in local vs. global processing, and reveal a frequency-specific role for stimulation in the alpha band for regulating feature-based visual attention. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Cognitive Processes that Underlie Mathematical Precociousness in Young Children
ERIC Educational Resources Information Center
Swanson, H. Lee
2006-01-01
The working memory (WM) processes that underlie young children's (ages 6-8 years) mathematical precociousness were examined. A battery of tests that assessed components of WM (phonological loop, visual-spatial sketchpad, and central executive), naming speed, random generation, and fluency was administered to mathematically precocious and…
Sex-Linked Characteristics of Brain Functioning: Why Jimmy Reads Differently.
ERIC Educational Resources Information Center
Helfeldt, John P.
1983-01-01
Presents evidence to support the premise that boys reflect a predilection to process information visually, while girls reflect a preference to process information auditorally. Cautions against relying on isolated components such as hemispheric dominance or laterality during the identification and correction of reading problems. (FL)
Oscillatory frontal theta responses are increased upon bisensory stimulation.
Sakowitz, O W; Schürmann, M; Başar, E
2000-05-01
Our aim was to investigate the functional correlation of oscillatory EEG components with the interaction of sensory modalities following simultaneous audio-visual stimulation. In an experimental study (15 subjects), we compared auditory evoked potentials (AEPs) and visual evoked potentials (VEPs) to bimodal evoked potentials (BEPs; simultaneous auditory and visual stimulation). BEPs were assumed to be brain responses to complex stimuli and a marker of intermodal associative functioning. Frequency-domain analysis of these EPs showed marked theta-range components in response to bimodal stimulation. These theta components could not be explained by linear addition of the unimodal responses in the time domain. Topographically, the increased theta response was markedly frontal, in proximity to multimodal association cortices. Methodologically, we aim to demonstrate that, even if various behavioral correlates of brain oscillations exist, common patterns can be extracted by means of a systems-theoretical approach. Serving as an example of functionally relevant brain oscillations, theta responses could be interpreted as an indicator of associative information processing.
Domain specificity versus expertise: factors influencing distinct processing of faces.
Carmel, David; Bentin, Shlomo
2002-02-01
To explore face specificity in visual processing, we compared the role of task-associated strategies and expertise on the N170 event-related potential (ERP) component elicited by human faces with the ERPs elicited by cars, birds, items of furniture, and ape faces. In Experiment 1, participants performed a car monitoring task and an animacy decision task. In Experiment 2, participants monitored human faces while faces of apes were the distractors. Faces elicited an equally conspicuous N170, significantly larger than the ERPs elicited by non-face categories, regardless of whether they were ignored or had an equal status with other categories (Experiment 1), or were the targets (Experiment 2). In contrast, the negative component elicited by cars during the same time range was larger if they were targets than if they were not. Furthermore, unlike the posterior-temporal distribution of the N170, the negative component elicited by cars and its modulation by task were more conspicuous at occipital sites. Faces of apes elicited an N170 that was similar in amplitude to that elicited by the human face targets, albeit peaking 10 ms later. As our participants were not ape experts, this pattern indicates that the N170 is face-specific, but not species-specific, i.e. it is elicited by particular face features regardless of expertise. Overall, these results demonstrate the domain specificity of the visual mechanism implicated in processing faces, a mechanism which is not influenced by either task or expertise. The processing of other objects is probably accomplished by a more general visual processor, which is sensitive to strategic manipulations and attention.
46 CFR 160.176-13 - Approval Tests.
Code of Federal Regulations, 2011 CFR
2011-10-01
... thread count must be at least 400 N (90 lb.). (v) [Reserved] (w) Visual examination. One complete... check the quality of incoming lifejacket components and the production process. Test samples must come...
46 CFR 160.176-13 - Approval Tests.
Code of Federal Regulations, 2013 CFR
2013-10-01
... thread count must be at least 400 N (90 lb.). (v) [Reserved] (w) Visual examination. One complete... check the quality of incoming lifejacket components and the production process. Test samples must come...
46 CFR 160.176-13 - Approval Tests.
Code of Federal Regulations, 2012 CFR
2012-10-01
... thread count must be at least 400 N (90 lb.). (v) [Reserved] (w) Visual examination. One complete... check the quality of incoming lifejacket components and the production process. Test samples must come...
46 CFR 160.176-13 - Approval Tests.
Code of Federal Regulations, 2014 CFR
2014-10-01
... thread count must be at least 400 N (90 lb.). (v) [Reserved] (w) Visual examination. One complete... check the quality of incoming lifejacket components and the production process. Test samples must come...
Proline and COMT Status Affect Visual Connectivity in Children with 22q11.2 Deletion Syndrome
Magnée, Maurice J. C. M.; Lamme, Victor A. F.; de Sain-van der Velden, Monique G. M.; Vorstman, Jacob A. S.; Kemner, Chantal
2011-01-01
Background: Individuals with the 22q11.2 deletion syndrome (22q11DS) are at increased risk for schizophrenia and Autism Spectrum Disorders (ASDs). Given the prevalence of visual processing deficits in these three disorders, a causal relationship between genes in the deleted region of chromosome 22 and visual processing is likely. Therefore, 22q11DS may represent a unique model to understand the neurobiology of visual processing deficits related with ASD and psychosis. Methodology: We measured Event-Related Potentials (ERPs) during a texture segregation task in 58 children with 22q11DS and 100 age-matched controls. The C1 component was used to index afferent activity of visual cortex area V1; the texture negativity wave provided a measure for the integrity of recurrent connections in the visual cortical system. COMT genotype and plasma proline levels were assessed in 22q11DS individuals. Principal Findings: Children with 22q11DS showed enhanced feedforward activity starting from 70 ms after visual presentation. ERP activity related to visual feedback activity was reduced in the 22q11DS group, which was seen as less texture negativity around 150 ms post presentation. Within the 22q11DS group we further demonstrated an association between high plasma proline levels and aberrant feedback/feedforward ratios, which was moderated by the COMT 158 genotype. Conclusions: These findings confirm the presence of early visual processing deficits in 22q11DS. We discuss these in terms of dysfunctional synaptic plasticity in early visual processing areas, possibly associated with deviant dopaminergic and glutamatergic transmission. As such, our findings may serve as a promising biomarker related to the development of schizophrenia among 22q11DS individuals. PMID:21998713
Sun, Peng; Zhong, Liyun; Luo, Chunshu; Niu, Wenhu; Lu, Xiaoxu
2015-07-16
To perform visual measurement of the evaporation process of a sessile droplet, a dual-channel simultaneous phase-shifting interferometry (DCSPSI) method is proposed. Based on polarization components that simultaneously generate a pair of orthogonal interferograms with a phase shift of π/2, the real-time phase of a dynamic process can be retrieved with a two-step phase-shifting algorithm. Using this DCSPSI system, the transient mass (TM) during the evaporation of sessile droplets with different initial masses was obtained by measuring the real-time 3D shape of the droplet. Moreover, the mass flux density (MFD) of the evaporating droplet and its regional distribution were also calculated and analyzed. The experimental results show that the proposed DCSPSI supplies a visual, accurate, noncontact, nondestructive, global tool for real-time multi-parameter measurement of droplet evaporation.
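A minimal sketch of the arithmetic behind two-step phase shifting with a π/2 shift is given below, assuming the background intensity A is known or estimated separately and an intensity model i1 = A + B·cos(φ), i2 = A − B·sin(φ); it illustrates the phase-retrieval step only and is not the authors' DCSPSI implementation. The test field, array sizes, and names are hypothetical.

```python
import numpy as np

def phase_from_two_steps(i1, i2, background):
    """Wrapped phase from two interferograms separated by a pi/2 phase shift.

    Intensity model: i1 = A + B*cos(phi), i2 = A + B*cos(phi + pi/2)
                                             = A - B*sin(phi),
    with A the known (or separately estimated) background intensity.
    """
    return np.arctan2(background - i2, i1 - background)

# Hypothetical 256x256 test field: a smooth droplet-like phase bump
x, y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
phi_true = 6.0 * np.exp(-(x**2 + y**2) / 0.2)
A, B = 1.0, 0.5
i1 = A + B * np.cos(phi_true)
i2 = A + B * np.cos(phi_true + np.pi / 2)
phi_wrapped = phase_from_two_steps(i1, i2, A)
# for a smooth field, unwrapping along both axes recovers the phase
phi_rec = np.unwrap(np.unwrap(phi_wrapped, axis=0), axis=1)
err = np.abs((phi_rec - phi_rec[0, 0]) - (phi_true - phi_true[0, 0])).max()
print("max reconstruction error (rad):", err)
```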
McElree, Brian; Carrasco, Marisa
2012-01-01
Feature and conjunction searches have been argued to delineate parallel and serial operations in visual processing. The authors evaluated this claim by examining the temporal dynamics of the detection of features and conjunctions. The 1st experiment used a reaction time (RT) task to replicate standard mean RT patterns and to examine the shapes of the RT distributions. The 2nd experiment used the response-signal speed–accuracy trade-off (SAT) procedure to measure discrimination (asymptotic detection accuracy) and detection speed (processing dynamics). Set size affected discrimination in both feature and conjunction searches but affected detection speed only in the latter. Fits of models to the SAT data that included a serial component overpredicted the magnitude of the observed dynamics differences. The authors concluded that both features and conjunctions are detected in parallel. Implications for the role of attention in visual processing are discussed. PMID:10641310
Shichinohe, Natsuko; Akao, Teppei; Kurkin, Sergei; Fukushima, Junko; Kaneko, Chris R S; Fukushima, Kikuro
2009-06-11
Cortical motor areas are thought to contribute "higher-order processing," but what that processing might include is unknown. Previous studies of the smooth pursuit-related discharge of supplementary eye field (SEF) neurons have not distinguished activity associated with the preparation for pursuit from discharge related to processing or memory of the target motion signals. Using a memory-based task designed to separate these components, we show that the SEF contains signals coding retinal image-slip-velocity, memory, and assessment of visual motion direction, the decision of whether to pursue, and the preparation for pursuit eye movements. Bilateral muscimol injection into SEF resulted in directional errors in smooth pursuit, errors of whether to pursue, and impairment of initial correct eye movements. These results suggest an important role for the SEF in memory and assessment of visual motion direction and the programming of appropriate pursuit eye movements.
Donohue, Sarah E; Todisco, Alexandra E; Woldorff, Marty G
2013-04-01
Neuroimaging work on multisensory conflict suggests that the relevant modality receives enhanced processing in the face of incongruency. However, the degree of stimulus processing in the irrelevant modality and the temporal cascade of the attentional modulations in either the relevant or irrelevant modalities are unknown. Here, we employed an audiovisual conflict paradigm with a sensory probe in the task-irrelevant modality (vision) to gauge the attentional allocation to that modality. ERPs were recorded as participants attended to and discriminated spoken auditory letters while ignoring simultaneous bilateral visual letter stimuli that were either fully congruent, fully incongruent, or partially incongruent (one side incongruent, one congruent) with the auditory stimulation. Half of the audiovisual letter stimuli were followed 500-700 msec later by a bilateral visual probe stimulus. As expected, ERPs to the audiovisual stimuli showed an incongruency ERP effect (fully incongruent versus fully congruent) of an enhanced, centrally distributed, negative-polarity wave starting ∼250 msec. More critically here, the sensory ERP components to the visual probes were larger when they followed fully incongruent versus fully congruent multisensory stimuli, with these enhancements greatest on fully incongruent trials with the slowest RTs. In addition, on the slowest-response partially incongruent trials, the P2 sensory component to the visual probes was larger contralateral to the preceding incongruent visual stimulus. These data suggest that, in response to conflicting multisensory stimulus input, the initial cognitive effect is a capture of attention by the incongruent irrelevant-modality input, pulling neural processing resources toward that modality, resulting in rapid enhancement, rather than rapid suppression, of that input.
Humphries, Colin; Desai, Rutvik H.; Seidenberg, Mark S.; Osmon, David C.; Stengel, Ben C.; Binder, Jeffrey R.
2013-01-01
Although the left posterior occipitotemporal sulcus (pOTS) has been called a visual word form area, debate persists over the selectivity of this region for reading relative to general nonorthographic visual object processing. We used high-resolution functional magnetic resonance imaging to study left pOTS responses to combinatorial orthographic and object shape information. Participants performed naming and visual discrimination tasks designed to encourage or suppress phonological encoding. During the naming task, all participants showed subregions within left pOTS that were more sensitive to combinatorial orthographic information than to object information. This difference disappeared, however, when phonological processing demands were removed. Responses were stronger to pseudowords than to words, but this effect also disappeared when phonological processing demands were removed. Subregions within the left pOTS are preferentially activated when visual input must be mapped to a phonological representation (i.e., a name) and particularly when component parts of the visual input must be mapped to corresponding phonological elements (consonant or vowel phonemes). Results indicate a specialized role for subregions within the left pOTS in the isomorphic mapping of familiar combinatorial visual patterns to phonological forms. This process distinguishes reading from picture naming and accounts for a wide range of previously reported stimulus and task effects in left pOTS. PMID:22505661
Trillenberg, Peter; Sprenger, Andreas; Talamo, Silke; Herold, Kirsten; Helmchen, Christoph; Verleger, Rolf; Lencer, Rebekka
2017-04-01
Despite many reports on visual processing deficits in psychotic disorders, studies are needed on the integration of visual and non-visual components of eye movement control to improve the understanding of sensorimotor information processing in these disorders. Non-visual inputs to eye movement control include prediction of future target velocity from extrapolation of past visual target movement and anticipation of future target movements. It is unclear whether non-visual input is impaired in patients with schizophrenia. We recorded smooth pursuit eye movements in 21 patients with schizophrenia spectrum disorder, 22 patients with bipolar disorder, and 24 controls. In a foveo-fugal ramp task, the target was either continuously visible or was blanked during movement. We determined peak gain (measuring overall performance), initial eye acceleration (measuring visually driven pursuit), deceleration after target extinction (measuring prediction), eye velocity drifts before onset of target visibility (measuring anticipation), and residual gain during blanking intervals (measuring anticipation and prediction). In both patient groups, initial eye acceleration was decreased and the ability to adjust eye acceleration to increasing target acceleration was impaired. In contrast, neither deceleration nor eye drift velocity was reduced in patients, implying unimpaired non-visual contributions to pursuit drive. Disturbances of eye movement control in psychotic disorders appear to be a consequence of deficits in sensorimotor transformation rather than a pure failure in adding cognitive contributions to pursuit drive in higher-order cortical circuits. More generally, this deficit might reflect a fundamental imbalance between processing external input and acting according to internal preferences.
Problem Solving in Electricity.
ERIC Educational Resources Information Center
Caillot, Michel; Chalouhi, Elias
Two studies were conducted to describe how students solve direct current (D-C) circuit problems. It was hypothesized that problem solving in the electricity domain depends largely on good visual processing of the circuit diagram and that this processing depends on the ability to recognize when two or more electrical components are in series or…
Postural and Cortical Responses Following Visual Occlusion in Adults with and without ASD
ERIC Educational Resources Information Center
Goh, Kwang Leng; Morris, Susan; Parsons, Richard; Ring, Alexander; Tan, Tele
2018-01-01
Autism is associated with differences in sensory processing and motor coordination. Evidence from electroencephalography suggests individual perturbation evoked response (PER) components represent specific aspects of postural disturbance processing; P1 reflects the detection and N1 reflects the evaluation of postural instability. Despite the…
Stenneken, Prisca; Egetemeir, Johanna; Schulte-Körne, Gerd; Müller, Hermann J; Schneider, Werner X; Finke, Kathrin
2011-10-01
The cognitive causes as well as the neurological and genetic basis of developmental dyslexia, a complex disorder of written language acquisition, are intensely discussed with regard to multiple-deficit models. Accumulating evidence has revealed dyslexics' impairments in a variety of tasks requiring visual attention. The heterogeneity of these experimental results, however, points to the need for measures that are sufficiently sensitive to differentiate between impaired and preserved attentional components within a unified framework. This first parameter-based group study of attentional components in developmental dyslexia addresses potentially altered attentional components that have recently been associated with parietal dysfunctions in dyslexia. We aimed to isolate the general attentional resources that might underlie reduced span performance, i.e., either a deficient working memory storage capacity, or a slowing in visual perceptual processing speed, or both. Furthermore, by analysing attentional selectivity in dyslexia, we addressed a potential lateralized abnormality of visual attention, i.e., a previously suggested rightward spatial deviation compared to normal readers. We investigated a group of high-achieving young adults with persisting dyslexia and matched normal readers in an experimental whole report and a partial report of briefly presented letter arrays. Possible deviations in the parametric values of the dyslexic compared to the control group were taken as markers for the underlying deficit. The dyslexic group showed a striking reduction in perceptual processing speed (by 26% compared to controls) while their working memory storage capacity was in the normal range. In addition, a spatial deviation of attentional weighting compared to the control group was confirmed in dyslexic readers, which was larger in participants with a more severe dyslexic disorder. In general, the present study supports the relevance of perceptual processing speed in disorders of written language acquisition and demonstrates that the parametric assessment provides a suitable tool for specifying the underlying deficit within a unitary framework. Copyright © 2011 Elsevier Ltd. All rights reserved.
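For readers unfamiliar with parameter-based whole-report assessment, the following simulation sketches the logic of a simplified race model in which a processing-rate parameter, a perceptual threshold, and a storage-capacity limit jointly determine report scores as a function of exposure duration. The specific model form, parameter values, and function names are illustrative assumptions, not the assessment procedure used in the study above.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_whole_report(C, t0, K, exposures, n_letters=6, n_trials=2000):
    """Mean number of letters reported per exposure duration (in seconds).

    Simplified race model: each letter finishes encoding after an exponential
    waiting time with rate C / n_letters (total rate C, items per second),
    encoding only starts after the threshold t0, and at most K items fit
    into visual working memory.
    """
    scores = []
    for t in exposures:
        finish = t0 + rng.exponential(n_letters / C, size=(n_trials, n_letters))
        encoded = (finish <= t).sum(axis=1)           # letters encoded in time
        scores.append(np.minimum(encoded, K).mean())  # capacity-limited report
    return np.array(scores)

# Hypothetical observer: C = 25 items/s, t0 = 20 ms, K = 4 items
exposures = np.array([0.02, 0.05, 0.1, 0.2, 0.5])
print(simulate_whole_report(25.0, 0.02, 4, exposures))
```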
Vadnais, Sarah A; Kibby, Michelle Y; Jagger-Rickels, Audreyana C
2018-01-01
We identified statistical predictors of four processing speed (PS) components in a sample of 151 children with and without attention-deficit/hyperactivity disorder (ADHD). Performance on perceptual speed was predicted by visual attention/short-term memory, whereas incidental learning/psychomotor speed was predicted by verbal working memory. Rapid naming was predictive of each PS component assessed, and inhibition predicted all but one task, suggesting a shared need to identify/retrieve stimuli rapidly and inhibit incorrect responding across PS components. Hence, we found both shared and unique predictors of perceptual, cognitive, and output speed, suggesting more specific terminology should be used in future research on PS in ADHD.
Rolke, Bettina; Festl, Freya; Seibold, Verena C
2016-11-01
We used ERPs to investigate whether temporal attention interacts with spatial attention and feature-based attention to enhance visual processing. We presented a visual search display containing one singleton stimulus among a set of homogenous distractors. Participants were asked to respond only to target singletons of a particular color and shape that were presented in an attended spatial position. We manipulated temporal attention by presenting a warning signal before each search display and varying the foreperiod (FP) between the warning signal and the search display in a blocked manner. We observed distinctive ERP effects of both spatial and temporal attention. The amplitudes for the N2pc, SPCN, and P3 were enhanced by spatial attention indicating a processing benefit of relevant stimulus features at the attended side. Temporal attention accelerated stimulus processing; this was indexed by an earlier onset of the N2pc component and a reduction in reaction times to targets. Most importantly, temporal attention did not interact with spatial attention or stimulus features to influence visual processing. Taken together, the results suggest that temporal attention fosters visual perceptual processing in a visual search task independently from spatial attention and feature-based attention; this provides support for the nonspecific enhancement hypothesis of temporal attention. © 2016 Society for Psychophysiological Research.
Pannebakker, Merel M; Jolicœur, Pierre; van Dam, Wessel O; Band, Guido P H; Ridderinkhof, K Richard; Hommel, Bernhard
2011-09-01
Dual tasks and their associated delays have often been used to examine the boundaries of processing in the brain. We used the dual-task procedure and recorded event-related potentials (ERPs) to investigate how mental rotation of a first stimulus (S1) influences the shifting of visual-spatial attention to a second stimulus (S2). Visual-spatial attention was monitored by using the N2pc component of the ERP. In addition, we examined the sustained posterior contralateral negativity (SPCN) believed to index the retention of information in visual short-term memory. We found modulations of both the N2pc and the SPCN, suggesting that engaging mechanisms of mental rotation impairs the deployment of visual-spatial attention and delays the passage of a representation of S2 into visual short-term memory. Both results suggest interactions between mental rotation and visual-spatial attention in capacity-limited processing mechanisms indicating that response selection is not pivotal in dual-task delays and all three processes are likely to share a common resource like executive control. Copyright © 2011 Elsevier Ltd. All rights reserved.
The relationship of global form and motion detection to reading fluency.
Englund, Julia A; Palomares, Melanie
2012-08-15
Visual motion processing in typical and atypical readers has suggested that aspects of reading and motion processing share a common cortical network rooted in dorsal visual areas. Few studies have examined the relationship between reading performance and visual form processing, which is mediated by ventral cortical areas. We investigated whether reading fluency correlates with coherent motion detection thresholds in typically developing children using random dot kinematograms. As a comparison, we also evaluated the correlation between reading fluency and static form detection thresholds. Results show that both dorsal and ventral visual functions correlated with components of reading fluency, but that they have different developmental characteristics. Motion coherence thresholds correlated with reading rate and accuracy, which both improved with chronological age. Interestingly, when controlling for non-verbal abilities and age, reading accuracy significantly correlated with thresholds for coherent form detection but not coherent motion detection in typically developing children. Dorsal visual functions that mediate motion coherence seem to be related to the maturation of broad cognitive functions including non-verbal abilities and reading fluency. However, ventral visual functions that mediate form coherence seem to be specifically related to accurate reading in typically developing children. Copyright © 2012 Elsevier Ltd. All rights reserved.
Black–white asymmetry in visual perception
Lu, Zhong-Lin; Sperling, George
2012-01-01
With eleven different types of stimuli that exercise a wide gamut of spatial and temporal visual processes, negative perturbations from mean luminance are found to be typically 25% more effective visually than positive perturbations of the same magnitude (range 8–67%). In Experiment 12, the magnitude of the black–white asymmetry is shown to be a saturating function of stimulus contrast. Experiment 13 shows black–white asymmetry primarily involves a nonlinearity in the visual representation of decrements. Black–white asymmetry in early visual processing produces even-harmonic distortion frequencies in all ordinary stimuli and in illusions such as the perceived asymmetry of optically perfect sine wave gratings. In stimuli intended to stimulate exclusively second-order processing in which motion or shape are defined not by luminance differences but by differences in texture contrast, the black–white asymmetry typically generates artifactual luminance (first-order) motion and shape components. Because black–white asymmetry pervades psychophysical and neurophysiological procedures that utilize spatial or temporal variations of luminance, it frequently needs to be considered in the design and evaluation of experiments that involve visual stimuli. Simple procedures to compensate for black–white asymmetry are proposed. PMID:22984221
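As an illustration of the kind of compensation the abstract alludes to, the sketch below rescales luminance decrements by an assumed effectiveness ratio so that increments and decrements of a stimulus are roughly matched in visual effectiveness; the 1.25 factor is taken from the average asymmetry reported above, and the procedure, stimulus, and names are a hypothetical example rather than the authors' published method.

```python
import numpy as np

def compensate_asymmetry(stimulus, mean_luminance, decrement_gain=1.25):
    """Shrink luminance decrements so increments and decrements are roughly
    matched in visual effectiveness.

    Assumes decrements are `decrement_gain` times more effective than
    increments of the same physical magnitude (about 25% on average in the
    abstract above); this is an illustrative compensation only.
    """
    perturbation = stimulus - mean_luminance
    scaled = np.where(perturbation < 0, perturbation / decrement_gain, perturbation)
    return mean_luminance + scaled

# Hypothetical sine grating around a mean luminance of 50 cd/m^2
x = np.linspace(0, 2 * np.pi, 9)
grating = 50 + 10 * np.sin(x)
print(np.round(compensate_asymmetry(grating, 50.0), 2))
```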
Cognitive processes facilitated by contextual cueing: evidence from event-related brain potentials.
Schankin, Andrea; Schubö, Anna
2009-05-01
Finding a target in repeated search displays is faster than finding the same target in novel ones (contextual cueing). It is assumed that the visual context (the arrangement of the distracting objects) is used to guide attention efficiently to the target location. Alternatively, other factors, e.g., facilitation in early visual processing or in response selection, may play a role as well. In a contextual cueing experiment, participants' electrophysiological brain activity was recorded. Participants identified the target faster and more accurately in repeatedly presented displays. In this condition, the N2pc, a component reflecting the allocation of visual-spatial attention, was enhanced, indicating that attention was allocated more efficiently to those targets. However, response-related processes, reflected by the LRP, were also facilitated, indicating that guidance of attention cannot account for the entire contextual cueing benefit.
Visual imagery without visual perception: lessons from blind subjects
NASA Astrophysics Data System (ADS)
Bértolo, Helder
2014-08-01
The question regarding visual imagery and visual perception remains an open issue. Many studies have tried to understand whether the two processes share the same mechanisms or whether they are independent, using different neural substrates. Most research has been directed towards the need for activation of primary visual areas during imagery. Here we review some of the work providing evidence for both claims. It seems that studying visual imagery in blind subjects can be used as a way of answering some of those questions, namely whether it is possible to have visual imagery without visual perception. We present results from the work of our group on visual activation in dreams and its relation to the EEG's spectral components, showing that congenitally blind subjects have visual content in their dreams and are able to draw it; furthermore, their Visual Activation Index is negatively correlated with EEG alpha power. This study supports the hypothesis that it is possible to have visual imagery without visual experience.
Learning to Recognize Patterns: Changes in the Visual Field with Familiarity
NASA Astrophysics Data System (ADS)
Bebko, James M.; Uchikawa, Keiji; Saida, Shinya; Ikeda, Mitsuo
1995-01-01
Two studies were conducted to investigate the changes that take place in the visual information processing of novel stimuli as they become familiar. Japanese writing characters (Hiragana and Kanji) that were unfamiliar to two native English-speaking subjects were presented using a moving window technique to restrict their visual fields. Study time for visual recognition was recorded across repeated sessions, and with varying visual field restrictions. The critical visual field was defined as the size of the visual field beyond which further increases did not improve the speed of recognition performance. In the first study, when the Hiragana patterns were novel, subjects needed to see about half of the entire pattern simultaneously to maintain optimal performance. However, the critical visual field size decreased as familiarity with the patterns increased. These results were replicated in the second study with more complex Kanji characters. In addition, the critical field size decreased as pattern complexity decreased. We propose a three-component model of pattern perception. In the first stage, a representation of the stimulus must be constructed by the subject, and restriction of the visual field interferes dramatically with this component when stimuli are unfamiliar. With increased familiarity, subjects become able to reconstruct a previous representation from very small, unique segments of the pattern, analogous to the informativeness areas hypothesized by Loftus and Mackworth [J. Exp. Psychol., 4 (1978) 565].
Evolution and the origin of the visual retinoid cycle in vertebrates.
Kusakabe, Takehiro G; Takimoto, Noriko; Jin, Minghao; Tsuda, Motoyuki
2009-10-12
Absorption of a photon by visual pigments induces isomerization of 11-cis-retinaldehyde (RAL) chromophore to all-trans-RAL. Since the opsins lacking 11-cis-RAL lose light sensitivity, sustained vision requires continuous regeneration of 11-cis-RAL via the process called 'visual cycle'. Protostomes and vertebrates use essentially different machinery of visual pigment regeneration, and the origin and early evolution of the vertebrate visual cycle is an unsolved mystery. Here we compare visual retinoid cycles between different photoreceptors of vertebrates, including rods, cones and non-visual photoreceptors, as well as between vertebrates and invertebrates. The visual cycle systems in ascidians, the closest living relatives of vertebrates, show an intermediate state between vertebrates and non-chordate invertebrates. The ascidian larva may use retinochrome-like opsin as the major isomerase. The entire process of the visual cycle can occur inside the photoreceptor cells with distinct subcellular compartmentalization, although the visual cycle components are also present in surrounding non-photoreceptor cells. The adult ascidian probably uses RPE65 isomerase, and trans-to-cis isomerization may occur in distinct cellular compartments, which is similar to the vertebrate situation. The complete transition to the sophisticated retinoid cycle of vertebrates may have required acquisition of new genes, such as interphotoreceptor retinoid-binding protein, and functional evolution of the visual cycle genes.
Else, Jane E.; Ellis, Jason; Orme, Elizabeth
2015-01-01
Art is one of life's great joys, whether it is beautiful, ugly, sublime or shocking. Aesthetic responses to visual art involve sensory, cognitive and visceral processes. Neuroimaging studies have yielded a wealth of information regarding aesthetic appreciation and beauty using visual art as stimuli, but few have considered the effect of expertise on visual and visceral responses. To study the time course of visual, cognitive and emotional processes in response to visual art, we investigated the event-related potentials (ERPs) elicited whilst viewing and rating the visceral affect of three categories of visual art. Two groups, artists and non-artists, viewed representational, abstract and indeterminate 20th century art. Early components, particularly the N1, related to attention and effort, and the P2, linked to higher order visual processing, were enhanced for artists when compared to non-artists. This effect was present for all types of art, but further enhanced for abstract art (AA), which was rated as having the lowest visceral affect by the non-artists. The later, slow wave processes (500–1000 ms), associated with arousal and sustained attention, also showed clear differences between the two groups in response to both the type of art and its visceral affect. AA increased arousal and sustained attention in artists, whilst it decreased them in non-artists. These results suggest that aesthetic response to visual art is affected by both expertise and semantic content. PMID:27242497
Simultaneous Visualization of Different Utility Networks for Disaster Management
NASA Astrophysics Data System (ADS)
Semm, S.; Becker, T.; Kolbe, T. H.
2012-07-01
Cartographic visualizations of crises are used to create a Common Operational Picture (COP) and to reinforce Situational Awareness by presenting and representing relevant information. As nearly all crises affect geospatial entities, geo-data representations have to support location-specific decision-making throughout a crisis. Since operators' attention span and working memory are limiting factors in acquiring and interpreting information, the cartographic presentation has to support individuals in coordinating their activities and in handling highly dynamic situations. The Situational Awareness of operators, in conjunction with a COP, is a key aspect of the decision-making process and is essential for reaching appropriate decisions. Utility networks are among the most complex and most needed systems within a city. The visualization of utility infrastructure in crisis situations is addressed in this paper. The paper provides a conceptual approach to simplifying, aggregating, and visualizing multiple utility networks and their components, in order to meet the requirements of the decision-making process and to support Situational Awareness.
Perceptual load influences selective attention across development.
Couperus, Jane W
2011-09-01
Research suggests that visual selective attention develops across childhood. However, there is relatively little understanding of the neurological changes that accompany this development, particularly in the context of adult theories of selective attention, such as N. Lavie's (1995) perceptual load theory of attention. This study examined visual selective attention across development from 7 years of age to adulthood. Specifically, the author examined if changes in processing as a function of selective attention are similarly influenced by perceptual load across development. Participants were asked to complete a task at either low or high perceptual load while processing of an unattended probe stimulus was examined using event related potentials. Similar to adults, children and teens showed reduced processing of the unattended stimulus as perceptual load increased at the P1 visual component. However, although there were no qualitative differences in changes in processing, there were quantitative differences, with shorter P1 latencies in teens and adults compared with children, suggesting increases in the speed of processing across development. In addition, younger children did not need as high a perceptual load to achieve the same difference in performance between low and high perceptual load as adults. Thus, this study demonstrates that although there are developmental changes in visual selective attention, the mechanisms by which visual selective attention is achieved in children may share similarities with adults.
Retention interval affects visual short-term memory encoding.
Bankó, Eva M; Vidnyánszky, Zoltán
2010-03-01
Humans can efficiently store fine-detailed facial emotional information in visual short-term memory for several seconds. However, an unresolved question is whether the same neural mechanisms underlie high-fidelity short-term memory for emotional expressions at different retention intervals. Here we show that retention interval affects the neural processes of short-term memory encoding using a delayed facial emotion discrimination task. The early sensory P100 component of the event-related potentials (ERP) was larger in the 1-s interstimulus interval (ISI) condition than in the 6-s ISI condition, whereas the face-specific N170 component was larger in the longer ISI condition. Furthermore, the memory-related late P3b component of the ERP responses was also modulated by retention interval: it was reduced in the 1-s ISI as compared with the 6-s condition. The present findings cannot be explained based on differences in sensory processing demands or overall task difficulty because there was no difference in the stimulus information and subjects' performance between the two different ISI conditions. These results reveal that encoding processes underlying high-precision short-term memory for facial emotional expressions are modulated depending on whether information has to be stored for one or for several seconds.
Representation of visual symbols in the visual word processing network.
Muayqil, Taim; Davies-Thompson, Jodie; Barton, Jason J S
2015-03-01
Previous studies have shown that word processing involves a predominantly left-sided occipitotemporal network. Words are a form of symbolic representation, in that they are arbitrary perceptual stimuli that represent other objects, actions or concepts. Lesions of parts of the visual word processing network can cause alexia, which can be associated with difficulty processing other types of symbols such as musical notation or road signs. We investigated whether components of the visual word processing network were also activated by other types of symbols. In 16 music-literate subjects, we defined the visual word network using fMRI and examined responses to four symbolic categories: visual words, musical notation, instructive symbols (e.g. traffic signs), and flags and logos. For each category we compared responses not only to scrambled stimuli, but also to similar stimuli that lacked symbolic meaning. The left visual word form area and a homologous right fusiform region responded similarly to all four categories, but equally to both symbolic and non-symbolic equivalents. Greater response to symbolic than non-symbolic stimuli occurred only in the left inferior frontal and middle temporal gyri, but only for words, and in the case of the left inferior frontal gyri, also for musical notation. A whole-brain analysis comparing symbolic versus non-symbolic stimuli revealed a distributed network of inferior temporooccipital and parietal regions that differed for different symbols. The fusiform gyri are involved in processing the form of many symbolic stimuli, but not specifically for stimuli with symbolic content. Selectivity for stimuli with symbolic content only emerges in the visual word network at the level of the middle temporal and inferior frontal gyri, but is specific for words and musical notation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Zhou, Xinlin; Wei, Wei; Zhang, Yiyun; Cui, Jiaxin; Chen, Chuansheng
2015-01-01
Studies have shown that numerosity processing (e.g., comparison of numbers of dots in two dot arrays) is significantly correlated with arithmetic performance. Researchers have attributed this association to the fact that both tasks share magnitude processing. The current investigation tested an alternative hypothesis, which states that visual perceptual ability (as measured by a figure-matching task) can account for the close relation between numerosity processing and arithmetic performance (computational fluency). Four hundred and twenty four third- to fifth-grade children (220 boys and 204 girls, 8.0-11.0 years old; 120 third graders, 146 fourth graders, and 158 fifth graders) were recruited from two schools (one urban and one suburban) in Beijing, China. Six classes were randomly selected from each school, and all students in each selected class participated in the study. All children were given a series of cognitive and mathematical tests, including numerosity comparison, figure matching, forward verbal working memory, visual tracing, non-verbal matrices reasoning, mental rotation, choice reaction time, arithmetic tests and curriculum-based mathematical achievement test. Results showed that figure-matching ability had higher correlations with numerosity processing and computational fluency than did other cognitive factors (e.g., forward verbal working memory, visual tracing, non-verbal matrix reasoning, mental rotation, and choice reaction time). More important, hierarchical multiple regression showed that figure matching ability accounted for the well-established association between numerosity processing and computational fluency. In support of the visual perception hypothesis, the results suggest that visual perceptual ability, rather than magnitude processing, may be the shared component of numerosity processing and arithmetic performance.
Zhou, Xinlin; Wei, Wei; Zhang, Yiyun; Cui, Jiaxin; Chen, Chuansheng
2015-01-01
Studies have shown that numerosity processing (e.g., comparison of numbers of dots in two dot arrays) is significantly correlated with arithmetic performance. Researchers have attributed this association to the fact that both tasks share magnitude processing. The current investigation tested an alternative hypothesis, which states that visual perceptual ability (as measured by a figure-matching task) can account for the close relation between numerosity processing and arithmetic performance (computational fluency). Four hundred and twenty four third- to fifth-grade children (220 boys and 204 girls, 8.0–11.0 years old; 120 third graders, 146 fourth graders, and 158 fifth graders) were recruited from two schools (one urban and one suburban) in Beijing, China. Six classes were randomly selected from each school, and all students in each selected class participated in the study. All children were given a series of cognitive and mathematical tests, including numerosity comparison, figure matching, forward verbal working memory, visual tracing, non-verbal matrices reasoning, mental rotation, choice reaction time, arithmetic tests and curriculum-based mathematical achievement test. Results showed that figure-matching ability had higher correlations with numerosity processing and computational fluency than did other cognitive factors (e.g., forward verbal working memory, visual tracing, non-verbal matrix reasoning, mental rotation, and choice reaction time). More important, hierarchical multiple regression showed that figure matching ability accounted for the well-established association between numerosity processing and computational fluency. In support of the visual perception hypothesis, the results suggest that visual perceptual ability, rather than magnitude processing, may be the shared component of numerosity processing and arithmetic performance. PMID:26441740
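The hierarchical-regression logic described above can be sketched as follows: regress computational fluency on numerosity alone, then add figure matching and check whether the numerosity coefficient shrinks. The data below are synthetic and the variable names hypothetical; this only illustrates the analysis structure, not the study's dataset or exact model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 424  # sample size reported in the abstract

# Synthetic data in which figure matching drives both numerosity and fluency
figure_matching = rng.normal(size=n)
numerosity = 0.7 * figure_matching + rng.normal(scale=0.7, size=n)
fluency = 0.8 * figure_matching + rng.normal(scale=0.6, size=n)

# Step 1: fluency ~ numerosity (the well-established association)
step1 = sm.OLS(fluency, sm.add_constant(numerosity)).fit()

# Step 2: add figure matching; if it accounts for the association,
# the numerosity coefficient should shrink toward zero
X2 = sm.add_constant(np.column_stack([numerosity, figure_matching]))
step2 = sm.OLS(fluency, X2).fit()

print("numerosity beta, step 1:", round(step1.params[1], 3))
print("numerosity beta, step 2:", round(step2.params[1], 3))
print("R^2 gain from adding figure matching:", round(step2.rsquared - step1.rsquared, 3))
```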
Audio-visual speech perception in adult readers with dyslexia: an fMRI study.
Rüsseler, Jascha; Ye, Zheng; Gerth, Ivonne; Szycik, Gregor R; Münte, Thomas F
2018-04-01
Developmental dyslexia is a specific deficit in reading and spelling that often persists into adulthood. In the present study, we used slow event-related fMRI and independent component analysis to identify brain networks involved in perception of audio-visual speech in a group of adult readers with dyslexia (RD) and a group of fluent readers (FR). Participants saw a video of a female speaker saying a disyllabic word. In the congruent condition, audio and video input were identical whereas in the incongruent condition, the two inputs differed. Participants had to respond to occasionally occurring animal names. The independent components analysis (ICA) identified several components that were differently modulated in FR and RD. Two of these components including fusiform gyrus and occipital gyrus showed less activation in RD compared to FR possibly indicating a deficit to extract face information that is needed to integrate auditory and visual information in natural speech perception. A further component centered on the superior temporal sulcus (STS) also exhibited less activation in RD compared to FR. This finding is corroborated in the univariate analysis that shows less activation in STS for RD compared to FR. These findings suggest a general impairment in recruitment of audiovisual processing areas in dyslexia during the perception of natural speech.
Maximally reliable spatial filtering of steady state visual evoked potentials.
Dmochowski, Jacek P; Greaves, Alex S; Norcia, Anthony M
2015-04-01
Due to their high signal-to-noise ratio (SNR) and robustness to artifacts, steady state visual evoked potentials (SSVEPs) are a popular technique for studying neural processing in the human visual system. SSVEPs are conventionally analyzed at individual electrodes or linear combinations of electrodes which maximize some variant of the SNR. Here we exploit the fundamental assumption of evoked responses--reproducibility across trials--to develop a technique that extracts a small number of high SNR, maximally reliable SSVEP components. This novel spatial filtering method operates on an array of Fourier coefficients and projects the data into a low-dimensional space in which the trial-to-trial spectral covariance is maximized. When applied to two sample data sets, the resulting technique recovers physiologically plausible components (i.e., the recovered topographies match the lead fields of the underlying sources) while drastically reducing the dimensionality of the data (i.e., more than 90% of the trial-to-trial reliability is captured in the first four components). Moreover, the proposed technique achieves a higher SNR than that of the single-best electrode or the Principal Components. We provide a freely-available MATLAB implementation of the proposed technique, herein termed "Reliable Components Analysis". Copyright © 2015 Elsevier Inc. All rights reserved.
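A hedged sketch of the core computation behind a reliability-maximizing spatial filter of this kind: filters are obtained from a generalized eigendecomposition of cross-trial versus within-trial covariance. This illustrates the idea only and is not the authors' released MATLAB implementation; the data shapes are assumed.

```python
import numpy as np
from scipy.linalg import eigh

def reliable_components(data, n_components=4):
    """data: (n_trials, n_samples, n_channels), e.g. real/imaginary Fourier
    coefficients stacked along the sample axis. Returns spatial filters (columns)."""
    n_trials, _, n_channels = data.shape
    r_within = np.zeros((n_channels, n_channels))
    r_across = np.zeros((n_channels, n_channels))
    for i in range(n_trials):
        xi = data[i] - data[i].mean(axis=0)
        r_within += xi.T @ xi
        for j in range(n_trials):
            if i != j:
                xj = data[j] - data[j].mean(axis=0)
                r_across += xi.T @ xj
    # Generalized eigenproblem: maximize cross-trial covariance relative to
    # within-trial covariance; top eigenvectors are the most reliable components.
    evals, evecs = eigh(r_across, r_within)
    order = np.argsort(evals)[::-1]
    return evecs[:, order[:n_components]]

# Synthetic example: 30 trials, 100 samples per trial, 32 channels.
rng = np.random.default_rng(1)
filters = reliable_components(rng.standard_normal((30, 100, 32)))
print(filters.shape)  # (32, 4)
```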
Liu, Hong; Zhang, Gaoyan; Liu, Baolin
2017-04-01
In the Chinese language, a polyphone is a kind of special character that has more than one pronunciation, with each pronunciation corresponding to a different meaning. Here, we aimed to reveal the cognitive processing of audio-visual information integration of polyphones in a sentence context using the event-related potential (ERP) method. Sentences ending with polyphones were presented to subjects simultaneously in both an auditory and a visual modality. Four experimental conditions were set in which the visual presentations were the same, but the pronunciations of the polyphones were: the correct pronunciation; another pronunciation of the polyphone; a semantically appropriate pronunciation but not the pronunciation of the polyphone; or a semantically inappropriate pronunciation but also not the pronunciation of the polyphone. The behavioral results demonstrated significant differences in response accuracies when judging the semantic meanings of the audio-visual sentences, which reflected the different demands on cognitive resources. The ERP results showed that in the early stage, abnormal pronunciations were represented by the amplitude of the P200 component. Interestingly, because the phonological information mediated access to the lexical semantics, the amplitude and latency of the N400 component changed linearly across conditions, which may reflect the gradually increased semantic mismatch in the four conditions when integrating the auditory pronunciation with the visual information. Moreover, the amplitude of the late positive shift (LPS) showed a significant correlation with the behavioral response accuracies, demonstrating that the LPS component reveals the demand of cognitive resources for monitoring and resolving semantic conflicts when integrating the audio-visual information.
Störmer, Viola S; Alvarez, George A; Cavanagh, Patrick
2014-08-27
It is much easier to divide attention across the left and right visual hemifields than within the same visual hemifield. Here we investigate whether this benefit of dividing attention across separate visual fields is evident at early cortical processing stages. We measured the steady-state visual evoked potential, an oscillatory response of the visual cortex elicited by flickering stimuli, of moving targets and distractors while human observers performed a tracking task. The amplitude of responses at the target frequencies was larger than that of the distractor frequencies when participants tracked two targets in separate hemifields, indicating that attention can modulate early visual processing when it is divided across hemifields. However, these attentional modulations disappeared when both targets were tracked within the same hemifield. These effects were not due to differences in task performance, because accuracy was matched across the tracking conditions by adjusting target speed (with control conditions ruling out effects due to speed alone). To investigate later processing stages, we examined the P3 component over central-parietal scalp sites that was elicited by the test probe at the end of the trial. The P3 amplitude was larger for probes on targets than on distractors, regardless of whether attention was divided across or within a hemifield, indicating that these higher-level processes were not constrained by visual hemifield. These results suggest that modulating early processing stages enables more efficient target tracking, and that within-hemifield competition limits the ability to modulate multiple target representations within the hemifield maps of the early visual cortex. Copyright © 2014 the authors 0270-6474/14/3311526-08$15.00/0.
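The frequency-tagging analysis described above can be sketched as reading out spectral amplitude at the target and distractor flicker frequencies; the sampling rate, epoch length, and tagging frequencies below are invented for illustration, not the study's parameters.

```python
import numpy as np

def ssvep_amplitude(eeg, sfreq, freq):
    """Amplitude of the Fourier component of a 1-D EEG epoch closest to `freq` Hz."""
    spectrum = np.fft.rfft(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / sfreq)
    idx = np.argmin(np.abs(freqs - freq))
    return 2.0 * np.abs(spectrum[idx]) / len(eeg)

sfreq = 500.0                        # Hz, assumed sampling rate
t = np.arange(0, 4.0, 1.0 / sfreq)   # one 4-s tracking epoch
# Synthetic epoch: a 12 Hz "target" flicker response, a weaker 15 Hz "distractor"
# response, and broadband noise (frequencies are invented for the example).
rng = np.random.default_rng(2)
eeg = 2.0 * np.sin(2 * np.pi * 12 * t) + 1.0 * np.sin(2 * np.pi * 15 * t) \
      + rng.standard_normal(t.size)

print("target (12 Hz) amplitude:    ", round(ssvep_amplitude(eeg, sfreq, 12.0), 2))
print("distractor (15 Hz) amplitude:", round(ssvep_amplitude(eeg, sfreq, 15.0), 2))
```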
MFV-class: a multi-faceted visualization tool of object classes.
Zhang, Zhi-meng; Pan, Yun-he; Zhuang, Yue-ting
2004-11-01
Classes are key software components in an object-oriented software system. In many industrial OO software systems, there are some classes that have complicated structure and relationships. So in the processes of software maintenance, testing, software reengineering, software reuse and software restructure, it is a challenge for software engineers to understand these classes thoroughly. This paper proposes a class comprehension model based on constructivist learning theory, and implements a software visualization tool (MFV-Class) to help in the comprehension of a class. The tool provides multiple views of class to uncover manifold facets of class contents. It enables visualizing three object-oriented metrics of classes to help users focus on the understanding process. A case study was conducted to evaluate our approach and the toolkit.
Entrainment to the CIECAM02 and CIELAB colour appearance models in the human cortex.
Thwaites, Andrew; Wingfield, Cai; Wieser, Eric; Soltan, Andrew; Marslen-Wilson, William D; Nimmo-Smith, Ian
2018-04-01
In human visual processing, information from the visual field passes through numerous transformations before perceptual attributes such as colour are derived. The sequence of transforms involved in constructing perceptions of colour can be approximated by colour appearance models such as the CIE (2002) colour appearance model, abbreviated as CIECAM02. In this study, we test the plausibility of CIECAM02 as a model of colour processing by looking for evidence of its cortical entrainment. The CIECAM02 model predicts that colour is split in to two opposing chromatic components, red-green and cyan-yellow (termed CIECAM02-a and CIECAM02-b respectively), and an achromatic component (termed CIECAM02-A). Entrainment of cortical activity to the outputs of these components was estimated using measurements of electro- and magnetoencephalographic (EMEG) activity, recorded while healthy subjects watched videos of dots changing colour. We find entrainment to chromatic component CIECAM02-a at approximately 35 ms latency bilaterally in occipital lobe regions, and entrainment to achromatic component CIECAM02-A at approximately 75 ms latency, also bilaterally in occipital regions. For comparison, transforms from a less physiologically plausible model (CIELAB) were also tested, with no significant entrainment found. Copyright © 2018 Elsevier Ltd. All rights reserved.
Spatiotemporal mapping of sex differences during attentional processing.
Neuhaus, Andres H; Opgen-Rhein, Carolin; Urbanek, Carsten; Gross, Melanie; Hahn, Eric; Ta, Thi Minh Tam; Koehler, Simone; Dettling, Michael
2009-09-01
Functional neuroimaging studies have increasingly aimed at approximating neural substrates of human cognitive sex differences elicited by visuospatial challenge. It has been suggested that females and males use different behaviorally relevant neurocognitive strategies. In females, greater right prefrontal cortex activation has been found in several studies. The spatiotemporal dynamics of neural events associated with these sex differences is still unclear. We studied 22 female and 22 male participants matched for age, education, and nicotine with 29-channel-electroencephalogram recorded under a visual selective attention paradigm, the Attention Network Test. Visual event-related potentials (ERP) were topographically analyzed and neuroelectric sources were estimated. In absence of behavioral differences, ERP analysis revealed a novel frontal-occipital second peak of visual N100 that was significantly increased in females relative to males. Further, in females exclusively, a corresponding central ERP component at around 220 ms was found; here, a strong correlation between stimulus salience and sex difference of the central ERP component amplitude was observed. Subsequent source analysis revealed increased cortical current densities in right rostral prefrontal (BA 10) and occipital cortex (BA 19) in female subjects. This is the first study to report on a tripartite association between sex differences in ERPs, visual stimulus salience, and right prefrontal cortex activation during attentional processing. 2009 Wiley-Liss, Inc.
Gullick, Margaret M; Mitra, Priya; Coch, Donna
2013-05-01
Previous event-related potential studies have indicated that both a widespread N400 and an anterior N700 index differential processing of concrete and abstract words, but the nature of these components in relation to concreteness and imagery has been unclear. Here, we separated the effects of word concreteness and task demands on the N400 and N700 in a single word processing paradigm with a within-subjects, between-tasks design and carefully controlled word stimuli. The N400 was larger to concrete words than to abstract words, and larger in the visualization task condition than in the surface task condition, with no interaction. A marked anterior N700 was elicited only by concrete words in the visualization task condition, suggesting that this component indexes imagery. These findings are consistent with a revised or extended dual coding theory according to which concrete words benefit from greater activation in both verbal and imagistic systems. Copyright © 2013 Society for Psychophysiological Research.
Before the N400: effects of lexical-semantic violations in visual cortex.
Dikker, Suzanne; Pylkkanen, Liina
2011-07-01
There exists an increasing body of research demonstrating that language processing is aided by context-based predictions. Recent findings suggest that the brain generates estimates about the likely physical appearance of upcoming words based on syntactic predictions: words that do not physically look like the expected syntactic category show increased amplitudes in the visual M100 component, the first salient MEG response to visual stimulation. This research asks whether violations of predictions based on lexical-semantic information might similarly generate early visual effects. In a picture-noun matching task, we found early visual effects for words that did not accurately describe the preceding pictures. These results demonstrate that, just like syntactic predictions, lexical-semantic predictions can affect early visual processing around ∼100ms, suggesting that the M100 response is not exclusively tuned to recognizing visual features relevant to syntactic category analysis. Rather, the brain might generate predictions about upcoming visual input whenever it can. However, visual effects of lexical-semantic violations only occurred when a single lexical item could be predicted. We argue that this may be due to the fact that in natural language processing, there is typically no straightforward mapping between lexical-semantic fields (e.g., flowers) and visual or auditory forms (e.g., tulip, rose, magnolia). For syntactic categories, in contrast, certain form features do reliably correlate with category membership. This difference may, in part, explain why certain syntactic effects typically occur much earlier than lexical-semantic effects. Copyright © 2011 Elsevier Inc. All rights reserved.
Neural Correlates of Intersensory Processing in Five-Month-Old Infants
Reynolds, Greg D.; Bahrick, Lorraine E.; Lickliter, Robert; Guy, Maggie W.
2014-01-01
Two experiments assessing event-related potentials in 5-month-old infants were conducted to examine neural correlates of attentional salience and efficiency of processing of a visual event (woman speaking) paired with redundant (synchronous) speech, nonredundant (asynchronous) speech, or no speech. In Experiment 1, the Nc component associated with attentional salience was greater in amplitude following synchronous audiovisual as compared with asynchronous audiovisual and unimodal visual presentations. A block design was utilized in Experiment 2 to examine efficiency of processing of a visual event. Only infants exposed to synchronous audiovisual speech demonstrated a significant reduction in amplitude of the late slow wave associated with successful stimulus processing and recognition memory from early to late blocks of trials. These findings indicate that events that provide intersensory redundancy are associated with enhanced neural responsiveness indicative of greater attentional salience and more efficient stimulus processing as compared with the same events when they provide no intersensory redundancy in 5-month-old infants. PMID:23423948
Aviezer, Hillel; Hassin, Ran. R.; Perry, Anat; Dudarev, Veronica; Bentin, Shlomo
2012-01-01
The current study examined the nature of deficits in emotion recognition from facial expressions in case LG, an individual with a rare form of developmental visual agnosia (DVA). LG presents with profoundly impaired recognition of facial expressions, yet the underlying nature of his deficit remains unknown. During typical face processing, normal sighted individuals extract information about expressed emotions from face regions with activity diagnostic for specific emotion categories. Given LG’s impairment, we sought to shed light on his emotion perception by examining if priming facial expressions with diagnostic emotional face components would facilitate his recognition of the emotion expressed by the face. LG and control participants matched isolated face components with components appearing in a subsequently presented full-face and then categorized the face’s emotion. Critically, the matched components were from regions which were diagnostic or non-diagnostic of the emotion portrayed by the full face. In experiment 1, when the full faces were briefly presented (150 ms), LG’s performance was strongly influenced by the diagnosticity of the components: His emotion recognition was boosted within normal limits when diagnostic components were used and was obliterated when non-diagnostic components were used. By contrast, in experiment 2, when the face-exposure duration was extended (2000 ms), the beneficial effect of the diagnostic matching was diminished as was the detrimental effect of the non-diagnostic matching. These data highlight the impact of diagnostic facial features in normal expression recognition and suggest that impaired emotion recognition in DVA results from deficient visual integration across diagnostic face components. PMID:22349446
Avey, Marc T; Phillmore, Leslie S; MacDougall-Shackleton, Scott A
2005-12-07
Sensory-driven immediate early gene (IEG) expression has been a key tool to explore auditory perceptual areas in the avian brain. Most work on IEG expression in songbirds such as zebra finches has focused on playback of acoustic stimuli and its effect on auditory processing areas such as the caudal medial mesopallium (CMM) and the caudal medial nidopallium (NCM). However, in a natural setting, the courtship displays of songbirds (including zebra finches) include visual as well as acoustic components. To determine whether the visual stimulus of a courting male modifies song-induced expression of the IEG ZENK in the auditory forebrain, we exposed male and female zebra finches to acoustic (song) and visual (dancing) components of courtship. Birds were played digital movies with either combined audio and video, audio only, video only, or neither audio nor video (control). We found significantly increased levels of Zenk response in the auditory region CMM in the two treatment groups exposed to acoustic stimuli compared to the control group. The video-only group had an intermediate response, suggesting a potential effect of visual input on activity in these auditory brain regions. Finally, we unexpectedly found a lateralization of Zenk response that was independent of sex, brain region, or treatment condition, such that Zenk immunoreactivity was consistently higher in the left hemisphere than in the right, and the majority of individual birds were left-hemisphere dominant.
The Role of Aging in Intra-Item and Item-Context Binding Processes in Visual Working Memory
ERIC Educational Resources Information Center
Peterson, Dwight J.; Naveh-Benjamin, Moshe
2016-01-01
Aging is accompanied by declines in both working memory and long-term episodic memory processes. Specifically, important age-related memory deficits are characterized by performance impairments exhibited by older relative to younger adults when binding distinct components into a single integrated representation, despite relatively intact memory…
Reading and Visual Processing in Greek Dyslexic Children: An Eye-Movement Study
ERIC Educational Resources Information Center
Hatzidaki, Anna; Gianneli, Maria; Petrakis, Eftichis; Makaronas, Nikolaos; Aslanides, Ioannis M.
2011-01-01
We examined the impact of the effects of dyslexia on various processing and cognitive components (e.g., reading speed and accuracy) in a language with high phonological and orthographic consistency. Greek dyslexic children were compared with a chronological age-matched group on tasks that tested participants' phonological and orthographic…
Lithari, C; Frantzidis, C A; Papadelis, C; Vivas, Ana B; Klados, M A; Kourtidou-Papadeli, C; Pappas, C; Ioannides, A A; Bamidis, P D
2010-03-01
Men and women seem to process emotions and react to them differently. Yet, few neurophysiological studies have systematically investigated gender differences in emotional processing. Here, we studied gender differences using event-related potentials (ERPs) and skin conductance responses (SCR) recorded from participants who passively viewed emotional pictures selected from the International Affective Picture System (IAPS). The arousal and valence dimensions of the stimuli were manipulated orthogonally. The peak amplitude and peak latency of ERP components and SCR were analyzed separately, and the scalp topographies of significant ERP differences were documented. Females responded with enhanced negative components (N100 and N200), in comparison to males, especially to the unpleasant visual stimuli, whereas both genders responded faster to highly arousing or unpleasant stimuli. Scalp topographies revealed more pronounced gender differences over central and left-hemisphere areas. Our results suggest a difference in the way emotional stimuli are processed by the two genders: unpleasant and highly arousing stimuli evoke greater ERP amplitudes in women relative to men. It also seems that unpleasant or highly arousing stimuli are temporally prioritized during visual processing by both genders.
Multispectral image analysis for object recognition and classification
NASA Astrophysics Data System (ADS)
Viau, C. R.; Payeur, P.; Cretu, A.-M.
2016-05-01
Computer and machine vision applications are used in numerous fields to analyze static and dynamic imagery in order to assist or automate decision-making processes. Advancements in sensor technologies now make it possible to capture and visualize imagery at various wavelengths (or bands) of the electromagnetic spectrum. Multispectral imaging has countless applications in various fields including (but not limited to) security, defense, space, medical, manufacturing and archeology. The development of advanced algorithms to process and extract salient information from the imagery is a critical component of the overall system performance. The fundamental objective of this research project was to investigate the benefits of combining imagery from the visual and thermal bands of the electromagnetic spectrum to improve the recognition rates and accuracy of commonly found objects in an office setting. A multispectral dataset (visual and thermal) was captured and features from the visual and thermal images were extracted and used to train support vector machine (SVM) classifiers. The SVM's class prediction ability was evaluated separately on the visual, thermal and multispectral testing datasets.
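A hedged sketch of the fusion scheme described above: train an SVM on concatenated visual and thermal feature vectors and compare it against single-band classifiers. Feature extraction is omitted, and the class count, feature dimensions, and arrays are synthetic assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
n_objects, n_visual, n_thermal, n_classes = 300, 64, 64, 5   # assumed sizes
y = rng.integers(0, n_classes, size=n_objects)               # hypothetical object classes

# Stand-in feature vectors: class-dependent means plus noise for each band.
vis_means = rng.standard_normal((n_classes, n_visual))
th_means = rng.standard_normal((n_classes, n_thermal))
X_visual = vis_means[y] + 2.0 * rng.standard_normal((n_objects, n_visual))
X_thermal = th_means[y] + 2.0 * rng.standard_normal((n_objects, n_thermal))

def evaluate(X, y):
    """Train an RBF SVM on 70% of the data and report held-out accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

print("visual only:  ", evaluate(X_visual, y))
print("thermal only: ", evaluate(X_thermal, y))
print("multispectral:", evaluate(np.hstack([X_visual, X_thermal]), y))
```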
Neural Representation of Motion-In-Depth in Area MT
Sanada, Takahisa M.
2014-01-01
Neural processing of 2D visual motion has been studied extensively, but relatively little is known about how visual cortical neurons represent visual motion trajectories that include a component toward or away from the observer (motion in depth). Psychophysical studies have demonstrated that humans perceive motion in depth based on both changes in binocular disparity over time (CD cue) and interocular velocity differences (IOVD cue). However, evidence for neurons that represent motion in depth has been limited, especially in primates, and it is unknown whether such neurons make use of CD or IOVD cues. We show that approximately one-half of neurons in macaque area MT are selective for the direction of motion in depth, and that this selectivity is driven primarily by IOVD cues, with a small contribution from the CD cue. Our results establish that area MT, a central hub of the primate visual motion processing system, contains a 3D representation of visual motion. PMID:25411481
NASA Astrophysics Data System (ADS)
Quinn, J. D.; Larour, E. Y.; Cheng, D. L. C.; Halkides, D. J.
2016-12-01
The Virtual Earth System Laboratory (VESL) is a Web-based tool, under development at the Jet Propulsion Laboratory and UC Irvine, for the visualization of Earth System data and process simulations. It contains features geared toward a range of applications, spanning research and outreach. It offers an intuitive user interface, in which model inputs are changed using sliders and other interactive components. Current capabilities include simulation of polar ice sheet responses to climate forcing, based on NASA's Ice Sheet System Model (ISSM). We believe that the visualization of data is most effective when tailored to the target audience, and that many of the best practices for modern Web design/development can be applied directly to the visualization of data: use of negative space, color schemes, typography, accessibility standards, tooltips, et cetera. We present our prototype website, and invite input from potential users, including researchers, educators, and students.
Greven, Inez M; Ramsey, Richard
2017-02-01
The majority of human neuroscience research has focussed on understanding functional organisation within segregated patches of cortex. The ventral visual stream has been associated with the detection of physical features such as faces and body parts, whereas the theory-of-mind network has been associated with making inferences about mental states and underlying character, such as whether someone is friendly, selfish, or generous. To date, however, it is largely unknown how such distinct processing components integrate neural signals. Using functional magnetic resonance imaging and connectivity analyses, we investigated the contribution of functional integration to social perception. During scanning, participants observed bodies that had previously been associated with trait-based or neutral information. Additionally, we independently localised the body perception and theory-of-mind networks. We demonstrate that when observing someone who cues the recall of stored social knowledge compared to non-social knowledge, a node in the ventral visual stream (extrastriate body area) shows greater coupling with part of the theory-of-mind network (temporal pole). These results show that functional connections provide an interface between perceptual and inferential processing components, thus providing neurobiological evidence that supports the view that understanding the visual environment involves interplay between conceptual knowledge and perceptual processing. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
Electrophysiological evidence for parallel and serial processing during visual search.
Luck, S J; Hillyard, S A
1990-12-01
Event-related potentials were recorded from young adults during a visual search task in order to evaluate parallel and serial models of visual processing in the context of Treisman's feature integration theory. Parallel and serial search strategies were produced by the use of feature-present and feature-absent targets, respectively. In the feature-absent condition, the slopes of the functions relating reaction time and latency of the P3 component to set size were essentially identical, indicating that the longer reaction times observed for larger set sizes can be accounted for solely by changes in stimulus identification and classification time, rather than changes in post-perceptual processing stages. In addition, the amplitude of the P3 wave on target-present trials in this condition increased with set size and was greater when the preceding trial contained a target, whereas P3 activity was minimal on target-absent trials. These effects are consistent with the serial self-terminating search model and appear to contradict parallel processing accounts of attention-demanding visual search performance, at least for a subset of search paradigms. Differences in ERP scalp distributions further suggested that different physiological processes are utilized for the detection of feature presence and absence.
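A worked illustration of the slope comparison described above, with invented numbers rather than the study's data: regress mean reaction time and mean P3 latency on set size and compare the two slopes.

```python
import numpy as np

# Hypothetical condition means (ms); invented for illustration only.
set_sizes = np.array([4, 8, 12])
mean_rt = np.array([620.0, 770.0, 930.0])          # reaction time, feature-absent search
mean_p3_latency = np.array([540.0, 690.0, 845.0])  # P3 peak latency, same condition

rt_slope = np.polyfit(set_sizes, mean_rt, 1)[0]
p3_slope = np.polyfit(set_sizes, mean_p3_latency, 1)[0]
print(f"RT slope: {rt_slope:.1f} ms/item; P3 latency slope: {p3_slope:.1f} ms/item")
# Essentially identical slopes would place the set-size cost in stimulus
# identification/classification rather than in post-perceptual stages.
```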
Visual Complexity and Affect: Ratings Reflect More Than Meets the Eye.
Madan, Christopher R; Bayer, Janine; Gamer, Matthias; Lonsdorf, Tina B; Sommer, Tobias
2017-01-01
Pictorial stimuli can vary on many dimensions, several aspects of which are captured by the term 'visual complexity.' Visual complexity can be described as, "a picture of a few objects, colors, or structures would be less complex than a very colorful picture of many objects that is composed of several components." Prior studies have reported a relationship between affect and visual complexity, where complex pictures are rated as more pleasant and arousing. However, a relationship in the opposite direction, an effect of affect on visual complexity, is also possible; emotional arousal and valence are known to influence selective attention and visual processing. In a series of experiments, we found that ratings of visual complexity correlated with affective ratings, and independently also with computational measures of visual complexity. These computational measures did not correlate with affect, suggesting that complexity ratings are separately related to distinct factors. We investigated the relationship between affect and ratings of visual complexity, finding an 'arousal-complexity bias' to be a robust phenomenon. Moreover, we found this bias could be attenuated when explicitly indicated but did not correlate with inter-individual difference measures of affective processing, and was largely unrelated to cognitive and eyetracking measures. Taken together, the arousal-complexity bias seems to be caused by a relationship between arousal and visual processing as it has been described for the greater vividness of arousing pictures. The described arousal-complexity bias is also of relevance from an experimental perspective because visual complexity is often considered a variable to control for when using pictorial stimuli.
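Two simple computational complexity measures of the kind referred to above can be sketched as follows; the specific measures (edge density and JPEG-compressed size) and the threshold are assumptions for illustration, not necessarily those used in the paper.

```python
import io
import numpy as np
from PIL import Image, ImageFilter

def complexity_measures(path):
    """Return (edge density, JPEG-compressed size in bytes) for an image file."""
    img = Image.open(path).convert("L")
    # Edge density: fraction of pixels flagged by a simple edge filter (threshold assumed).
    edges = np.asarray(img.filter(ImageFilter.FIND_EDGES), dtype=float)
    edge_density = float((edges > 30).mean())
    # Compression-based measure: bytes needed to store the image as a JPEG.
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=75)
    return edge_density, buf.tell()

# Usage (the filename is a placeholder):
# edge_density, n_bytes = complexity_measures("stimulus_001.jpg")
```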
Visual Complexity and Affect: Ratings Reflect More Than Meets the Eye
Madan, Christopher R.; Bayer, Janine; Gamer, Matthias; Lonsdorf, Tina B.; Sommer, Tobias
2018-01-01
Pictorial stimuli can vary on many dimensions, several aspects of which are captured by the term ‘visual complexity.’ Visual complexity can be described as, “a picture of a few objects, colors, or structures would be less complex than a very colorful picture of many objects that is composed of several components.” Prior studies have reported a relationship between affect and visual complexity, where complex pictures are rated as more pleasant and arousing. However, a relationship in the opposite direction, an effect of affect on visual complexity, is also possible; emotional arousal and valence are known to influence selective attention and visual processing. In a series of experiments, we found that ratings of visual complexity correlated with affective ratings, and independently also with computational measures of visual complexity. These computational measures did not correlate with affect, suggesting that complexity ratings are separately related to distinct factors. We investigated the relationship between affect and ratings of visual complexity, finding an ‘arousal-complexity bias’ to be a robust phenomenon. Moreover, we found this bias could be attenuated when explicitly indicated but did not correlate with inter-individual difference measures of affective processing, and was largely unrelated to cognitive and eyetracking measures. Taken together, the arousal-complexity bias seems to be caused by a relationship between arousal and visual processing as it has been described for the greater vividness of arousing pictures. The described arousal-complexity bias is also of relevance from an experimental perspective because visual complexity is often considered a variable to control for when using pictorial stimuli. PMID:29403412
High Resolution X-Ray Micro-CT of Ultra-Thin Wall Space Components
NASA Technical Reports Server (NTRS)
Roth, Don J.; Rauser, R. W.; Bowman, Randy R.; Bonacuse, Peter; Martin, Richard E.; Locci, I. E.; Kelley, M.
2012-01-01
A high resolution micro-CT system has been assembled and is being used to provide optimal characterization for ultra-thin wall space components. The Glenn Research Center NDE Sciences Team, using this CT system, has assumed the role of inspection vendor for the Advanced Stirling Convertor (ASC) project at NASA. This article will discuss many aspects of the development of the CT scanning for this type of component, including CT system overview; inspection requirements; process development, software utilized and developed to visualize, process, and analyze results; calibration sample development; results on actual samples; correlation with optical/SEM characterization; CT modeling; and development of automatic flaw recognition software. Keywords: Nondestructive Evaluation, NDE, Computed Tomography, Imaging, X-ray, Metallic Components, Thin Wall Inspection
Preti, Emanuele; Richetin, Juliette; Suttora, Chiara; Pisani, Alberto
2016-04-30
Dysfunctions in social cognition characterize personality disorders. However, mixed results have emerged from the literature on emotion processing. Borderline Personality Disorder (BPD) traits have been associated with enhanced emotion recognition, with impairments, or with functioning equal to that of controls. These apparent contradictions might result from the complexity of the emotion recognition tasks used and from individual differences in impulsivity and effortful control. We conducted a study in a sample of undergraduate students (n=80), assessing BPD traits and using an emotion recognition task that requires the processing of only visual information or of both visual and acoustic information. We also measured individual differences in impulsivity and effortful control. Results demonstrated the moderating role of some components of impulsivity and effortful control on the ability of BPD traits to predict anger and happiness recognition. We organized the discussion around the interaction between different components of regulatory functioning and task complexity for a better understanding of emotion recognition in BPD samples. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Fault Detection of Bearing Systems through EEMD and Optimization Algorithm
Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan
2017-01-01
This study proposes a fault detection and diagnosis method for bearing systems using ensemble empirical mode decomposition (EEMD)-based feature extraction, in conjunction with particle swarm optimization (PSO), principal component analysis (PCA), and Isomap. First, a mathematical model is assumed to generate vibration signals from damaged bearing components, such as the inner race, outer race, and rolling elements. The process of decomposing vibration signals into intrinsic mode functions (IMFs) and extracting statistical features is introduced to develop a damage-sensitive parameter vector. Finally, PCA and the Isomap algorithm are used to classify and visualize this parameter vector, to separate damage characteristics from healthy bearing components. Moreover, the PSO-based optimization algorithm improves the classification performance by selecting proper weightings for the parameter vector, to maximize the visual separation and grouping of parameter vectors in three-dimensional space. PMID:29143772
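An illustrative sketch of the feature pipeline described above, with the EEMD step abstracted away (a decomposition library would supply the IMFs): statistical features are computed per intrinsic mode function and projected into a low-dimensional space. Array shapes and feature choices are assumptions, not the study's settings.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.decomposition import PCA

def imf_features(imfs):
    """imfs: (n_imfs, n_samples). Return one damage-sensitive feature vector."""
    feats = []
    for imf in imfs:
        feats += [float(imf.std()),                   # spread
                  float(np.sqrt(np.mean(imf**2))),    # RMS energy
                  float(kurtosis(imf)),               # impulsiveness
                  float(skew(imf))]                   # asymmetry
    return np.array(feats)

# Stand-in for decomposed vibration records: 40 records x 6 IMFs x 2048 samples.
rng = np.random.default_rng(4)
records = rng.standard_normal((40, 6, 2048))
X = np.vstack([imf_features(r) for r in records])

# Project the parameter vectors into a low-dimensional space for visualization
# (PCA shown; Isomap would be applied analogously).
coords = PCA(n_components=3).fit_transform(X)
print(coords.shape)  # (40, 3)
```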
NASA Astrophysics Data System (ADS)
Hampton, E. J.; Medling, A. M.; Groves, B.; Kewley, L.; Dopita, M.; Davies, R.; Ho, I.-T.; Kaasinen, M.; Leslie, S.; Sharp, R.; Sweet, S. M.; Thomas, A. D.; Allen, J.; Bland-Hawthorn, J.; Brough, S.; Bryant, J. J.; Croom, S.; Goodwin, M.; Green, A.; Konstantantopoulos, I. S.; Lawrence, J.; López-Sánchez, Á. R.; Lorente, N. P. F.; McElroy, R.; Owers, M. S.; Richards, S. N.; Shastri, P.
2017-09-01
Integral field spectroscopy (IFS) surveys are changing how we study galaxies and are producing vastly more spectroscopic data than before. The large number of resulting spectra makes visual inspection of emission line fits an infeasible option. Here, we present a demonstration of an artificial neural network (ANN) that determines the number of Gaussian components needed to describe the complex emission line velocity structures observed in galaxies, after the spectra have been fit with lzifu. We apply our ANN to IFS data for the S7 survey, conducted using the Wide Field Spectrograph on the ANU 2.3 m Telescope, and the SAMI Galaxy Survey, conducted using the SAMI instrument on the 4 m Anglo-Australian Telescope. We use the spectral fitting code lzifu (Ho et al. 2016a) to fit the emission line spectra of individual spaxels from S7 and SAMI data cubes with 1-, 2- and 3-Gaussian components. We demonstrate that using an ANN is comparable to astronomers performing the same visual inspection task of determining the best number of Gaussian components to describe the physical processes in galaxies. The advantage of our ANN is that it is capable of processing the spectra for thousands of galaxies in minutes, as compared to the years this task would take individual astronomers to complete by visual inspection.
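A hedged sketch of the classification task described above, not the survey's trained network: a small neural network maps an emission-line velocity profile to the number of Gaussian components (1, 2, or 3), trained here on synthetic spectra with invented parameters.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
velocity = np.linspace(-500, 500, 200)   # km/s grid, assumed

def synthetic_spectrum(n_components):
    """Emission-line profile built from 1-3 Gaussians plus noise (toy generator)."""
    spec = np.zeros_like(velocity)
    for _ in range(n_components):
        amp = rng.uniform(0.5, 2.0)
        mu = rng.uniform(-200.0, 200.0)
        sigma = rng.uniform(30.0, 120.0)
        spec += amp * np.exp(-0.5 * ((velocity - mu) / sigma) ** 2)
    return spec + 0.05 * rng.standard_normal(velocity.size)

labels = rng.integers(1, 4, size=2000)                    # 1, 2, or 3 components
spectra = np.array([synthetic_spectrum(k) for k in labels])

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(spectra[:1600], labels[:1600])
print("held-out accuracy:", clf.score(spectra[1600:], labels[1600:]))
```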
Böcker, K B E; Gerritsen, J; Hunault, C C; Kruidenier, M; Mensinga, Tj T; Kenemans, J L
2010-07-01
Cannabis intake has been reported to affect cognitive functions such as selective attention. This study addressed the effects of exposure to cannabis with up to 69.4 mg of Δ9-tetrahydrocannabinol (THC) on event-related potentials (ERPs) recorded during a visual selective attention task. Twenty-four participants smoked cannabis cigarettes with four doses of THC on four test days in a randomized, double-blind, placebo-controlled, crossover study. Two hours after THC exposure, the participants performed a visual selective attention task while concomitant ERPs were recorded. Accuracy decreased linearly and reaction times increased linearly with THC dose. However, performance measures and most of the ERP components related specifically to selective attention did not show significant dose effects. Only in relatively light cannabis users did the Occipital Selection Negativity decrease linearly with dose. Furthermore, ERP components reflecting perceptual processing, as well as the P300 component, decreased in amplitude after THC exposure. Only the former effect showed a linear dose-response relation. The decrements in performance and ERP amplitudes induced by exposure to cannabis with high THC content resulted from a non-selective decrease in attentional or processing resources. Performance requiring attentional resources, such as vehicle control, may be compromised several hours after smoking cannabis cigarettes containing high doses of THC, as presently available in Europe and North America. Copyright 2010 Elsevier Inc. All rights reserved.
Visualization of spiral ganglion neurites within the scala tympani with a cochlear implant in situ
Chikar, Jennifer A.; Batts, Shelley A.; Pfingst, Bryan E.; Raphael, Yehoash
2009-01-01
Current cochlear histology methods do not allow in situ processing of cochlear implants. The metal components of the implant preclude standard embedding and mid-modiolar sectioning, and whole mounts do not have the spatial resolution needed to view the implant within the scala tympani. One focus of recent auditory research is the regeneration of structures within the cochlea, particularly the ganglion cells and their processes, and there are multiple potential benefits to cochlear implant users from this work. To facilitate experimental investigations of auditory nerve regeneration performed in conjunction with cochlear implantation, it is critical to visualize the cochlear tissue and the implant together to determine if the nerve has made contact with the implant. This paper presents a novel histological technique that enables simultaneous visualization of the in situ cochlear implant and neurofilament – labeled nerve processes within the scala tympani, and the spatial relationship between them. PMID:19428528
Visualization of spiral ganglion neurites within the scala tympani with a cochlear implant in situ.
Chikar, Jennifer A; Batts, Shelley A; Pfingst, Bryan E; Raphael, Yehoash
2009-05-15
Current cochlear histology methods do not allow in situ processing of cochlear implants. The metal components of the implant preclude standard embedding and mid-modiolar sectioning, and whole mounts do not have the spatial resolution needed to view the implant within the scala tympani. One focus of recent auditory research is the regeneration of structures within the cochlea, particularly the ganglion cells and their processes, and there are multiple potential benefits to cochlear implant users from this work. To facilitate experimental investigations of auditory nerve regeneration performed in conjunction with cochlear implantation, it is critical to visualize the cochlear tissue and the implant together to determine if the nerve has made contact with the implant. This paper presents a novel histological technique that enables simultaneous visualization of the in situ cochlear implant and neurofilament-labeled nerve processes within the scala tympani, and the spatial relationship between them.
Interactions Dominate the Dynamics of Visual Cognition
Stephen, Damian G.; Mirman, Daniel
2010-01-01
Many cognitive theories have described behavior as the summation of independent contributions from separate components. Contrasting views have emphasized the importance of multiplicative interactions and emergent structure. We describe a statistical approach to distinguishing additive and multiplicative processes and apply it to the dynamics of eye movements during classic visual cognitive tasks. The results reveal interaction-dominant dynamics in eye movements in each of the three tasks, and that fine-grained eye movements are modulated by task constraints. These findings reveal the interactive nature of cognitive processing and are consistent with theories that view cognition as an emergent property of processes that are broadly distributed over many scales of space and time rather than a componential assembly line. PMID:20070957
Real-Time Cognitive Computing Architecture for Data Fusion in a Dynamic Environment
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Duong, Vu A.
2012-01-01
A novel cognitive computing architecture is conceptualized for processing multiple channels of multi-modal sensory data streams simultaneously, and fusing the information in real time to generate intelligent reaction sequences. This unique architecture is capable of assimilating parallel data streams that could be analog, digital, synchronous/asynchronous, and could be programmed to act as a knowledge synthesizer and/or an "intelligent perception" processor. In this architecture, the bio-inspired models of visual pathway and olfactory receptor processing are combined as processing components, to achieve the composite function of "searching for a source of food while avoiding the predator." The architecture is particularly suited for scene analysis from visual data and odorant.
The effects of combined caffeine and glucose drinks on attention in the human brain.
Rao, Anling; Hu, Henglong; Nobre, Anna Christina
2005-06-01
The objective of this research was to measure the effects of energising drinks containing caffeine and glucose upon mental activity during sustained selective attention. Non-invasive electrophysiological brain recordings were made during a behavioural study of selective attention in which participants received either energising or placebo drinks. We tested specifically whether energising drinks have significant effects upon behavioural measures of performance during a task requiring sustained visual selective attention, as well as on accompanying components of the event-related potential (ERP) related to information processing in the brain. Forty healthy volunteers were blindly assigned to receive either the energising drink or a similar-tasting placebo drink. The behavioural task involved identifying a predefined target stimulus among rapidly presented streams of peripheral visual stimuli, and making speeded motor responses to this stimulus. During task performance, accuracy, reaction times and ongoing brain activity were stored for analysis. The energising drink enhanced behavioural performance both in terms of accuracy and speed of reactions. The energising drink also had significant effects upon the event-related potentials. Effects started from the enhancement of the earliest components (C1/P1), reflecting early visual cortical processing in the energising-drink group relative to the placebo group over the contralateral scalp. The later N1, N2 and P3 components related to decision-making and responses were also modulated by the energising drink. Energising drinks containing caffeine and glucose can enhance behavioural performance during demanding tasks requiring selective attention. The behavioural benefits are coupled to direct effects upon neural information processing.
DataFed: A Federated Data System for Visualization and Analysis of Spatio-Temporal Air Quality Data
NASA Astrophysics Data System (ADS)
Husar, R. B.; Hoijarvi, K.
2017-12-01
DataFed is a distributed web-services-based computing environment for accessing, processing, and visualizing atmospheric data in support of air quality science and management. The flexible, adaptive environment facilitates the access and flow of atmospheric data from providers to users by enabling the creation of user-driven data processing/visualization applications. DataFed 'wrapper' components non-intrusively wrap heterogeneous, distributed datasets for access by standards-based GIS web services. The mediator components (also web services) map the heterogeneous data into a spatio-temporal data model. Chained web services provide homogeneous data views (e.g., geospatial, time views) using a global multi-dimensional data model. In addition to data access and rendering, the data processing component services can be programmed for filtering, aggregation, and fusion of multidimensional data. Complete applications are written in a custom-made data-flow language. Currently, the federated data pool consists of over 50 datasets originating from globally distributed data providers delivering surface-based air quality measurements, satellite observations, and emissions data, as well as regional and global-scale air quality models. The web browser-based user interface allows point-and-click navigation and browsing of the XYZT multi-dimensional data space. The key applications of DataFed are exploring spatial patterns of pollutants; seasonal, weekly, and diurnal cycles; and frequency distributions for exploratory air quality research. Since 2008, DataFed has been used to support EPA in the implementation of the Exceptional Event Rule. The data system is also used at universities in the US, Europe, and Asia.
Sheliga, Boris M.; Quaia, Christian; FitzGibbon, Edmond J.; Cumming, Bruce G.
2016-01-01
White noise stimuli are frequently used to study the visual processing of broadband images in the laboratory. A common goal is to describe how responses are derived from Fourier components in the image. We investigated this issue by recording the ocular-following responses (OFRs) to white noise stimuli in human subjects. For a given speed we compared OFRs to unfiltered white noise with those to noise filtered with band-pass filters and notch filters. Removing components with low spatial frequency (SF) reduced OFR magnitudes, and the SF associated with the greatest reduction matched the SF that produced the maximal response when presented alone. This reduction declined rapidly with SF, compatible with a winner-take-all operation. Removing higher SF components increased OFR magnitudes. For higher speeds this effect became larger and propagated toward lower SFs. All of these effects were quantitatively well described by a model that combined two factors: (a) an excitatory drive that reflected the OFRs to individual Fourier components and (b) a suppression by higher SF channels where the temporal sampling of the display led to flicker. This nonlinear interaction has an important practical implication: Even with high refresh rates (150 Hz), the temporal sampling introduced by visual displays has a significant impact on visual processing. For instance, we show that this distorts speed tuning curves, shifting the peak to lower speeds. Careful attention to spectral content, in the light of this nonlinearity, is necessary to minimize the resulting artifact when using white noise patterns undergoing apparent motion. PMID:26762277
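The stimulus manipulation described above can be sketched as notch-filtering a white-noise pattern in the Fourier domain; the pattern size, display resolution, and notch band below are invented parameters, not the study's.

```python
import numpy as np

rng = np.random.default_rng(6)
n_pixels = 512
pixels_per_deg = 32.0                    # assumed display resolution
noise = rng.standard_normal(n_pixels)    # 1-D white-noise luminance profile

sf = np.fft.rfftfreq(n_pixels, d=1.0 / pixels_per_deg)  # spatial frequency in cycles/deg
spectrum = np.fft.rfft(noise)

# Notch filter: remove components between 1 and 2 cycles/deg (band chosen arbitrarily).
notch = (sf >= 1.0) & (sf <= 2.0)
spectrum[notch] = 0.0
filtered = np.fft.irfft(spectrum, n=n_pixels)

print("removed", int(notch.sum()), "of", sf.size, "frequency components")
print("filtered pattern RMS contrast:", float(filtered.std()))
```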
Liu, Tianyin; Yeh, Su-Ling
2018-01-01
The left-side bias (LSB) effect observed in face and expert Chinese character perception is suggested to be an expertise marker for visual object recognition. However, in character perception this effect is limited to characters printed in a familiar font (font-sensitive LSB effect). Here we investigated whether the LSB and font-sensitive LSB effects depend on participants’ familiarity with global structure or local component information of the stimuli through examining their transfer effects across simplified and traditional Chinese scripts: the two Chinese scripts share similar overall structures but differ in the visual complexity of local components in general. We found that LSB in expert Chinese character processing could be transferred to the Chinese script that the readers are unfamiliar with. In contrast, the font-sensitive LSB effect did not transfer, and was limited to characters with the visual complexity the readers were most familiar with. These effects suggest that the LSB effect may be generalized to another visual category with similar overall structures; in contrast, effects of within-category variations such as fonts may depend on familiarity with local component information of the stimuli, and thus may be limited to the exemplars of the category that experts are typically exposed to. PMID:29608570
Tschechne, Stephan; Neumann, Heiko
2014-01-01
Visual structures in the environment are segmented into image regions and those combined to a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed network of processing must be capable to make accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1–V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. The model, thus, proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy. PMID:25157228
Tschechne, Stephan; Neumann, Heiko
2014-01-01
Visual structures in the environment are segmented into image regions and those combined to a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed network of processing must be capable to make accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1-V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. The model, thus, proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy.
Envision: An interactive system for the management and visualization of large geophysical data sets
NASA Technical Reports Server (NTRS)
Searight, K. R.; Wojtowicz, D. P.; Walsh, J. E.; Pathi, S.; Bowman, K. P.; Wilhelmson, R. B.
1995-01-01
Envision is a software project at the University of Illinois and Texas A&M, funded by NASA's Applied Information Systems Research Project. It provides researchers in the geophysical sciences convenient ways to manage, browse, and visualize large observed or model data sets. Envision integrates data management, analysis, and visualization of geophysical data in an interactive environment. It employs commonly used standards in data formats, operating systems, networking, and graphics. It also attempts, wherever possible, to integrate with existing scientific visualization and analysis software. Envision has an easy-to-use graphical interface, distributed process components, and an extensible design. It is a public domain package, freely available to the scientific community.
Robotic Vision-Based Localization in an Urban Environment
NASA Technical Reports Server (NTRS)
Mchenry, Michael; Cheng, Yang; Matthies
2007-01-01
A system of electronic hardware and software, now undergoing development, automatically estimates the location of a robotic land vehicle in an urban environment using a somewhat imprecise map, which has been generated in advance from aerial imagery. This system does not utilize the Global Positioning System and does not include any odometry, inertial measurement units, or any other sensors except a stereoscopic pair of black-and-white digital video cameras mounted on the vehicle. Of course, the system also includes a computer running software that processes the video image data. The software consists mostly of three components corresponding to the three major image-data-processing functions. Visual Odometry: This component automatically tracks point features in the imagery and computes the relative motion of the cameras between sequential image frames. This component incorporates a modified version of a visual-odometry algorithm originally published in 1989. The algorithm selects point features, performs multiresolution area-correlation computations to match the features in stereoscopic images, tracks the features through the sequence of images, and uses the tracking results to estimate the six-degree-of-freedom motion of the camera between consecutive stereoscopic pairs of images (see figure). Urban Feature Detection and Ranging: Using the same data as those processed by the visual-odometry component, this component strives to determine the three-dimensional (3D) coordinates of vertical and horizontal lines that are likely to be parts of, or close to, the exterior surfaces of buildings. The basic sequence of processes performed by this component is the following: 1. An edge-detection algorithm is applied, yielding a set of linked lists of edge pixels, a horizontal-gradient image, and a vertical-gradient image. 2. Straight-line segments of edges are extracted from the linked lists generated in step 1. Any straight-line segments longer than an arbitrary threshold (e.g., 30 pixels) are assumed to belong to buildings or other artificial objects. 3. A gradient-filter algorithm is used to test straight-line segments longer than the threshold to determine whether they represent edges of natural or artificial objects. In somewhat oversimplified terms, the test is based on the assumption that the gradient of image intensity varies little along a segment that represents the edge of an artificial object.
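A hedged sketch of the "urban feature detection" steps described above, using OpenCV on a synthetic image; the thresholds and the simplified gradient test are assumptions, and this is not the flight software.

```python
import cv2
import numpy as np

# Synthetic stand-in frame: a dark image with one bright rectangular "building" outline.
img = np.zeros((240, 320), dtype=np.uint8)
cv2.rectangle(img, (60, 40), (220, 180), color=255, thickness=2)

# 1. Edge detection plus horizontal- and vertical-gradient images.
edges = cv2.Canny(img, 50, 150)
grad_x = cv2.Sobel(img, cv2.CV_32F, 1, 0)
grad_y = cv2.Sobel(img, cv2.CV_32F, 0, 1)

# 2. Straight-line segments longer than a threshold (30 px, as in the text).
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=40, minLineLength=30, maxLineGap=5)

# 3. Keep segments whose mean gradient magnitude is high (a crude stand-in for the
#    gradient-filter test that flags likely man-made edges).
def mean_gradient(seg):
    x1, y1, x2, y2 = seg
    n = max(abs(x2 - x1), abs(y2 - y1)) + 1
    xs = np.linspace(x1, x2, n).astype(int)
    ys = np.linspace(y1, y2, n).astype(int)
    return float(np.hypot(grad_x[ys, xs], grad_y[ys, xs]).mean())

candidates = [] if segments is None else [s[0] for s in segments if mean_gradient(s[0]) > 40.0]
print(len(candidates), "candidate building-edge segments")
```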
Wind Tunnel Data Fusion and Immersive Visualization: A Case Study
NASA Technical Reports Server (NTRS)
Severance, Kurt; Brewster, Paul; Lazos, Barry; Keefe, Daniel
2001-01-01
This case study describes the process of fusing the data from several wind tunnel experiments into a single coherent visualization. Each experiment was conducted independently and was designed to explore different flow features around airplane landing gear. In the past, it would have been very difficult to correlate results from the different experiments. However, with a single 3-D visualization representing the fusion of the three experiments, significant insight into the composite flowfield was observed that would have been extremely difficult to obtain by studying its component parts. The results are even more compelling when viewed in an immersive environment.
Analysis of retinal and cortical components of Retinex algorithms
NASA Astrophysics Data System (ADS)
Yeonan-Kim, Jihyun; Bertalmío, Marcelo
2017-05-01
Following Land and McCann's first proposal of the Retinex theory, numerous Retinex algorithms that differ considerably both algorithmically and functionally have been developed. We clarify the relationships among various Retinex families by associating their spatial processing structures with the neural organizations in the retina and the primary visual cortex in the brain. Some of the Retinex algorithms have a retina-like processing structure (Land's designator idea and NASA Retinex), and some show a close connection with the cortical structures in the primary visual area of the brain (two-dimensional L&M Retinex). A third group of Retinexes (the variational Retinex) manifests an explicit algorithmic relation to Wilson-Cowan's physiological model. We review these three groups of Retinex algorithms with reference to the underlying biological visual mechanisms.
De Sanctis, Pierfilippo; Katz, Richard; Wylie, Glenn R; Sehatpour, Pejman; Alexopoulos, George S; Foxe, John J
2008-10-01
Evidence has emerged for age-related amplification of basic sensory processing indexed by early components of the visual evoked potential (VEP). However, since these age-related effects have been incidental to the main focus of these studies, it is unclear whether they are performance dependent or alternately, represent intrinsic sensory processing changes. High-density VEPs were acquired from 19 healthy elderly and 15 young control participants who viewed alphanumeric stimuli in the absence of any active task. The data show both enhanced and delayed neural responses within structures of the ventral visual stream, with reduced hemispheric asymmetry in the elderly that may be indicative of a decline in hemispheric specialization. Additionally, considerably enhanced early frontal cortical activation was observed in the elderly, suggesting frontal hyper-activation. These age-related differences in early sensory processing are discussed in terms of recent proposals that normal aging involves large-scale compensatory reorganization. Our results suggest that such compensatory mechanisms are not restricted to later higher-order cognitive processes but may also be a feature of early sensory-perceptual processes.
MeetingVis: Visual Narratives to Assist in Recalling Meeting Context and Content.
Shi, Yang; Bryan, Chris; Bhamidipati, Sridatt; Zhao, Ying; Zhang, Yaoxue; Ma, Kwan-Liu
2018-06-01
In team-based workplaces, reviewing and reflecting on the content from a previously held meeting can lead to better planning and preparation. However, ineffective meeting summaries can impair this process, especially when participants have difficulty remembering what was said and what its context was. To assist with this process, we introduce MeetingVis, a visual narrative-based approach to meeting summarization. MeetingVis is composed of two primary components: (1) a data pipeline that processes the spoken audio from a group discussion, and (2) a visual-based interface that efficiently displays the summarized content. To design MeetingVis, we create a taxonomy of relevant meeting data points, identifying salient elements to promote recall and reflection. These are mapped to an augmented storyline visualization, which combines the display of participant activities, topic evolutions, and task assignments. For evaluation, we conduct a qualitative user study with five groups. Feedback from the study indicates that MeetingVis effectively triggers the recall of subtle details from prior meetings: all study participants were able to remember new details, points, and tasks compared to an unaided, memory-only baseline. This visual-based approach can also potentially enhance the productivity of both individuals and the whole team.
Kavcic, Voyko; Triplett, Regina L.; Das, Anasuya; Martin, Tim; Huxlin, Krystel R.
2015-01-01
Partial cortical blindness is a visual deficit caused by unilateral damage to the primary visual cortex, a condition previously considered beyond hopes of rehabilitation. However, recent data demonstrate that patients may recover both simple and global motion discrimination following intensive training in their blind field. The present experiments characterized motion-induced neural activity of cortically blind (CB) subjects prior to the onset of visual rehabilitation. This was done to provide information about visual processing capabilities available to mediate training-induced visual improvements. Visual Evoked Potentials (VEPs) were recorded from two experimental groups consisting of 9 CB subjects and 9 age-matched, visually-intact controls. VEPs were collected following lateralized stimulus presentation to each of the 4 visual field quadrants. VEP waveforms were examined for both stimulus-onset (SO) and motion-onset (MO) related components in postero-lateral electrodes. While stimulus presentation to intact regions of the visual field elicited normal SO-P1, SO-N1, SO-P2 and MO-N2 amplitudes and latencies in contralateral brain regions of CB subjects, these components were not observed contralateral to stimulus presentation in blind quadrants of the visual field. In damaged brain hemispheres, SO-VEPs were only recorded following stimulus presentation to intact visual field quadrants, via inter-hemispheric transfer. MO-VEPs were only recorded from damaged left brain hemispheres, possibly reflecting a native left/right asymmetry in inter-hemispheric connections. The present findings suggest that damaged brain hemispheres contain areas capable of responding to visual stimulation. However, in the absence of training or rehabilitation, these areas only generate detectable VEPs in response to stimulation of the intact hemifield of vision. PMID:25575450
A Visualization-Based Tutoring Tool for Engineering Education
NASA Astrophysics Data System (ADS)
Nguyen, Tang-Hung; Khoo, I.-Hung
2010-06-01
In engineering disciplines, students usually have a hard time visualizing different aspects of engineering analysis and design, which inherently are too complex or abstract to fully understand without the aid of visual explanations or visualizations. For example, when learning materials and sequences of the construction process, students need to visualize how all components of a constructed facility are assembled. Such visualization cannot be achieved in a textbook or a traditional lecturing environment. In this paper, the authors present the development of computer tutoring software in which different visualization tools, including video clips, 3-dimensional models, drawings, pictures/photos, and complementary texts, are used to assist students in deeply understanding and effectively mastering materials. The paper also discusses the implementation and effectiveness evaluation of the proposed tutoring software, which was used to teach a construction engineering management course offered at California State University, Long Beach.
Symbol processing in the left angular gyrus: evidence from passive perception of digits.
Price, Gavin R; Ansari, Daniel
2011-08-01
Arabic digits are one of the most ubiquitous symbol sets in the world. While there have been many investigations into the neural processing of the semantic information digits represent (e.g. through numerical comparison tasks), little is known about the neural mechanisms which support the processing of digits as visual symbols. To characterise the component neurocognitive mechanisms which underlie numerical cognition, it is essential to understand the processing of digits as a visual category, independent of numerical magnitude processing. The 'Triple Code Model' (Dehaene, 1992; Dehaene and Cohen, 1995) posits an asemantic visual code for processing Arabic digits in the ventral visual stream, yet there is currently little empirical evidence in support of this code. This outstanding question was addressed in the current functional Magnetic Resonance Imaging (fMRI) study by contrasting brain responses during the passive viewing of digits versus letters and novel symbols at short (50 ms) and long (500 ms) presentation times. The results of this study reveal increased activation for familiar symbols (digits and letters) relative to unfamiliar symbols (scrambled digits and letters) at long presentation durations in the left dorsal Angular gyrus (dAG). Furthermore, increased activation for Arabic digits was observed in the left ventral Angular gyrus (vAG) in comparison to letters, scrambled digits and scrambled letters at long presentation durations, but no digit-specific activation in any region at short presentation durations. These results suggest an absence of a digit-specific 'Visual Number Form Area' (VNFA) in the ventral visual cortex, and provide evidence for the role of the left ventral AG during the processing of digits in the absence of any explicit processing demands. We conclude that Arabic digit processing depends specifically on the left AG rather than a ventral visual stream VNFA. Copyright © 2011 Elsevier Inc. All rights reserved.
N270 sensitivity to conflict strength and working memory: A combined ERP and sLORETA study.
Scannella, Sébastien; Pariente, Jérémie; De Boissezon, Xavier; Castel-Lacanal, Evelyne; Chauveau, Nicolas; Causse, Mickaël; Dehais, Frédéric; Pastor, Josette
2016-01-15
The event-related potential N270 component is known to be an electrophysiological marker of supramodal conflict processing. However, little is known about the factors that may modulate its amplitude. In particular, among all studies that have investigated the N270, little or no control of the conflict strength or of the working memory load has been applied, leaving a gap in the understanding of this component. We designed a spatial audiovisual conflict task with simultaneous target and cross-modal distractor to evaluate the N270 sensitivity to the conflict strength (i.e., visual target with auditory distractor or auditory target with visual distractor) and the load in working memory (goal task maintenance with frequent change in the target modality). In a first session, participants had to focus on one modality for the target position to be considered (left-hand or right-hand) while the distractor could be at the same side (compatible) or at the opposite side (incompatible). In a second session, we used the same set of stimuli as in the first session with an additional distinct auditory signal that cued the participants to frequently switch between the auditory and the visual targets. We found that (1) reaction times and N270 amplitudes for conflicting situations were larger within the auditory target condition compared to the visual one, (2) the increase in target maintenance effort led to an equivalent increase of both reaction times and N270 amplitudes within all conditions and (3) the right dorsolateral prefrontal cortex current density was higher for both conflicting and active maintenance of the target situations. These results provide new evidence that the N270 component is an electrophysiological marker of supramodal conflict processing that is sensitive to the conflict strength and that conflict processing and active maintenance of the task goal are two functions of a common executive attention system. Copyright © 2015 Elsevier B.V. All rights reserved.
Schmithorst, Vincent J; Brown, Rhonda Douglas
2004-07-01
The suitability of a previously hypothesized triple-code model of numerical processing, involving analog magnitude, auditory verbal, and visual Arabic codes of representation, was investigated for the complex mathematical task of the mental addition and subtraction of fractions. Functional magnetic resonance imaging (fMRI) data from 15 normal adult subjects were processed using exploratory group Independent Component Analysis (ICA). Separate task-related components were found with activation in bilateral inferior parietal, left perisylvian, and ventral occipitotemporal areas. These results support the hypothesized triple-code model corresponding to the activated regions found in the individual components and indicate that the triple-code model may be a suitable framework for analyzing the neuropsychological bases of the performance of complex mathematical tasks. Copyright 2004 Elsevier Inc.
The Effects of Training on a Young Child with Cortical Visual Impairment: An Exploratory Study.
ERIC Educational Resources Information Center
Lueck, Amanda Hall; Dornbusch, Helen; Hart, Jeri
1999-01-01
This exploratory study investigated the effects of the components of visual environmental management, visual skills training, and visually dependent task training on the performance of visual behaviors of a toddler with multiple disabilities including cortical visual impairment. Training components were implemented by the mother during daily…
Aviezer, Hillel; Hassin, Ran R; Perry, Anat; Dudarev, Veronica; Bentin, Shlomo
2012-04-01
The current study examined the nature of deficits in emotion recognition from facial expressions in case LG, an individual with a rare form of developmental visual agnosia (DVA). LG presents with profoundly impaired recognition of facial expressions, yet the underlying nature of his deficit remains unknown. During typical face processing, normal sighted individuals extract information about expressed emotions from face regions with activity diagnostic for specific emotion categories. Given LG's impairment, we sought to shed light on his emotion perception by examining if priming facial expressions with diagnostic emotional face components would facilitate his recognition of the emotion expressed by the face. LG and control participants matched isolated face components with components appearing in a subsequently presented full-face and then categorized the face's emotion. Critically, the matched components were from regions which were diagnostic or non-diagnostic of the emotion portrayed by the full face. In experiment 1, when the full faces were briefly presented (150 ms), LG's performance was strongly influenced by the diagnosticity of the components: his emotion recognition was boosted within normal limits when diagnostic components were used and was obliterated when non-diagnostic components were used. By contrast, in experiment 2, when the face-exposure duration was extended (2000 ms), the beneficial effect of the diagnostic matching was diminished as was the detrimental effect of the non-diagnostic matching. These data highlight the impact of diagnostic facial features in normal expression recognition and suggest that impaired emotion recognition in DVA results from deficient visual integration across diagnostic face components. Copyright © 2012 Elsevier Ltd. All rights reserved.
Effect of contrast on the perception of direction of a moving pattern
NASA Technical Reports Server (NTRS)
Stone, L. S.; Watson, A. B.; Mulligan, J. B.
1989-01-01
A series of experiments examining the effect of contrast on the perception of moving plaids was performed to test the hypothesis that the human visual system determines the direction of a moving plaid in a two-stage process: decomposition into component motion followed by application of the intersection-of-constraints rule. Although there is recent evidence that the first tenet of the hypothesis is correct, i.e., that plaid motion is initially decomposed into the motion of the individual grating components, the nature of the second-stage combination rule has not yet been established. It was found that when the gratings within the plaid are of different contrast, the perceived direction is not predicted by the intersection-of-constraints rule. There is a strong (up to 20 deg) bias toward the direction of the higher-contrast grating. A revised model, which incorporates a contrast-dependent weighting of perceived grating speed as observed for one-dimensional patterns, can quantitatively predict most of the results. The results are then discussed in the context of various models of human visual motion processing and of physiological responses of neurons in the primate visual system.
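To make the two-stage account concrete, here is a minimal sketch of the intersection-of-constraints (IOC) computation together with a placeholder contrast-dependent weighting of component speeds; the power-law weighting and its exponent are illustrative assumptions, not the revised model reported by the authors.

```python
import numpy as np

def ioc_velocity(dirs_deg, speeds):
    """Plaid velocity v solving v . n_i = s_i for each grating's normal direction n_i and speed s_i."""
    n = np.array([[np.cos(np.radians(d)), np.sin(np.radians(d))] for d in dirs_deg])
    s = np.asarray(speeds, float)
    v, *_ = np.linalg.lstsq(n, s, rcond=None)
    return v

def perceived_direction(dirs_deg, speeds, contrasts, gain=0.3):
    """IOC direction after scaling each grating's speed by a contrast-dependent factor.

    Assumption (illustrative): perceived speed ~ physical speed * (c / c_max)**gain,
    so the lower-contrast grating appears slower and the IOC solution is biased
    toward the higher-contrast grating's direction.
    """
    c = np.asarray(contrasts, float)
    weights = (c / c.max()) ** gain
    v = ioc_velocity(dirs_deg, np.asarray(speeds, float) * weights)
    return np.degrees(np.arctan2(v[1], v[0]))

# Example: two gratings drifting 30 deg either side of vertical at equal speed,
# one at 40% and one at 10% contrast.
print(perceived_direction([60.0, 120.0], [1.0, 1.0], [0.4, 0.1]))
```

With these example values the predicted direction shifts from 90 deg toward the higher-contrast component's direction, qualitatively reproducing the reported bias.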
Lighten the Load: Scaffolding Visual Literacy in Biochemistry and Molecular Biology
Offerdahl, Erika G.; Arneson, Jessie B.; Byrne, Nicholas
2017-01-01
The development of scientific visual literacy has been identified as critical to the training of tomorrow’s scientists and citizens alike. Within the context of the molecular life sciences in particular, visual representations frequently incorporate various components, such as discipline-specific graphical and diagrammatic features, varied levels of abstraction, and spatial arrangements of visual elements to convey information. Visual literacy is achieved when an individual understands the various ways in which a discipline uses these components to represent a particular way of knowing. Owing to the complex nature of visual representations, the activities through which visual literacy is developed have high cognitive load. Cognitive load can be reduced by first helping students to become fluent with the discrete components of visual representations before asking them to simultaneously integrate these components to extract the intended meaning of a representation. We present a taxonomy for characterizing one component of visual representations—the level of abstraction—as a first step in understanding the opportunities afforded students to develop fluency. Further, we demonstrate how our taxonomy can be used to analyze course assessments and spur discussions regarding the extent to which the development of visual literacy skills is supported by instruction within an undergraduate biochemistry curriculum. PMID:28130273
Using component technologies for web based wavelet enhanced mammographic image visualization.
Sakellaropoulos, P; Costaridou, L; Panayiotakis, G
2000-01-01
The poor contrast detectability of mammography can be dealt with by domain-specific software visualization tools. Remote desktop client access and time performance limitations of a previously reported visualization tool are addressed, aiming at more efficient visualization of mammographic image resources existing in web or PACS image servers. This effort is also motivated by the fact that at present, web browsers do not support domain-specific medical image visualization. To deal with desktop client access, the tool was redesigned by exploring component technologies, enabling the integration of stand-alone domain-specific mammographic image functionality in a web browsing environment (web adaptation). The integration method is based on ActiveX Document Server technology. ActiveX Document is a part of Object Linking and Embedding (OLE) extensible systems object technology, offering new services in existing applications. The standard DICOM 3.0 part 10 compatible image-format specification Papyrus 3.0 is supported, in addition to standard digitization formats such as TIFF. The visualization functionality of the tool has been enhanced by including a fast wavelet transform implementation, which allows for real-time wavelet-based contrast enhancement and denoising operations. Initial use of the tool with mammograms of various breast structures demonstrated its potential in improving visualization of diagnostic mammographic features. Web adaptation and real-time wavelet processing enhance the potential of the previously reported tool in remote diagnosis and education in mammography.
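A rough sketch of the kind of wavelet operation described, using PyWavelets rather than the reported ActiveX component: fine-scale coefficients are soft-thresholded to suppress noise and the surviving detail is amplified to enhance contrast. The wavelet family, threshold, and gain below are assumptions for illustration only.

```python
import numpy as np
import pywt

def wavelet_enhance(image, wavelet="db4", levels=3, noise_sigma=5.0, gain=1.5):
    """Denoise and contrast-enhance a 2-D image in the wavelet domain (illustrative)."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    enhanced = [coeffs[0]]                              # keep the approximation band
    for cH, cV, cD in coeffs[1:]:
        bands = []
        for band in (cH, cV, cD):
            band = pywt.threshold(band, noise_sigma, mode="soft")  # suppress noise
            bands.append(gain * band)                              # boost remaining detail
        enhanced.append(tuple(bands))
    return pywt.waverec2(enhanced, wavelet)
```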
EEG artifact elimination by extraction of ICA-component features using image processing algorithms.
Radüntz, T; Scouten, J; Hochmuth, O; Meffert, B
2015-03-30
Artifact rejection is a central issue when dealing with electroencephalogram recordings. Although independent component analysis (ICA) separates data into linearly independent components (ICs), the classification of these components as artifact or EEG signal still requires visual inspection by experts. In this paper, we achieve automated artifact elimination using linear discriminant analysis (LDA) for classification of feature vectors extracted from ICA components via image processing algorithms. We compare the performance of this automated classifier to visual classification by experts and identify range filtering as a feature extraction method with great potential for automated IC artifact recognition (accuracy rate 88%). We obtain almost the same level of recognition performance for geometric features and local binary pattern (LBP) features. Compared to the existing automated solutions, the proposed method has two main advantages: First, it does not depend on direct recording of artifact signals, which then, e.g. have to be subtracted from the contaminated EEG. Second, it is not limited to a specific number or type of artifact. In summary, the present method is an automatic, reliable, real-time capable and practical tool that reduces the time-intensive manual selection of ICs for artifact removal. The results are very promising despite the relatively small channel resolution of 25 electrodes. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
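As a hedged illustration of the pipeline described (not the authors' code), the sketch below turns each IC scalp-map image into a range-filter feature vector and trains an LDA classifier to separate artifact from brain components; the neighbourhood size, histogram binning, and labelling convention are assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def range_filter_features(scalp_map, size=3, bins=16):
    """Histogram of local-range values (local max minus local min) of a 2-D IC scalp map."""
    rng = maximum_filter(scalp_map, size=size) - minimum_filter(scalp_map, size=size)
    hist, _ = np.histogram(rng, bins=bins, range=(0.0, rng.max() + 1e-9), density=True)
    return hist

def train_ic_classifier(maps, labels):
    """maps: list of 2-D IC topography images; labels: 1 = artifact, 0 = brain signal."""
    X = np.vstack([range_filter_features(m) for m in maps])
    return LinearDiscriminantAnalysis().fit(X, labels)
```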
The time course of individual face recognition: A pattern analysis of ERP signals.
Nemrodov, Dan; Niemeier, Matthias; Mok, Jenkin Ngo Yin; Nestor, Adrian
2016-05-15
An extensive body of work documents the time course of neural face processing in the human visual cortex. However, the majority of this work has focused on specific temporal landmarks, such as N170 and N250 components, derived through univariate analyses of EEG data. Here, we take on a broader evaluation of ERP signals related to individual face recognition as we attempt to move beyond the leading theoretical and methodological framework through the application of pattern analysis to ERP data. Specifically, we investigate the spatiotemporal profile of identity recognition across variation in emotional expression. To this end, we apply pattern classification to ERP signals both in time, for any single electrode, and in space, across multiple electrodes. Our results confirm the significance of traditional ERP components in face processing. At the same time though, they support the idea that the temporal profile of face recognition is incompletely described by such components. First, we show that signals associated with different facial identities can be discriminated from each other outside the scope of these components, as early as 70ms following stimulus presentation. Next, electrodes associated with traditional ERP components as well as, critically, those not associated with such components are shown to contribute information to stimulus discriminability. And last, the levels of ERP-based pattern discrimination are found to correlate with recognition accuracy across subjects confirming the relevance of these methods for bridging brain and behavior data. Altogether, the current results shed new light on the fine-grained time course of neural face processing and showcase the value of novel methods for pattern analysis to investigating fundamental aspects of visual recognition. Copyright © 2016 Elsevier Inc. All rights reserved.
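A minimal sketch of the spatiotemporal pattern-classification idea: at every time point the across-electrode voltage pattern is used to decode stimulus identity with cross-validation. The array layout, the logistic-regression classifier, and the cross-validation scheme are illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def decode_over_time(epochs, labels, cv=5):
    """epochs: (n_trials, n_electrodes, n_times); labels: (n_trials,) identity codes.

    Returns cross-validated decoding accuracy at every time point.
    """
    n_trials, n_electrodes, n_times = epochs.shape
    accuracy = np.zeros(n_times)
    for t in range(n_times):
        pattern = epochs[:, :, t]                      # spatial pattern at time t
        scores = cross_val_score(LogisticRegression(max_iter=1000), pattern, labels, cv=cv)
        accuracy[t] = scores.mean()
    return accuracy
```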
Cichy, Radoslaw Martin; Pantazis, Dimitrios
2017-09-01
Multivariate pattern analysis of magnetoencephalography (MEG) and electroencephalography (EEG) data can reveal the rapid neural dynamics underlying cognition. However, MEG and EEG have systematic differences in sampling neural activity. This poses the question to what degree such measurement differences consistently bias the results of multivariate analysis applied to MEG and EEG activation patterns. To investigate, we conducted a concurrent MEG/EEG study while participants viewed images of everyday objects. We applied multivariate classification analyses to MEG and EEG data, and compared the resulting time courses to each other, and to fMRI data for an independent evaluation in space. We found that both MEG and EEG revealed the millisecond spatio-temporal dynamics of visual processing with largely equivalent results. Beyond yielding convergent results, we found that MEG and EEG also captured partly unique aspects of visual representations. Those unique components emerged earlier in time for MEG than for EEG. Identifying the sources of those unique components with fMRI, we found the locus for both MEG and EEG in high-level visual cortex, and in addition for MEG in low-level visual cortex. Together, our results show that multivariate analyses of MEG and EEG data offer a convergent and complementary view of neural processing, and motivate the wider adoption of these methods in both MEG and EEG research. Copyright © 2017 Elsevier Inc. All rights reserved.
Hearing gestures, seeing music: vision influences perceived tone duration.
Schutz, Michael; Lipscomb, Scott
2007-01-01
Percussionists inadvertently use visual information to strategically manipulate audience perception of note duration. Videos of long (L) and short (S) notes performed by a world-renowned percussionist were separated into visual (Lv, Sv) and auditory (La, Sa) components. Visual components contained only the gesture used to perform the note, auditory components the acoustic note itself. Audio and visual components were then crossed to create realistic musical stimuli. Participants were informed of the mismatch, and asked to rate note duration of these audio-visual pairs based on sound alone. Ratings varied based on visual (Lv versus Sv), but not auditory (La versus Sa) components. Therefore while longer gestures do not make longer notes, longer gestures make longer sounding notes through the integration of sensory information. This finding contradicts previous research showing that audition dominates temporal tasks such as duration judgment.
Muñoz-Ruata, J; Caro-Martínez, E; Martínez Pérez, L; Borja, M
2010-12-01
Perception disorders are frequently observed in persons with intellectual disability (ID) and their influence on cognition has been discussed. The objective of this study is to clarify the mechanisms behind these alterations by analysing the early component of visual event-related potentials, the N1 wave, which is related to perception alterations in several pathologies. Additionally, the relationship between N1 and neuropsychological visual tests was studied with the aim of understanding its functional significance in ID persons. A group of 69 subjects, with etiologically heterogeneous mild ID, performed an odd-ball task of active discrimination of geometric figures. N1a (frontal) and N1b (post-occipital) waves were obtained from the evoked potentials. They also performed several neuropsychological tests. Only component N1a, produced by the target stimulus, showed significant correlations with the visual integration, visual semantic association, visual analogical reasoning tests, Perceptual Reasoning Index (Wechsler Intelligence Scale for Children Fourth Edition) and intelligence quotient. The systematic correlations, produced by the target stimulus in perceptual abilities tasks, with the N1a (frontal) and not with N1b (posterior), suggest that the visual perception process involves frontal participation. These correlations support the idea that the N1a and N1b are not equivalent. The relationship between frontal functions and early stages of visual perception is reviewed and discussed, as well as the frontal contribution to the neuropsychological tests used. A possible relationship between the frontal activity dysfunction in ID and perceptive problems is suggested. Perceptive alteration observed in persons with ID could indeed be due to altered sensory areas, but also to a failure in the frontal participation of perceptive processes conceived as elaborations inside reverberant circuits of perception-action. © 2010 The Authors. Journal of Intellectual Disability Research © 2010 Blackwell Publishing Ltd.
EEGVIS: A MATLAB Toolbox for Browsing, Exploring, and Viewing Large Datasets.
Robbins, Kay A
2012-01-01
Recent advances in data monitoring and sensor technology have accelerated the acquisition of very large data sets. Streaming data sets from instrumentation such as multi-channel EEG recording usually must undergo substantial pre-processing and artifact removal. Even when using automated procedures, most scientists engage in laborious manual examination and processing to assure high-quality data and to identify interesting or problematic data segments. Researchers also do not have a convenient method of visually assessing the effects of applying any stage in a processing pipeline. EEGVIS is a MATLAB toolbox that allows users to quickly explore multi-channel EEG and other large array-based data sets using multi-scale drill-down techniques. Customizable summary views reveal potentially interesting sections of data, which users can explore further by clicking to examine using detailed viewing components. The viewer and a companion browser are built on our MoBBED framework, which has a library of modular viewing components that can be mixed and matched to best reveal structure. Users can easily create new viewers for their specific data without any programming during the exploration process. These viewers automatically support pan, zoom, resizing of individual components, and cursor exploration. The toolbox can be used directly in MATLAB at any stage in a processing pipeline, as a plug-in for EEGLAB, or as a standalone precompiled application without MATLAB running. EEGVIS and its supporting packages are freely available under the GNU general public license at http://visual.cs.utsa.edu/eegvis.
Deficient multisensory integration in schizophrenia: an event-related potential study.
Stekelenburg, Jeroen J; Maes, Jan Pieter; Van Gool, Arthur R; Sitskoorn, Margriet; Vroomen, Jean
2013-07-01
In many natural audiovisual events (e.g., the sight of a face articulating the syllable /ba/), the visual signal precedes the sound and thus allows observers to predict the onset and the content of the sound. In healthy adults, the N1 component of the event-related brain potential (ERP), reflecting neural activity associated with basic sound processing, is suppressed if a sound is accompanied by a video that reliably predicts sound onset. If the sound does not match the content of the video (e.g., hearing /ba/ while lipreading /fu/), the later occurring P2 component is affected. Here, we examined whether these visual information sources affect auditory processing in patients with schizophrenia. The electroencephalography (EEG) was recorded in 18 patients with schizophrenia and compared with that of 18 healthy volunteers. As stimuli we used video recordings of natural actions in which visual information preceded and predicted the onset of the sound that was either congruent or incongruent with the video. For the healthy control group, visual information reduced the auditory-evoked N1 if compared to a sound-only condition, and stimulus-congruency affected the P2. This reduction in N1 was absent in patients with schizophrenia, and the congruency effect on the P2 was diminished. Distributed source estimations revealed deficits in the network subserving audiovisual integration in patients with schizophrenia. The results show a deficit in multisensory processing in patients with schizophrenia and suggest that multisensory integration dysfunction may be an important and, to date, under-researched aspect of schizophrenia. Copyright © 2013. Published by Elsevier B.V.
van der Stelt, O; van der Molen, M; Boudewijn Gunning, W; Kok, A
2001-10-01
In order to gain insight into the functional and macroanatomical loci of visual selective processing deficits that may be basic to attention-deficit hyperactivity disorder (ADHD), the present study examined multi-channel event-related potentials (ERPs) recorded from 7- to 11-year-old boys clinically diagnosed as having ADHD (n=24) and age-matched healthy control boys (n=24) while they performed a visual (color) selective attention task. The spatio-temporal dynamics of several ERP components related to attention to color were characterized using topographic profile analysis, topographic mapping of the ERP and associated scalp current density distributions, and spatio-temporal source potential modeling. Boys with ADHD showed a lower target hit rate, a higher false-alarm rate, and a lower perceptual sensitivity than controls. Also, whereas color attention induced in the ERPs from controls a characteristic early frontally maximal selection positivity (FSP), ADHD boys displayed little or no FSP. Similarly, ADHD boys manifested P3b amplitude decrements that were partially lateralized (i.e., maximal at left temporal scalp locations) as well as affected by maturation. These results indicate that ADHD boys suffer from deficits at both relatively early (sensory) and late (semantic) levels of visual selective information processing. The data also support the hypothesis that the visual selective processing deficits observed in the ADHD boys originate from deficits in the strength of activation of a neural network comprising prefrontal and occipito-temporal brain regions. This network seems to be actively engaged during attention to color and may contain the major intracerebral generating sources of the associated scalp-recorded ERP components.
Design of Instrument Control Software for Solar Vector Magnetograph at Udaipur Solar Observatory
NASA Astrophysics Data System (ADS)
Gosain, Sanjay; Venkatakrishnan, P.; Venugopalan, K.
2004-04-01
A magnetograph is an instrument which measures the solar magnetic field by measuring Zeeman-induced polarization in solar spectral lines. In a typical filter-based magnetograph there are three main modules, namely a polarimeter, a narrow-band spectrometer (filter), and an imager (CCD camera). For successful operation of the magnetograph it is essential that these modules work in synchronization with each other. Here, we describe the design of the instrument control system implemented for the Solar Vector Magnetograph under development at Udaipur Solar Observatory. The control software is written in Visual Basic and exploits Component Object Model (COM) components for fast and flexible application development. The user can interact with the instrument modules through a Graphical User Interface (GUI) and can program the sequence of magnetograph operations. The integration of Interactive Data Language (IDL) ActiveX components in the interface provides a powerful tool for online visualization, analysis and processing of images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raguvarun, K., E-mail: prajagopal@iitm.ac.in; Balasubramaniam, Krishnan, E-mail: prajagopal@iitm.ac.in; Rajagopal, Prabhu, E-mail: prajagopal@iitm.ac.in
Additive manufacturing methods are gaining increasing popularity for rapidly and efficiently manufacturing parts and components in the industrial context, as well as for domestic applications. However, except when used for prototyping or rapid visualization of components, industries are concerned with the load carrying capacity and strength achievable by additive manufactured parts. In this paper, the wire-arc additive manufacturing (AM) process based on gas tungsten arc welding (GTAW) has been examined for the internal structure and constitution of components generated by the process. High-resolution 3D X-ray tomography is used to gain cut-views through wedge-shaped parts created using this GTAW additive manufacturing process with titanium alloy materials. In this work, two different control conditions for the GTAW process are considered. The studies reveal clusters of porosities, located in periodic spatial intervals along the sample cross-section. Such internal defects can have a detrimental effect on the strength of the resulting AM components, as shown in destructive testing studies. Closer examination of this phenomenon shows that defect clusters are preferentially located at GTAW traversal path intervals. These results highlight the strong need for enhanced control of process parameters in ensuring components with minimal defects and higher strength.
NASA Astrophysics Data System (ADS)
Raguvarun, K.; Balasubramaniam, Krishnan; Rajagopal, Prabhu; Palanisamy, Suresh; Nagarajah, Romesh; Hoye, Nicholas; Curiri, Dominic; Kapoor, Ajay
2015-03-01
Additive manufacturing methods are gaining increasing popularity for rapidly and efficiently manufacturing parts and components in the industrial context, as well as for domestic applications. However, except when used for prototyping or rapid visualization of components, industries are concerned with the load carrying capacity and strength achievable by additive manufactured parts. In this paper, the wire-arc additive manufacturing (AM) process based on gas tungsten arc welding (GTAW) has been examined for the internal structure and constitution of components generated by the process. High-resolution 3D X-ray tomography is used to gain cut-views through wedge-shaped parts created using this GTAW additive manufacturing process with titanium alloy materials. In this work, two different control conditions for the GTAW process are considered. The studies reveal clusters of porosities, located in periodic spatial intervals along the sample cross-section. Such internal defects can have a detrimental effect on the strength of the resulting AM components, as shown in destructive testing studies. Closer examination of this phenomenon shows that defect clusters are preferentially located at GTAW traversal path intervals. These results highlight the strong need for enhanced control of process parameters in ensuring components with minimal defects and higher strength.
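A small sketch of how porosity clusters like those reported might be located in a reconstructed tomography slice: low-density voxels inside the part are thresholded and grouped into connected regions. The intensity normalisation, threshold, and minimum defect size below are assumptions for illustration, not parameters from the study.

```python
import numpy as np
from scipy import ndimage

def find_porosity(slice_img, part_mask, pore_threshold=0.35, min_voxels=5):
    """Return a label image of pores and their centroids within the part cross-section.

    slice_img: 2-D array of normalised CT intensities (0 = air, 1 = dense metal).
    part_mask: boolean mask of the component's cross-section in the slice.
    """
    pores = (slice_img < pore_threshold) & part_mask        # low density inside the part
    labels, n = ndimage.label(pores)                        # connected pore regions
    sizes = ndimage.sum(pores, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_voxels) + 1          # discard tiny regions
    centroids = ndimage.center_of_mass(pores, labels, keep) if len(keep) else []
    return labels, centroids
```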
Frewen, Paul; Thornley, Elizabeth; Rabellino, Daniela; Lanius, Ruth
2017-01-01
Background: Changes to the diagnostic criteria for PTSD in DSM-5 reflect an increased emphasis on negative cognition referring to self and other, including self-blame, and related pervasive negative affective states including self-conscious emotions such as guilt and shame. Objective: Investigate the neural correlates of valenced self-referential processing (SRP) and other-referential processing (ORP) in persons with PTSD. Method: We compared response to the Visual-Verbal Self-Other Referential Processing Task in an fMRI study of women with (n = 20) versus without (n = 24) PTSD primarily relating to childhood and interpersonal trauma histories using statistical parametric mapping and group independent component analysis. Results: As compared to women without PTSD, women with PTSD endorsed negative words as more descriptive both of themselves and others, whereas positive words were endorsed as less descriptive both of themselves and others. Women with PTSD also reported a greater experience of negative affect and a lesser experience of positive affect during SRP specifically. Significant differences between groups were observed within independent components defined by ventral- and middle-medial prefrontal cortex, mediolateral parietal cortex, and visual cortex, depending on experimental conditions. Conclusions: This study reveals brain-based disturbances during SRP and ORP in women with PTSD related to interpersonal and developmental trauma. Psychological assessment and treatment should address altered sense of self and affective response to others in PTSD. PMID:28649298
Default Mode Network (DMN) Deactivation during Odor-Visual Association
Karunanayaka, Prasanna R.; Wilson, Donald A.; Tobia, Michael J.; Martinez, Brittany; Meadowcroft, Mark; Eslinger, Paul J.; Yang, Qing X.
2017-01-01
Default mode network (DMN) deactivation has been shown to be functionally relevant for goal-directed cognition. In this study, we investigated the DMN's role during olfactory processing using two complementary functional magnetic resonance imaging (fMRI) paradigms with identical timing, visual-cue stimulation and response monitoring protocols. Twenty-nine healthy, non-smoking, right-handed adults (mean age = 26±4 yrs., 16 females) completed an odor-visual association fMRI paradigm that had two alternating odor+visual and visual-only trial conditions. During odor+visual trials, a visual cue was presented simultaneously with an odor, while during visual-only trial conditions the same visual cue was presented alone. Eighteen of the 29 participants (mean age = 27.0 ± 6.0 yrs., 11 females) also took part in a control no-odor fMRI paradigm that consisted of visual-only trial conditions which were identical to the visual-only trials in the odor-visual association paradigm. We used Independent Component Analysis (ICA), extended unified structural equation modeling (euSEM), and psychophysiological interaction (PPI) to investigate the interplay between the DMN and olfactory network. In the odor-visual association paradigm, DMN deactivation was evoked by both the odor+visual and visual-only trial conditions. In contrast, the visual-only trials in the no-odor paradigm did not evoke consistent DMN deactivation. In the odor-visual association paradigm, the euSEM and PPI analyses identified a directed connectivity between the DMN and olfactory network which was significantly different between odor+visual and visual-only trial conditions. The results support a strong interaction between the DMN and olfactory network and highlight the DMN's role in task-evoked brain activity and behavioral responses during olfactory processing. PMID:27785847
Smartphone Analytics: Mobilizing the Lab into the Cloud for Omic-Scale Analyses.
Montenegro-Burke, J Rafael; Phommavongsay, Thiery; Aisporna, Aries E; Huan, Tao; Rinehart, Duane; Forsberg, Erica; Poole, Farris L; Thorgersen, Michael P; Adams, Michael W W; Krantz, Gregory; Fields, Matthew W; Northen, Trent R; Robbins, Paul D; Niedernhofer, Laura J; Lairson, Luke; Benton, H Paul; Siuzdak, Gary
2016-10-04
Active data screening is an integral part of many scientific activities, and mobile technologies have greatly facilitated this process by minimizing the reliance on large hardware instrumentation. In order to meet with the increasingly growing field of metabolomics and heavy workload of data processing, we designed the first remote metabolomic data screening platform for mobile devices. Two mobile applications (apps), XCMS Mobile and METLIN Mobile, facilitate access to XCMS and METLIN, which are the most important components in the computer-based XCMS Online platforms. These mobile apps allow for the visualization and analysis of metabolic data throughout the entire analytical process. Specifically, XCMS Mobile and METLIN Mobile provide the capabilities for remote monitoring of data processing, real time notifications for the data processing, visualization and interactive analysis of processed data (e.g., cloud plots, principal component analysis, box-plots, extracted ion chromatograms, and hierarchical cluster analysis), and database searching for metabolite identification. These apps, available on Apple iOS and Google Android operating systems, allow for the migration of metabolomic research onto mobile devices for better accessibility beyond direct instrument operation. The utility of XCMS Mobile and METLIN Mobile functionalities was developed and is demonstrated here through the metabolomic LC-MS analyses of stem cells, colon cancer, aging, and bacterial metabolism.
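For concreteness, here is a generic sketch of one of the processed-data views mentioned (principal component analysis of a feature-intensity table); the log transform and autoscaling are common metabolomics choices assumed here, not necessarily what XCMS Online applies.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def metabolite_pca(intensity_table, n_components=2):
    """intensity_table: (n_samples, n_features) matrix of metabolite feature intensities."""
    X = StandardScaler().fit_transform(np.log1p(intensity_table))  # log-scale, then autoscale
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(X)                                  # sample scores for plotting
    return scores, pca.explained_variance_ratio_
```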
Smartphone Analytics: Mobilizing the Lab into the Cloud for Omic-Scale Analyses
2016-01-01
Active data screening is an integral part of many scientific activities, and mobile technologies have greatly facilitated this process by minimizing the reliance on large hardware instrumentation. In order to meet with the increasingly growing field of metabolomics and heavy workload of data processing, we designed the first remote metabolomic data screening platform for mobile devices. Two mobile applications (apps), XCMS Mobile and METLIN Mobile, facilitate access to XCMS and METLIN, which are the most important components in the computer-based XCMS Online platforms. These mobile apps allow for the visualization and analysis of metabolic data throughout the entire analytical process. Specifically, XCMS Mobile and METLIN Mobile provide the capabilities for remote monitoring of data processing, real time notifications for the data processing, visualization and interactive analysis of processed data (e.g., cloud plots, principal component analysis, box-plots, extracted ion chromatograms, and hierarchical cluster analysis), and database searching for metabolite identification. These apps, available on Apple iOS and Google Android operating systems, allow for the migration of metabolomic research onto mobile devices for better accessibility beyond direct instrument operation. The utility of XCMS Mobile and METLIN Mobile functionalities was developed and is demonstrated here through the metabolomic LC-MS analyses of stem cells, colon cancer, aging, and bacterial metabolism. PMID:27560777
Smartphone Analytics: Mobilizing the Lab into the Cloud for Omic-Scale Analyses
Montenegro-Burke, J. Rafael; Phommavongsay, Thiery; Aisporna, Aries E.; ...
2016-08-25
Active data screening is an integral part of many scientific activities, and mobile technologies have greatly facilitated this process by minimizing the reliance on large hardware instrumentation. In order to meet with the increasingly growing field of metabolomics and heavy workload of data processing, we designed the first remote metabolomic data screening platform for mobile devices. Two mobile applications (apps), XCMS Mobile and METLIN Mobile, facilitate access to XCMS and METLIN, which are the most important components in the computer-based XCMS Online platforms. These mobile apps allow for the visualization and analysis of metabolic data throughout the entire analytical process. Specifically, XCMS Mobile and METLIN Mobile provide the capabilities for remote monitoring of data processing, real time notifications for the data processing, visualization and interactive analysis of processed data (e.g., cloud plots, principal component analysis, box-plots, extracted ion chromatograms, and hierarchical cluster analysis), and database searching for metabolite identification. These apps, available on Apple iOS and Google Android operating systems, allow for the migration of metabolomic research onto mobile devices for better accessibility beyond direct instrument operation. The utility of XCMS Mobile and METLIN Mobile functionalities was developed and is demonstrated here through the metabolomic LC-MS analyses of stem cells, colon cancer, aging, and bacterial metabolism.
Smartphone Analytics: Mobilizing the Lab into the Cloud for Omic-Scale Analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montenegro-Burke, J. Rafael; Phommavongsay, Thiery; Aisporna, Aries E.
Active data screening is an integral part of many scientific activities, and mobile technologies have greatly facilitated this process by minimizing the reliance on large hardware instrumentation. In order to meet with the increasingly growing field of metabolomics and heavy workload of data processing, we designed the first remote metabolomic data screening platform for mobile devices. Two mobile applications (apps), XCMS Mobile and METLIN Mobile, facilitate access to XCMS and METLIN, which are the most important components in the computer-based XCMS Online platforms. These mobile apps allow for the visualization and analysis of metabolic data throughout the entire analytical process. Specifically, XCMS Mobile and METLIN Mobile provide the capabilities for remote monitoring of data processing, real time notifications for the data processing, visualization and interactive analysis of processed data (e.g., cloud plots, principal component analysis, box-plots, extracted ion chromatograms, and hierarchical cluster analysis), and database searching for metabolite identification. These apps, available on Apple iOS and Google Android operating systems, allow for the migration of metabolomic research onto mobile devices for better accessibility beyond direct instrument operation. The utility of XCMS Mobile and METLIN Mobile functionalities was developed and is demonstrated here through the metabolomic LC-MS analyses of stem cells, colon cancer, aging, and bacterial metabolism.
Underlying mechanisms of writing difficulties among children with neurofibromatosis type 1.
Gilboa, Yafit; Josman, Naomi; Fattal-Valevski, Aviva; Toledano-Alhadef, Hagit; Rosenblum, Sara
2014-06-01
Writing is a complex activity in which lower-level perceptual-motor processes and higher-level cognitive processes continuously interact. Preliminary evidence suggests that writing difficulties are common to children with Neurofibromatosis type 1 (NF1). The aim of this study was to compare the performance of children with and without NF1 in lower (visual perception, motor coordination and visual-motor integration) and higher processes (verbal and performance intelligence, visual spatial organization and visual memory) required for intact writing; and to identify the components that predict the written product's spatial arrangement and content among children with NF1. Thirty children with NF1 (ages 8-16) and 30 typically developing children matched by gender and age were tested, using standardized assessments. Children with NF1 had a significantly inferior performance in comparison to control children, on all tests that measured lower and higher level processes. The cognitive planning skill was found as a predictor of the written product's spatial arrangement. The verbal intelligence predicted the written content level. Results suggest that high level processes underlie the poor quality of writing product in children with NF1. Treatment approaches for children with NF1 must include detailed assessments of cognitive planning and language skills. Copyright © 2014 Elsevier Ltd. All rights reserved.
Interactions dominate the dynamics of visual cognition.
Stephen, Damian G; Mirman, Daniel
2010-04-01
Many cognitive theories have described behavior as the summation of independent contributions from separate components. Contrasting views have emphasized the importance of multiplicative interactions and emergent structure. We describe a statistical approach to distinguishing additive and multiplicative processes and apply it to the dynamics of eye movements during classic visual cognitive tasks. The results reveal interaction-dominant dynamics in eye movements in each of the three tasks, and that fine-grained eye movements are modulated by task constraints. These findings reveal the interactive nature of cognitive processing and are consistent with theories that view cognition as an emergent property of processes that are broadly distributed over many scales of space and time rather than a componential assembly line. Copyright 2009 Elsevier B.V. All rights reserved.
Visual recovery in cortical blindness is limited by high internal noise
Cavanaugh, Matthew R.; Zhang, Ruyuan; Melnick, Michael D.; Das, Anasuya; Roberts, Mariel; Tadin, Duje; Carrasco, Marisa; Huxlin, Krystel R.
2015-01-01
Damage to the primary visual cortex typically causes cortical blindness (CB) in the hemifield contralateral to the damaged hemisphere. Recent evidence indicates that visual training can partially reverse CB at trained locations. Whereas training induces near-complete recovery of coarse direction and orientation discriminations, deficits in fine motion processing remain. Here, we systematically disentangle components of the perceptual inefficiencies present in CB fields before and after coarse direction discrimination training. In seven human CB subjects, we measured threshold versus noise functions before and after coarse direction discrimination training in the blind field and at corresponding intact field locations. Threshold versus noise functions were analyzed within the framework of the linear amplifier model and the perceptual template model. Linear amplifier model analysis identified internal noise as a key factor differentiating motion processing across the tested areas, with visual training reducing internal noise in the blind field. Differences in internal noise also explained residual perceptual deficits at retrained locations. These findings were confirmed with perceptual template model analysis, which further revealed that the major residual deficits between retrained and intact field locations could be explained by differences in internal additive noise. There were no significant differences in multiplicative noise or the ability to process external noise. Together, these results highlight the critical role of altered internal noise processing in mediating training-induced visual recovery in CB fields, and may explain residual perceptual deficits relative to intact regions of the visual field. PMID:26389544
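The linear amplifier model analysis mentioned above can be summarised in a few lines: squared contrast thresholds grow with external noise variance, and the fitted intercept estimates the equivalent internal noise. The parameterisation and starting values below are illustrative assumptions, not the authors' fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

def lam_threshold(sigma_ext, sigma_int, efficiency):
    """Contrast threshold predicted by a linear amplifier model (illustrative form)."""
    return np.sqrt((sigma_ext ** 2 + sigma_int ** 2) / efficiency)

def fit_lam(sigma_ext, thresholds):
    """Estimate equivalent internal noise and sampling efficiency from threshold-vs-noise data."""
    popt, _ = curve_fit(lam_threshold, np.asarray(sigma_ext, float), np.asarray(thresholds, float),
                        p0=[0.05, 1.0], bounds=(1e-6, np.inf))
    return {"sigma_int": popt[0], "efficiency": popt[1]}

# Comparing fits for blind-field vs. intact-field locations would show whether the
# difference is carried mainly by sigma_int, as the abstract reports.
```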
Advances in Software Tools for Pre-processing and Post-processing of Overset Grid Computations
NASA Technical Reports Server (NTRS)
Chan, William M.
2004-01-01
Recent developments in three pieces of software for performing pre-processing and post-processing work on numerical computations using overset grids are presented. The first is the OVERGRID graphical interface which provides a unified environment for the visualization, manipulation, generation and diagnostics of geometry and grids. Modules are also available for automatic boundary conditions detection, flow solver input preparation, multiple component dynamics input preparation and dynamics animation, simple solution viewing for moving components, and debris trajectory analysis input preparation. The second is a grid generation script library that enables rapid creation of grid generation scripts. A sample of recent applications will be described. The third is the OVERPLOT graphical interface for displaying and analyzing history files generated by the flow solver. Data displayed include residuals, component forces and moments, number of supersonic and reverse flow points, and various dynamics parameters.
ERIC Educational Resources Information Center
Romanova, Natalia; Gor, Kira
2017-01-01
The study investigated the processing of Russian gender and number agreement by native (n = 36) and nonnative (n = 36) participants using a visual lexical decision task with priming. The design included a baseline condition that helped dissociate the underlying components of priming (facilitation and inhibition). The results showed no differences…
Impaired Visual Motor Coordination in Obese Adults.
Gaul, David; Mat, Arimin; O'Shea, Donal; Issartel, Johann
2016-01-01
Objective. To investigate whether obesity alters the sensory motor integration process and movement outcome during a visual rhythmic coordination task. Methods. 88 participants (44 obese and 44 matched control) sat on a chair equipped with a wrist pendulum oscillating in the sagittal plane. The task was to swing the pendulum in synchrony with a moving visual stimulus displayed on a screen. Results. Obese participants demonstrated significantly (p < 0.01) higher values for continuous relative phase (CRP), indicating a poorer level of coordination, increased movement variability (p < 0.05), and a larger amplitude (p < 0.05) than their healthy-weight counterparts. Conclusion. These results highlight the existence of visual sensory integration deficiencies for obese participants. The obese group has greater difficulty in synchronizing their movement with a visual stimulus. Considering that visual motor coordination is an essential component of many activities of daily living, any impairment could significantly affect quality of life.
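Continuous relative phase is the key dependent measure here; a minimal sketch of one common way to compute it (analytic phases from the Hilbert transform) follows. The centring step and phase convention are assumptions, not necessarily the exact procedure used in the study.

```python
import numpy as np
from scipy.signal import hilbert

def continuous_relative_phase(pendulum, stimulus):
    """Return CRP (degrees) between two equally sampled 1-D movement signals."""
    def phase(x):
        x = np.asarray(x, float) - np.mean(x)       # remove offset before the Hilbert transform
        return np.angle(hilbert(x))
    crp = np.degrees(phase(pendulum) - phase(stimulus))
    return (crp + 180.0) % 360.0 - 180.0            # wrap to [-180, 180)
```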
Neuronal Representation of Ultraviolet Visual Stimuli in Mouse Primary Visual Cortex
Tan, Zhongchao; Sun, Wenzhi; Chen, Tsai-Wen; Kim, Douglas; Ji, Na
2015-01-01
The mouse has become an important model for understanding the neural basis of visual perception. Although it has long been known that mouse lens transmits ultraviolet (UV) light and mouse opsins have absorption in the UV band, little is known about how UV visual information is processed in the mouse brain. Using a custom UV stimulation system and in vivo calcium imaging, we characterized the feature selectivity of layer 2/3 neurons in mouse primary visual cortex (V1). In adult mice, a comparable percentage of the neuronal population responds to UV and visible stimuli, with similar pattern selectivity and receptive field properties. In young mice, the orientation selectivity for UV stimuli increased steadily during development, but not direction selectivity. Our results suggest that, by expanding the spectral window through which the mouse can acquire visual information, UV sensitivity provides an important component for mouse vision. PMID:26219604
Microphone Array Phased Processing System (MAPPS): Version 4.0 Manual
NASA Technical Reports Server (NTRS)
Watts, Michael E.; Mosher, Marianne; Barnes, Michael; Bardina, Jorge
1999-01-01
A processing system has been developed to meet increasing demands for detailed noise measurement of individual model components. The Microphone Array Phased Processing System (MAPPS) uses graphical user interfaces to control all aspects of data processing and visualization. The system uses networked parallel computers to provide noise maps at selected frequencies in a near real-time testing environment. The system has been successfully used in the NASA Ames 7- by 10-Foot Wind Tunnel.
Libby, Lisa K; Shaeffer, Eric M; Eibach, Richard P
2009-11-01
Actions do not have inherent meaning but rather can be interpreted in many ways. The interpretation a person adopts has important effects on a range of higher order cognitive processes. One dimension on which interpretations can vary is the extent to which actions are identified abstractly--in relation to broader goals, personal characteristics, or consequences--versus concretely, in terms of component processes. The present research investigated how visual perspective (own 1st-person vs. observer's 3rd-person) in action imagery is related to action identification level. A series of experiments measured and manipulated visual perspective in mental and photographic images to test the connection with action identification level. Results revealed a bidirectional causal relationship linking 3rd-person images and abstract action identifications. These findings highlight the functional role of visual imagery and have implications for understanding how perspective is involved in action perception at the social, cognitive, and neural levels. Copyright 2009 APA
Effects of verbal and nonverbal interference on spatial and object visual working memory.
Postle, Bradley R; D'Esposito, Mark; Corkin, Suzanne
2005-03-01
We tested the hypothesis that a verbal coding mechanism is necessarily engaged by object, but not spatial, visual working memory tasks. We employed a dual-task procedure that paired n-back working memory tasks with domain-specific distractor trials inserted into each interstimulus interval of the n-back tasks. In two experiments, object n-back performance demonstrated greater sensitivity to verbal distraction, whereas spatial n-back performance demonstrated greater sensitivity to motion distraction. Visual object and spatial working memory may differ fundamentally in that the mnemonic representation of featural characteristics of objects incorporates a verbal (perhaps semantic) code, whereas the mnemonic representation of the location of objects does not. Thus, the processes supporting working memory for these two types of information may differ in more ways than those dictated by the "what/where" organization of the visual system, a fact more easily reconciled with a component process than a memory systems account of working memory function.
King, Adam C; Newell, Karl M
2015-10-01
The experiment investigated the effect of selectively augmenting faster time scales of visual feedback information on the learning and transfer of continuous isometric force tracking tasks, to test the generality of the self-organization of 1/f properties of force output. Three experimental groups tracked an irregular target pattern either under a standard fixed gain condition or with selective enhancement, in the visual feedback display, of intermediate (4-8 Hz) or high (8-12 Hz) frequency components of the force output. All groups reduced tracking error over practice, with the error lowest in the intermediate scaling condition followed by the high scaling and fixed gain conditions, respectively. Selective visual scaling induced persistent changes across the frequency spectrum, with the strongest effect in the intermediate scaling condition and positive transfer to novel feedback displays. The findings reveal an interdependence of the timescales in the learning and transfer of isometric force output frequency structures, consistent with 1/f process models of the time scales of motor output variability.
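The band-selective feedback manipulation can be pictured as a simple spectral rescaling of the displayed force trace. The sketch below, under the assumption of an FFT-based implementation with an illustrative sample rate and gain, scales only the 4-8 Hz components before reconstruction; it is not the authors' actual display code.

```python
import numpy as np

def scale_band(signal, fs, band, gain):
    """Scale the amplitude of one frequency band of a 1-D signal.

    Sketch of selective visual-feedback scaling: the displayed force trace is
    reconstructed after multiplying the chosen frequency band by `gain`.
    The gain value and band edges here are illustrative assumptions.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs < band[1])
    spectrum[mask] *= gain
    return np.fft.irfft(spectrum, n=signal.size)

fs = 100.0                                    # display/sample rate in Hz (assumed)
t = np.arange(0, 10, 1.0 / fs)
force = np.cumsum(np.random.randn(t.size))    # stand-in for an isometric force trace
displayed = scale_band(force, fs, band=(4.0, 8.0), gain=2.0)
```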
Trautmann-Lengsfeld, Sina Alexa; Herrmann, Christoph Siegfried
2014-02-01
In a previous study, we showed that virtually simulated social group pressure could influence early stages of perception after only 100 ms. In the present EEG study, we investigated the influence of social pressure on visual perception in participants with high (HA) and low (LA) levels of autonomy. Ten HA and ten LA individuals were asked to accomplish a visual discrimination task in an adapted paradigm of Solomon Asch. Results indicate that LA participants adapted to the incorrect group opinion more often than HA participants (42% vs. 30% of the trials, respectively). LA participants showed a larger posterior P1 component contralateral to targets presented in the right visual field when conforming to the correct compared to conforming to the incorrect group decision. In conclusion, our ERP data suggest that the group context can have early effects on our perception rather than on conscious decision processes in LA, but not HA participants. Copyright © 2013 Society for Psychophysiological Research.
Mavritsaki, Eirini; Heinke, Dietmar; Humphreys, Glyn W; Deco, Gustavo
2006-01-01
In the real world, visual information is selected over time as well as space, when we prioritise new stimuli for attention. Watson and Humphreys [Watson, D., Humphreys, G.W., 1997. Visual marking: prioritizing selection for new objects by top-down attentional inhibition of old objects. Psychological Review 104, 90-122] presented evidence that new information in search tasks is prioritised by (amongst other processes) active ignoring of old items - a process they termed visual marking. In this paper we present, for the first time, an explicit computational model of visual marking using biologically plausible activation functions. The "spiking search over time and space" model (sSoTS) incorporates different synaptic components (NMDA, AMPA, GABA) and a frequency adaptation mechanism based on [Ca(2+)] sensitive K(+) current. This frequency adaptation current can act as a mechanism that suppresses the previously attended items. We show that, when coupled with a process of active inhibition applied to old items, frequency adaptation leads to old items being de-prioritised (and new items prioritised) across time in search. Furthermore, the time course of these processes mimics the time course of the preview effect in human search. The results indicate that the sSoTS model can provide a biologically plausible account of human search over time as well as space.
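The frequency-adaptation mechanism at the core of sSoTS can be illustrated with a much-simplified integrate-and-fire unit in which a calcium-like variable accumulates with each spike and drives an after-hyperpolarizing current, so that firing slows over time. All parameter values below are illustrative and are not those of the published model.

```python
import numpy as np

# Sketch of spike-frequency adaptation of the kind sSoTS uses to de-prioritise
# old items: a leaky integrate-and-fire unit with a calcium-like variable that
# grows with each spike and drives an after-hyperpolarising K+ current.
dt, T = 0.1, 500.0                      # time step and duration in ms
tau_m, tau_ca = 20.0, 150.0             # membrane and calcium time constants
v_th, v_reset = 1.0, 0.0
g_ahp, I_ext = 0.3, 1.4                 # adaptation strength, constant drive

v, ca, spikes = 0.0, 0.0, []
for step in range(int(T / dt)):
    # calcium-gated K+ current opposes the external drive
    dv = (-v + I_ext - g_ahp * ca) / tau_m
    v += dt * dv
    ca += dt * (-ca / tau_ca)
    if v >= v_th:                       # spike: reset and accumulate calcium
        spikes.append(step * dt)
        v = v_reset
        ca += 1.0

# interspike intervals lengthen as ca builds up -> adaptation suppresses old items
print(np.diff(spikes)[:5], np.diff(spikes)[-5:])
```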
Chromatic information and feature detection in fast visual analysis
Del Viva, Maria M.; Punzi, Giovanni; Shevell, Steven K.; ...
2016-08-01
The visual system is able to recognize a scene based on a sketch made of very simple features. This ability is likely crucial for survival, when fast image recognition is necessary, and it is believed that a primal sketch is extracted very early in the visual processing. Such highly simplified representations can be sufficient for accurate object discrimination, but an open question is the role played by color in this process. Rich color information is available in natural scenes, yet artist's sketches are usually monochromatic; and, black-and-white movies provide compelling representations of real world scenes. Also, the contrast sensitivity of color is low at fine spatial scales. We approach the question from the perspective of optimal information processing by a system endowed with limited computational resources. We show that when such limitations are taken into account, the intrinsic statistical properties of natural scenes imply that the most effective strategy is to ignore fine-scale color features and devote most of the bandwidth to gray-scale information. We find confirmation of these information-based predictions from psychophysics measurements of fast-viewing discrimination of natural scenes. As a result, we conclude that the lack of colored features in our visual representation, and our overall low sensitivity to high-frequency color components, are a consequence of an adaptation process, optimizing the size and power consumption of our brain for the visual world we live in.
Machine vision systems using machine learning for industrial product inspection
NASA Astrophysics Data System (ADS)
Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony
2002-02-01
Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages, Learning Inspection Features (LIF), and On-Line Inspection (OLI). The LIF is designed to learn visual inspection features from design data and/or from inspection products. During the OLI stage, the inspection system uses the knowledge learnt by the LIF component to inspect the visual features of products. In this paper we will present two machine vision inspection systems developed under the SMV architecture for two different types of products, Printed Circuit Board (PCB) and Vacuum Fluorescent Display (VFD) boards. In the VFD board inspection system, the LIF component learns inspection features from a VFD board and its displaying patterns. In the PCB board inspection system, the LIF learns the inspection features from the CAD file of a PCB board. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies and the PCB inspection system is in the process of being deployed in a manufacturing plant.
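The two-stage LIF/OLI split can be sketched generically: a learning step summarizes features from known-good products, and an on-line step flags products whose features deviate from that reference. The feature statistics and threshold below are assumptions for illustration, not the SMV implementation.

```python
# Minimal sketch of a two-stage learn/inspect pipeline in the spirit of SMV.
import numpy as np

class SmvInspector:
    def __init__(self, threshold=3.0):
        self.threshold = threshold
        self.mean = None
        self.std = None

    def learn(self, reference_features):
        """LIF-style stage: summarise features extracted from known-good products."""
        ref = np.asarray(reference_features, dtype=float)
        self.mean = ref.mean(axis=0)
        self.std = ref.std(axis=0) + 1e-9

    def inspect(self, features):
        """OLI-style stage: pass/fail by z-score distance from the learned reference."""
        z = np.abs((np.asarray(features, dtype=float) - self.mean) / self.std)
        return bool(np.all(z < self.threshold))

inspector = SmvInspector()
inspector.learn(np.random.normal(10.0, 0.5, size=(50, 4)))   # stand-in feature vectors
print(inspector.inspect([10.1, 9.8, 10.3, 9.9]))              # expected: True
```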
Orme, Elizabeth; Brown, Louise A.; Riby, Leigh M.
2017-01-01
In this study, we examined electrophysiological indices of episodic remembering whilst participants recalled novel shapes, with and without semantic content, within a visual working memory paradigm. The components of interest were the parietal episodic (PE; 400–800 ms) and late posterior negativity (LPN; 500–900 ms), as these have previously been identified as reliable markers of recollection and post-retrieval monitoring, respectively. Fifteen young adults completed a visual matrix patterns task, assessing memory for low and high semantic visual representations. Matrices with either low semantic or high semantic content (containing familiar visual forms) were briefly presented to participants for study (1500 ms), followed by a retention interval (6000 ms) and finally a same/different recognition phase. The event-related potentials of interest were tracked from the onset of the recognition test stimuli. Analyses revealed equivalent amplitude for the earlier PE effect for the processing of both low and high semantic stimulus types. However, the LPN was more negative-going for the processing of the low semantic stimuli. These data are discussed in terms of relatively ‘pure’ and complete retrieval of high semantic items, where support can readily be recruited from semantic memory. However, for the low semantic items additional executive resources, as indexed by the LPN, are recruited when memory monitoring and uncertainty exist in order to recall previously studied items more effectively. PMID:28725203
Impaired early visual response modulations to spatial information in chronic schizophrenia
Knebel, Jean-François; Javitt, Daniel C.; Murray, Micah M.
2011-01-01
Early visual processing stages have been demonstrated to be impaired in schizophrenia patients and their first-degree relatives. The amplitude and topography of the P1 component of the visual evoked potential (VEP) are both affected; the latter of which indicates alterations in active brain networks between populations. At least two issues remain unresolved. First, the specificity of this deficit (and suitability as an endophenotype) has yet to be established, with evidence for impaired P1 responses in other clinical populations. Second, it remains unknown whether schizophrenia patients exhibit intact functional modulation of the P1 VEP component; an aspect that may assist in distinguishing effects specific to schizophrenia. We applied electrical neuroimaging analyses to VEPs from chronic schizophrenia patients and healthy controls in response to variation in the parafoveal spatial extent of stimuli. Healthy controls demonstrated robust modulation of the VEP strength and topography as a function of the spatial extent of stimuli during the P1 component. By contrast, no such modulations were evident at early latencies in the responses from patients with schizophrenia. Source estimations localized these deficits to the left precuneus and medial inferior parietal cortex. These findings provide insights on potential underlying low-level impairments in schizophrenia. PMID:21764264
Graphical Language for Data Processing
NASA Technical Reports Server (NTRS)
Alphonso, Keith
2011-01-01
A graphical language for processing data allows processing elements to be connected with virtual wires that represent data flows between processing modules. The processing of complex data, such as lidar data, requires many different algorithms to be applied. The purpose of this innovation is to automate the processing of complex data, such as LIDAR, without the need for complex scripting and programming languages. The system consists of a set of user-interface components that allow the user to drag and drop various algorithmic and processing components onto a process graph. By working graphically, the user can completely visualize the process flow and create complex diagrams. This innovation supports the nesting of graphs, such that a graph can be included in another graph as a single step for processing. In addition to the user interface components, the system includes a set of .NET classes that represent the graph internally. These classes provide the internal system representation of the graphical user interface. The system includes a graph execution component that reads the internal representation of the graph (as described above) and executes that graph. The execution of the graph follows the interpreted model of execution in that each node is traversed and executed from the original internal representation. In addition, there are components that allow external code elements, such as algorithms, to be easily integrated into the system, thus making the system infinitely expandable.
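The interpreted graph-execution idea described above can be sketched with a minimal node-and-wire interpreter: nodes wrap processing functions, wires name upstream results, and the executor runs each node once its inputs are available. This is a generic illustration in Python rather than the .NET classes of the actual system.

```python
# Minimal sketch of interpreted execution of a processing graph.
from collections import deque

class Node:
    def __init__(self, name, func, inputs=()):
        self.name, self.func, self.inputs = name, func, list(inputs)

def execute(nodes):
    """Traverse the graph and run each node once its inputs have been produced."""
    results, pending = {}, deque(nodes)
    while pending:
        node = pending.popleft()
        if all(dep in results for dep in node.inputs):
            results[node.name] = node.func(*(results[d] for d in node.inputs))
        else:
            pending.append(node)            # wait until upstream nodes have run
    return results

graph = [
    Node("load", lambda: list(range(10))),
    Node("filter", lambda xs: [x for x in xs if x % 2 == 0], inputs=["load"]),
    Node("sum", lambda xs: sum(xs), inputs=["filter"]),
]
print(execute(graph)["sum"])                # 0 + 2 + 4 + 6 + 8 = 20
```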
Visual enhancement of images of natural resources: Applications in geology
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Neto, G.; Araujo, E. O.; Mascarenhas, N. D. A.; Desouza, R. C. M.
1980-01-01
The principal components technique for use in multispectral scanner LANDSAT data processing results in optimum dimensionality reduction. A powerful tool for MSS image enhancement, the method provides a maximum impression of terrain ruggedness; this fact makes the technique well suited for geological analysis.
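A minimal sketch of the enhancement described above: the multispectral bands are decorrelated with principal components analysis and the leading component, which carries most of the variance, is kept as the enhanced image. The band count and image size are placeholders.

```python
# Sketch of principal-components enhancement of multispectral imagery.
import numpy as np

bands = np.random.rand(4, 512, 512)                  # stand-in for 4 MSS bands
pixels = bands.reshape(4, -1).T                      # (n_pixels, n_bands)
pixels -= pixels.mean(axis=0)

cov = np.cov(pixels, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)               # eigenvalues in ascending order
pc1 = pixels @ eigvecs[:, -1]                        # project onto the first PC
enhanced = pc1.reshape(512, 512)                     # single enhanced image
print(eigvals[::-1] / eigvals.sum())                 # variance explained per PC
```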
Enhancing calibrated peer review for improved engineering communication education.
DOT National Transportation Integrated Search
2008-01-01
The objectives of this study are to extend Calibrated Peer Review (CPR) to allow for the input and review of visual and verbal components to the process, develop assignments in a set of core engineering courses that use these facilities, assess the i...
NASA Technical Reports Server (NTRS)
Pavel, M.
1993-01-01
The topics covered include the following: a system overview of the basic components of a system designed to improve the ability of a pilot to fly through low-visibility conditions such as fog; the role of visual sciences; fusion issues; sensor characterization; sources of information; image processing; and image fusion.
Interactions Dominate the Dynamics of Visual Cognition
ERIC Educational Resources Information Center
Stephen, Damian G.; Mirman, Daniel
2010-01-01
Many cognitive theories have described behavior as the summation of independent contributions from separate components. Contrasting views have emphasized the importance of multiplicative interactions and emergent structure. We describe a statistical approach to distinguishing additive and multiplicative processes and apply it to the dynamics of…
Schwaibold, M; Schöller, B; Penzel, T; Bolz, A
2001-05-01
We describe a novel approach to the problem of automated sleep stage recognition. The ARTISANA algorithm mimics the behaviour of a human expert visually scoring sleep stages (Rechtschaffen and Kales classification). It comprises a number of interacting components that imitate the stepwise approach of the human expert, as well as artificial intelligence components. On the basis of parameters extracted at 1-s intervals from the signal curves, artificial neural networks recognize the incidence of typical patterns, e.g. delta activity or K complexes. This is followed by a rule interpretation stage that identifies the sleep stage with the aid of a neuro-fuzzy system while taking account of the context. Validation studies based on the records of 8 patients with obstructive sleep apnoea have confirmed the potential of this approach. Further features of the system include the transparency of the decision-taking process, and the flexibility of the option for expanding the system to cover new patterns and criteria.
Influence of hypoglycaemia, with or without caffeine ingestion, on visual sensation and performance.
Owen, G; Watson, J; McGown, A; Sharma, S; Deary, I; Kerr, D; Barrett, G
2001-06-01
Full-field visual evoked potentials and visual information processing were measured in 16 normal, healthy subjects during a hyperinsulinaemic clamp. A randomized cross-over design was used across three conditions: hypoglycaemia and caffeine; hypoglycaemia and placebo; and euglycaemia and caffeine. The latency of the P100 component of the pattern-reversal visual evoked potential increased significantly from rest to hypoglycaemia, but no effect of caffeine was found. Subjects were subsequently divided into two groups by a median split on the increase in P100 latency in the placebo condition (Group 1, +0.5 ms; Group 2, +5.6 ms). In the absence of caffeine, an inverse correlation between the increase in P100 latency from rest and a deterioration in visual movement detection was found for Group 2, but not for Group 1. Caffeine ingestion resulted in a further increase in P100 latency, from rest to hypoglycaemia, for subjects in Group 2. Hypoglycaemia in the absence of caffeine produces changes in visual sensation from rest to hypoglycaemia. In those subjects most sensitive to the effects of hypoglycaemia (Group 2), the increase in P100 latency was associated with poorer performance in tests of visual information processing. Caffeine ingestion produced further increases in P100 latency in these subjects.
Töllner, Thomas; Müller, Hermann J; Zehetleitner, Michael
2012-07-01
Visual search for feature singletons is slowed when a task-irrelevant, but more salient distracter singleton is concurrently presented. While there is a consensus that this distracter interference effect can be influenced by internal system settings, it remains controversial at what stage of processing this influence starts to affect visual coding. Advocates of the "stimulus-driven" view maintain that the initial sweep of visual processing is entirely driven by physical stimulus attributes and that top-down settings can bias visual processing only after selection of the most salient item. By contrast, opponents argue that top-down expectancies can alter the initial selection priority, so that focal attention is "not automatically" shifted to the location exhibiting the highest feature contrast. To precisely trace the allocation of focal attention, we analyzed the Posterior-Contralateral-Negativity (PCN) in a task in which the likelihood (expectancy) with which a distracter occurred was systematically varied. Our results show that both high (vs. low) distracter expectancy and experiencing a distracter on the previous trial speed up the timing of the target-elicited PCN. Importantly, there was no distracter-elicited PCN, indicating that participants did not shift attention to the distracter before selecting the target. This pattern unambiguously demonstrates that preattentive vision is top-down modifiable.
Local spatio-temporal analysis in vision systems
NASA Astrophysics Data System (ADS)
Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David
1994-07-01
The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations, (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion in the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.
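The local frequency coding mechanisms mentioned above are commonly modeled with a bank of Gabor filters tuned to different scales and orientations. The sketch below builds such a bank; the specific wavelengths and orientation sampling are illustrative assumptions, not the project's model parameters.

```python
# Sketch of a local-frequency (Gabor) filter bank for low-level visual coding.
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))   # Gaussian window
    carrier = np.cos(2.0 * np.pi * xr / wavelength)          # oriented sinusoid
    return envelope * carrier

bank = [gabor_kernel(31, wavelength=w, theta=t, sigma=w / 2.0)
        for w in (4, 8, 16)
        for t in np.linspace(0, np.pi, 4, endpoint=False)]
print(len(bank), bank[0].shape)      # 12 filters, each 31x31
```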
Stekelenburg, Jeroen J; Keetels, Mirjam
2016-05-01
The Colavita effect refers to the phenomenon that when confronted with an audiovisual stimulus, observers report more often to have perceived the visual than the auditory component. The Colavita effect depends on low-level stimulus factors such as spatial and temporal proximity between the unimodal signals. Here, we examined whether the Colavita effect is modulated by synesthetic congruency between visual size and auditory pitch. If the Colavita effect depends on synesthetic congruency, we expect a larger Colavita effect for synesthetically congruent size/pitch (large visual stimulus/low-pitched tone; small visual stimulus/high-pitched tone) than synesthetically incongruent (large visual stimulus/high-pitched tone; small visual stimulus/low-pitched tone) combinations. Participants had to identify stimulus type (visual, auditory or audiovisual). The study replicated the Colavita effect because participants reported more often the visual than auditory component of the audiovisual stimuli. Synesthetic congruency had, however, no effect on the magnitude of the Colavita effect. EEG recordings to congruent and incongruent audiovisual pairings showed a late frontal congruency effect at 400-550 ms and an occipitoparietal effect at 690-800 ms with neural sources in the anterior cingulate and premotor cortex for the 400- to 550-ms window and premotor cortex, inferior parietal lobule and the posterior middle temporal gyrus for the 690- to 800-ms window. The electrophysiological data show that synesthetic congruency was probably detected in a processing stage subsequent to the Colavita effect. We conclude that, in a modality detection task, the Colavita effect can be modulated by low-level structural factors but not by higher-order associations between auditory and visual inputs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richardson, Gregory D; Goodall, John R; Steed, Chad A
In developing visualizations for different data sets, the end solution often becomes dependent on the data being visualized. This causes engineers to have to re-develop many common components multiple times. The vis-react components library was designed to enable the creation of visualizations that are independent of the underlying data. This library utilizes the React.js pattern of instantiating components that may be re-used. The library exposes an example application that allows other developers to understand how to use the components in the library.
A new method for text detection and recognition in indoor scene for assisting blind people
NASA Astrophysics Data System (ADS)
Jabnoun, Hanen; Benzarti, Faouzi; Amiri, Hamid
2017-03-01
Developing assistive systems for handicapped persons has become a challenging task in research projects. Recently, a variety of tools have been designed to help visually impaired or blind people as visual substitution systems. The majority of these tools are based on the conversion of input information into auditory or tactile sensory information. Furthermore, object recognition and text retrieval are exploited in visual substitution systems. Text detection and recognition provide a description of the surrounding environment, so that the blind person can readily recognize the scene. In this work, we aim to introduce a method for detecting and recognizing text in indoor scenes. The process consists of detecting regions of interest that should contain text, using connected components. Then, text detection is performed by employing image correlation. This component of an assistive system for blind people should be simple, so that users are able to obtain the most informative feedback within the shortest time.
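A hedged sketch of the two-step pipeline described above: connected-component analysis isolates candidate text regions, and normalized cross-correlation against character templates is then used for recognition. The OpenCV calls are standard, but the size filter, thresholds, and template set are assumptions.

```python
# Sketch of connected-component text detection followed by template correlation.
import cv2
import numpy as np

def detect_text_regions(gray):
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    boxes = []
    for i in range(1, n):                          # skip the background label 0
        x, y, w, h, area = stats[i]
        if 50 < area < 5000:                       # keep text-sized components (assumed bounds)
            boxes.append((x, y, w, h))
    return boxes

def match_template(region, templates):
    """Return the name of the character template with the highest correlation."""
    scores = {name: cv2.matchTemplate(region, tmpl, cv2.TM_CCOEFF_NORMED).max()
              for name, tmpl in templates.items()}
    return max(scores, key=scores.get)
```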
Distinctive Correspondence Between Separable Visual Attention Functions and Intrinsic Brain Networks
Ruiz-Rizzo, Adriana L.; Neitzel, Julia; Müller, Hermann J.; Sorg, Christian; Finke, Kathrin
2018-01-01
Separable visual attention functions are assumed to rely on distinct but interacting neural mechanisms. Bundesen's “theory of visual attention” (TVA) allows the mathematical estimation of independent parameters that characterize individuals' visual attentional capacity (i.e., visual processing speed and visual short-term memory storage capacity) and selectivity functions (i.e., top-down control and spatial laterality). However, it is unclear whether these parameters distinctively map onto different brain networks obtained from intrinsic functional connectivity, which organizes slowly fluctuating ongoing brain activity. In our study, 31 demographically homogeneous healthy young participants performed whole- and partial-report tasks and underwent resting-state functional magnetic resonance imaging (rs-fMRI). Report accuracy was modeled using TVA to estimate, individually, the four TVA parameters. Networks encompassing cortical areas relevant for visual attention were derived from independent component analysis of rs-fMRI data: visual, executive control, right and left frontoparietal, and ventral and dorsal attention networks. Two TVA parameters were mapped on particular functional networks. First, participants with higher (vs. lower) visual processing speed showed lower functional connectivity within the ventral attention network. Second, participants with more (vs. less) efficient top-down control showed higher functional connectivity within the dorsal attention network and lower functional connectivity within the visual network. Additionally, higher performance was associated with higher functional connectivity between networks: specifically, between the ventral attention and right frontoparietal networks for visual processing speed, and between the visual and executive control networks for top-down control. The higher inter-network functional connectivity was related to lower intra-network connectivity. These results demonstrate that separable visual attention parameters that are assumed to constitute relatively stable traits correspond distinctly to the functional connectivity both within and between particular functional networks. This implies that individual differences in basic attention functions are represented by differences in the coherence of slowly fluctuating brain activity. PMID:29662444
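The analysis logic, mapping attention parameters onto connectivity within ICA-derived components, can be sketched as follows with simulated placeholder data; the actual study used group-level spatial ICA of rs-fMRI, so this is only a schematic of the correlation step, not a reproduction of the pipeline.

```python
# Schematic sketch: ICA decomposition per subject, a simple connectivity summary,
# and correlation with a TVA parameter across subjects. All data are placeholders.
import numpy as np
from sklearn.decomposition import FastICA
from scipy.stats import pearsonr

n_subjects, n_timepoints, n_regions = 31, 200, 50
processing_speed = np.random.rand(n_subjects) * 40 + 20   # TVA parameter C (placeholder)

connectivity = []
for _ in range(n_subjects):
    ts = np.random.randn(n_timepoints, n_regions)          # placeholder rs-fMRI signals
    sources = FastICA(n_components=6, random_state=0).fit_transform(ts)
    # crude "within-network" summary: mean absolute correlation among component time courses
    corr = np.corrcoef(sources.T)
    connectivity.append(np.abs(corr[np.triu_indices(6, k=1)]).mean())

r, p = pearsonr(processing_speed, connectivity)
print(f"r = {r:.2f}, p = {p:.3f}")
```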
Sanada, Motoyuki; Ikeda, Koki; Kimura, Kenta; Hasegawa, Toshikazu
2013-09-01
Motivation is well known to enhance working memory (WM) capacity, but the mechanism underlying this effect remains unclear. The WM process can be divided into encoding, maintenance, and retrieval, and in a change detection visual WM paradigm, the encoding and retrieval processes can be subdivided into perceptual and central processing. To clarify which of these segments are most influenced by motivation, we measured ERPs in a change detection task with differential monetary rewards. The results showed that the enhancement of WM capacity under high motivation was accompanied by modulations of late central components but not those reflecting attentional control on perceptual inputs across all stages of WM. We conclude that the "state-dependent" shift of motivation impacted the central, rather than the perceptual functions in order to achieve better behavioral performances. Copyright © 2013 Society for Psychophysiological Research.
Binding in visual working memory: the role of the episodic buffer.
Baddeley, Alan D; Allen, Richard J; Hitch, Graham J
2011-05-01
The episodic buffer component of working memory is assumed to play a central role in the binding of features into objects, a process that was initially assumed to depend upon executive resources. Here, we review a program of work in which we specifically tested this assumption by studying the effects of a range of attentionally demanding concurrent tasks on the capacity to encode and retain both individual features and bound objects. We found no differential effect of concurrent load, even when the process of binding was made more demanding by separating the shape and color features spatially, temporally or across visual and auditory modalities. Bound features were however more readily disrupted by subsequent stimuli, a process we studied using a suffix paradigm. This suggested a need to assume a feature-based attentional filter followed by an object based storage process. Our results are interpreted within a modified version of the multicomponent working memory model. We also discuss work examining the role of the hippocampus in visual feature binding. Copyright © 2011 Elsevier Ltd. All rights reserved.
Aging effects on selective attention-related electroencephalographic patterns during face encoding.
Deiber, M-P; Rodriguez, C; Jaques, D; Missonnier, P; Emch, J; Millet, P; Gold, G; Giannakopoulos, P; Ibañez, V
2010-11-24
Previous electrophysiological studies revealed that human faces elicit an early visual event-related potential (ERP) within the occipito-temporal cortex, the N170 component. Although face perception has been proposed to rely on automatic processing, the impact of selective attention on N170 remains controversial both in young and elderly individuals. Using early visual ERP and alpha power analysis, we assessed the influence of aging on selective attention to faces during delayed-recognition tasks for face and letter stimuli, examining 36 elderly and 20 young adults with preserved cognition. Face recognition performance worsened with age. Aging induced a latency delay of the N1 component for faces and letters, as well as of the face N170 component. Contrasting with letters, ignored faces elicited larger N1 and N170 components than attended faces in both age groups. This counterintuitive attention effect on face processing persisted when scenes replaced letters. In contrast with young, elderly subjects failed to suppress irrelevant letters when attending faces. Whereas attended stimuli induced a parietal alpha band desynchronization within 300-1000 ms post-stimulus with bilateral-to-right distribution for faces and left lateralization for letters, ignored and passively viewed stimuli elicited a central alpha synchronization larger on the right hemisphere. Aging delayed the latency of this alpha synchronization for both face and letter stimuli, and reduced its amplitude for ignored letters. These results suggest that due to their social relevance, human faces may cause paradoxical attention effects on early visual ERP components, but they still undergo classical top-down control as a function of endogenous selective attention. Aging does not affect the face bottom-up alerting mechanism but reduces the top-down suppression of distracting letters, possibly impinging upon face recognition, and more generally delays the top-down suppression of task-irrelevant information. Copyright © 2010 IBRO. Published by Elsevier Ltd. All rights reserved.
Deformable known component model-based reconstruction for coronary CT angiography
NASA Astrophysics Data System (ADS)
Zhang, X.; Tilley, S.; Xu, S.; Mathews, A.; McVeigh, E. R.; Stayman, J. W.
2017-03-01
Purpose: Atherosclerosis detection remains challenging in coronary CT angiography for patients with cardiac implants. Pacing electrodes of a pacemaker or lead components of a defibrillator can create substantial blooming and streak artifacts in the heart region, severely hindering the visualization of a plaque of interest. We present a novel reconstruction method that incorporates a deformable model for metal leads to eliminate metal artifacts and improve anatomy visualization even near the boundary of the component. Methods: The proposed reconstruction method, referred to as STF-dKCR, includes a novel parameterization of the component that integrates deformation, a 3D-2D preregistration process that estimates component shape and position, and a polyenergetic forward model for x-ray propagation through the component where the spectral properties are jointly estimated. The methodology was tested on physical data of a cardiac phantom acquired on a CBCT testbench. The phantom included a simulated vessel, a metal wire emulating a pacing lead, and a small Teflon sphere attached to the vessel wall, mimicking a calcified plaque. The proposed method was also compared to the traditional FBP reconstruction and an interpolation-based metal correction method (FBP-MAR). Results: Metal artifacts present in standard FBP reconstruction were significantly reduced in both FBP-MAR and STF-dKCR, yet only the STF-dKCR approach significantly improved the visibility of the small Teflon target (within 2 mm of the metal wire). The attenuation of the Teflon bead improved to 0.0481 mm^-1 with STF-dKCR, from 0.0166 mm^-1 with FBP and from 0.0301 mm^-1 with FBP-MAR, much closer to the expected 0.0414 mm^-1. Conclusion: The proposed method has the potential to improve plaque visualization in coronary CT angiography in the presence of wire-shaped metal components.
The guidance of visual search by shape features and shape configurations.
McCants, Cody W; Berggren, Nick; Eimer, Martin
2018-03-01
Representations of target features (attentional templates) guide attentional object selection during visual search. In many search tasks, targets objects are defined not by a single feature but by the spatial configuration of their component shapes. We used electrophysiological markers of attentional selection processes to determine whether the guidance of shape configuration search is entirely part-based or sensitive to the spatial relationship between shape features. Participants searched for targets defined by the spatial arrangement of two shape components (e.g., hourglass above circle). N2pc components were triggered not only by targets but also by partially matching distractors with one target shape (e.g., hourglass above hexagon) and by distractors that contained both target shapes in the reverse arrangement (e.g., circle above hourglass), in line with part-based attentional control. Target N2pc components were delayed when a reverse distractor was present on the opposite side of the same display, suggesting that early shape-specific attentional guidance processes could not distinguish between targets and reverse distractors. The control of attention then became sensitive to spatial configuration, which resulted in a stronger attentional bias for target objects relative to reverse and partially matching distractors. Results demonstrate that search for target objects defined by the spatial arrangement of their component shapes is initially controlled in a feature-based fashion but can later be guided by templates for spatial configurations. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Remembering the dynamic changes in pain intensity and unpleasantness: a psychophysical study.
Khoshnejad, Mina; Fortin, Marie C; Rohani, Farzan; Duncan, Gary H; Rainville, Pierre
2014-03-01
This study investigated the short-term memory of dynamic changes in acute pain using psychophysical methods. Pain intensity or unpleasantness induced by painful contact-heat stimuli of 8, 9, or 10 s was rated continuously during the stimulus or after a 14-s delay using an electronic visual analog scale in 10 healthy volunteers. Because the continuous visual analog scale time courses contained large amounts of redundant information, a principal component analysis was applied to characterize the main features inherent to both the concurrent rating and retrospective evaluations. Three components explained about 90% of the total variance across all trials and subjects, with the first component reflecting the global perceptual profile, and the second and third components explaining finer perceptual aspects (eg, changes in slope at onset and offset and shifts in peak latency). We postulate that these 3 principal components may provide some information about the structure of the mental representations of what one perceives, stores, and remembers during the course of a few seconds. Analysis performed on the components confirmed significant memory distortions and revealed that the discriminative information about pain dimensions in concurrent ratings was partly or completely lost in retrospective ratings. Importantly, our results highlight individual differences affecting these memory processes. These results provide further evidence of the important transformations underlying the processing of pain in explicit memory and raise fundamental questions about the conversion of dynamic nociceptive signals into a mental representation of pain in perception and memory. Copyright © 2013 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
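The principal component decomposition of the continuous rating curves can be sketched with a singular value decomposition over trials, where the leading components capture the global profile and finer onset/offset shape. The simulated rating curves and sampling rate below are placeholders.

```python
# Sketch of PCA over continuous VAS rating time courses (one row per trial).
import numpy as np

n_trials, n_samples = 60, 140                       # e.g., 14 s sampled at 10 Hz (assumed)
t = np.linspace(0, 14, n_samples)
ratings = np.array([np.exp(-(t - 7 - np.random.randn()) ** 2 / 8) * (50 + 20 * np.random.rand())
                    for _ in range(n_trials)])      # placeholder rating curves

centered = ratings - ratings.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(explained[:3].sum())                          # variance captured by the first 3 PCs
scores = centered @ vt[:3].T                        # per-trial component scores
```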
The Auditory-Visual Speech Benefit on Working Memory in Older Adults with Hearing Impairment
Frtusova, Jana B.; Phillips, Natalie A.
2016-01-01
This study examined the effect of auditory-visual (AV) speech stimuli on working memory in older adults with poorer-hearing (PH) in comparison to age- and education-matched older adults with better hearing (BH). Participants completed a working memory n-back task (0- to 2-back) in which sequences of digits were presented in visual-only (i.e., speech-reading), auditory-only (A-only), and AV conditions. Auditory event-related potentials (ERP) were collected to assess the relationship between perceptual and working memory processing. The behavioral results showed that both groups were faster in the AV condition in comparison to the unisensory conditions. The ERP data showed perceptual facilitation in the AV condition, in the form of reduced amplitudes and latencies of the auditory N1 and/or P1 components, in the PH group. Furthermore, a working memory ERP component, the P3, peaked earlier for both groups in the AV condition compared to the A-only condition. In general, the PH group showed a more robust AV benefit; however, the BH group showed a dose-response relationship between perceptual facilitation and working memory improvement, especially for facilitation of processing speed. Two measures, reaction time and P3 amplitude, suggested that the presence of visual speech cues may have helped the PH group to counteract the demanding auditory processing, to the level that no group differences were evident during the AV modality despite lower performance during the A-only condition. Overall, this study provides support for the theory of an integrated perceptual-cognitive system. The practical significance of these findings is also discussed. PMID:27148106
NASA Astrophysics Data System (ADS)
Bektasli, Behzat
Graphs have a broad use in science classrooms, especially in physics. In physics, kinematics is probably the topic for which graphs are most widely used. The participants in this study were from two different grade-12 physics classrooms, advanced placement and calculus-based physics. The main purpose of this study was to search for the relationships between student spatial ability, logical thinking, mathematical achievement, and kinematics graphs interpretation skills. The Purdue Spatial Visualization Test, the Middle Grades Integrated Process Skills Test (MIPT), and the Test of Understanding Graphs in Kinematics (TUG-K) were used for quantitative data collection. Classroom observations were made to acquire ideas about classroom environment and instructional techniques. Factor analysis, simple linear correlation, multiple linear regression, and descriptive statistics were used to analyze the quantitative data. Each instrument has two principal components. The selection and calculation of the slope and of the area were the two principal components of TUG-K. MIPT was composed of a component based upon processing text and a second component based upon processing symbolic information. The Purdue Spatial Visualization Test was composed of a component based upon one-step processing and a second component based upon two-step processing of information. Student ability to determine the slope in a kinematics graph was significantly correlated with spatial ability, logical thinking, and mathematics aptitude and achievement. However, student ability to determine the area in a kinematics graph was only significantly correlated with student pre-calculus semester 2 grades. Male students performed significantly better than female students on the slope items of TUG-K. Also, male students performed significantly better than female students on the PSAT mathematics assessment and spatial ability. This study found that students have different levels of spatial ability, logical thinking, and mathematics aptitude and achievement levels. These different levels were related to student learning of kinematics and they need to be considered when kinematics is being taught. It might be easier for students to understand the kinematics graphs if curriculum developers include more activities related to spatial ability and logical thinking.
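The multiple linear regression reported above, predicting kinematics graph interpretation from spatial ability, logical thinking, and mathematics achievement, can be sketched with ordinary least squares on simulated placeholder data; the predictors, scales, and coefficients below are illustrative only.

```python
# Sketch of an ordinary-least-squares regression predicting a TUG-K slope score.
import numpy as np

n = 60
X = np.column_stack([np.ones(n),                      # intercept
                     np.random.rand(n) * 30,          # spatial ability (placeholder scale)
                     np.random.rand(n) * 25,          # logical thinking (placeholder scale)
                     np.random.rand(n) * 100])        # mathematics achievement (placeholder)
beta_true = np.array([2.0, 0.1, 0.15, 0.02])
y = X @ beta_true + np.random.randn(n)                # simulated TUG-K slope score

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)      # OLS fit
y_hat = X @ beta_hat
r_squared = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(beta_hat, r_squared)
```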
Neurophysiology and Neuroanatomy of Smooth Pursuit in Humans
ERIC Educational Resources Information Center
Lencer, Rebekka; Trillenberg, Peter
2008-01-01
Smooth pursuit eye movements enable us to focus our eyes on moving objects by utilizing well-established mechanisms of visual motion processing, sensorimotor transformation and cognition. Novel smooth pursuit tasks and quantitative measurement techniques can help unravel the different smooth pursuit components and complex neural systems involved…
Report on Component 2 - Designing New Methods for Visualizing Text in Spatial Contexts
2012-10-31
Savelyev, Alexander; Pezanowski, Scott; Robinson, Anthony C.; and others (GeoVISTA Center, Penn State University). Contract W9132V-11-P-0010.
Szécsi, László; Kacsó, Ágota; Zeck, Günther; Hantz, Péter
2017-01-01
Light stimulation with precise and complex spatial and temporal modulation is demanded by a series of research fields like visual neuroscience, optogenetics, ophthalmology, and visual psychophysics. We developed a user-friendly and flexible stimulus generating framework (GEARS GPU-based Eye And Retina Stimulation Software), which offers access to GPU computing power, and allows interactive modification of stimulus parameters during experiments. Furthermore, it has built-in support for driving external equipment, as well as for synchronization tasks, via USB ports. The use of GEARS does not require elaborate programming skills. The necessary scripting is visually aided by an intuitive interface, while the details of the underlying software and hardware components remain hidden. Internally, the software is a C++/Python hybrid using OpenGL graphics. Computations are performed on the GPU, and are defined in the GLSL shading language. However, all GPU settings, including the GPU shader programs, are automatically generated by GEARS. This is configured through a method encountered in game programming, which allows high flexibility: stimuli are straightforwardly composed using a broad library of basic components. Stimulus rendering is implemented solely in C++, therefore intermediary libraries for interfacing could be omitted. This enables the program to perform computationally demanding tasks like en-masse random number generation or real-time image processing by local and global operations.
Predicting perceptual learning from higher-order cortical processing.
Wang, Fang; Huang, Jing; Lv, Yaping; Ma, Xiaoli; Yang, Bin; Wang, Encong; Du, Boqi; Li, Wu; Song, Yan
2016-01-01
Visual perceptual learning has been shown to be highly specific to the retinotopic location and attributes of the trained stimulus. Recent psychophysical studies suggest that these specificities, which have been associated with early retinotopic visual cortex, may in fact not be inherent in perceptual learning and could be related to higher-order brain functions. Here we provide direct electrophysiological evidence in support of this proposition. In a series of event-related potential (ERP) experiments, we recorded high-density electroencephalography (EEG) from human adults over the course of learning in a texture discrimination task (TDT). The results consistently showed that the earliest C1 component (68-84ms), known to reflect V1 activity driven by feedforward inputs, was not modulated by learning regardless of whether the behavioral improvement is location specific or not. In contrast, two later posterior ERP components (posterior P1 and P160-350) over the occipital cortex and one anterior ERP component (anterior P160-350) over the prefrontal cortex were progressively modified day by day. Moreover, the change of the anterior component was closely correlated with improved behavioral performance on a daily basis. Consistent with recent psychophysical and imaging observations, our results indicate that perceptual learning can mainly involve changes in higher-level visual cortex as well as in the neural networks responsible for cognitive functions such as attention and decision making. Copyright © 2015 Elsevier Inc. All rights reserved.
Automated visual inspection system based on HAVNET architecture
NASA Astrophysics Data System (ADS)
Burkett, K.; Ozbayoglu, Murat A.; Dagli, Cihan H.
1994-10-01
In this study, the HAusdorff-Voronoi NETwork (HAVNET) developed at the UMR Smart Engineering Systems Lab is tested in the recognition of mounted circuit components commonly used in printed circuit board assembly systems. The automated visual inspection system used consists of a CCD camera, a neural network based image processing software and a data acquisition card connected to a PC. The experiments are run in the Smart Engineering Systems Lab in the Engineering Management Dept. of the University of Missouri-Rolla. The performance analysis shows that the vision system is capable of recognizing different components under uncontrolled lighting conditions without being affected by rotation or scale differences. The results obtained are promising and the system can be used in real manufacturing environments. Currently the system is being customized for a specific manufacturing application.
Action Intentions Modulate Allocation of Visual Attention: Electrophysiological Evidence
Wykowska, Agnieszka; Schubö, Anna
2012-01-01
In line with the Theory of Event Coding (Hommel et al., 2001), action planning has been shown to affect perceptual processing – an effect that has been attributed to a so-called intentional weighting mechanism (Wykowska et al., 2009; Hommel, 2010). This paper investigates the electrophysiological correlates of action-related modulations of selection mechanisms in visual perception. A paradigm combining a visual search task for size and luminance targets with a movement task (grasping or pointing) was introduced, and the EEG was recorded while participants were performing the tasks. The results showed that the behavioral congruency effects, i.e., better performance in congruent (relative to incongruent) action-perception trials have been reflected by a modulation of the P1 component as well as the N2pc (an ERP marker of spatial attention). These results support the argumentation that action planning modulates already early perceptual processing and attention mechanisms. PMID:23060841
Event-related potentials during visual selective attention in children of alcoholics.
van der Stelt, O; Gunning, W B; Snel, J; Kok, A
1998-12-01
Event-related potentials were recorded from 7- to 18-year-old children of alcoholics (COAs, n = 50) and age- and sex-matched control children (n = 50) while they performed a visual selective attention task. The task was to attend selectively to stimuli with a specified color (red or blue) in an attempt to detect the occurrence of target stimuli. COAs manifested a smaller P3b amplitude to attended-target stimuli over the parietal and occipital scalp than did the controls. A more specific analysis indicated that both the attentional relevance and the target properties of the eliciting stimulus determined the observed P3b amplitude differences between COAs and controls. In contrast, no significant group differences were observed in attention-related earlier occurring event-related potential components, referred to as frontal selection positivity, selection negativity, and N2b. These results represent neurophysiological evidence that COAs suffer from deficits at a late (semantic) level of visual selective information processing that are unlikely to be a consequence of deficits at earlier (sensory) levels of selective processing. The findings support the notion that a reduced visual P3b amplitude in COAs represents a high-level processing dysfunction indicating their increased vulnerability to alcoholism.
Dynamic visual attention: motion direction versus motion magnitude
NASA Astrophysics Data System (ADS)
Bur, A.; Wurtz, P.; Müri, R. M.; Hügli, H.
2008-02-01
Defined as an attentive process in the context of visual sequences, dynamic visual attention refers to the selection of the most informative parts of a video sequence. This paper investigates the contribution of motion in dynamic visual attention, and specifically compares computer models designed with the motion component expressed either as the speed magnitude or as the speed vector. Several computer models, including static features (color, intensity, and orientation) and motion features (magnitude and vector), are considered. Qualitative and quantitative evaluations are performed by comparing the computer model output with human saliency maps obtained experimentally from eye movement recordings. The model suitability is evaluated in various situations (synthetic and real sequences, acquired with fixed and moving camera perspective), showing the advantages and drawbacks of each method as well as its preferred domain of application.
Combined Electrophysiological and Behavioral Evidence for the Suppression of Salient Distractors.
Gaspelin, Nicholas; Luck, Steven J
2018-05-15
Researchers have long debated how salient-but-irrelevant features guide visual attention. Pure stimulus-driven theories claim that salient stimuli automatically capture attention irrespective of goals, whereas pure goal-driven theories propose that an individual's attentional control settings determine whether salient stimuli capture attention. However, recent studies have suggested a hybrid model in which salient stimuli attract visual attention but can be actively suppressed by top-down attentional mechanisms. Support for this hybrid model has primarily come from ERP studies demonstrating that salient stimuli, which fail to capture attention, also elicit a distractor positivity (PD) component, a putative neural index of suppression. Other support comes from a handful of behavioral studies showing that processing at the salient locations is inhibited compared with other locations. The current study was designed to link the behavioral and neural evidence by combining ERP recordings with an experimental paradigm that provides a behavioral measure of suppression. We found that, when a salient distractor item elicited the PD component, processing at the location of this distractor was suppressed below baseline levels. Furthermore, the magnitude of behavioral suppression and the magnitude of the PD component covaried across participants. These findings provide a crucial connection between the behavioral and neural measures of suppression, which opens the door to using the PD component to assess the timing and neural substrates of the behaviorally observed suppression.
A component-based software environment for visualizing large macromolecular assemblies.
Sanner, Michel F
2005-03-01
The interactive visualization of large biological assemblies poses a number of challenging problems, including the development of multiresolution representations and new interaction methods for navigating and analyzing these complex systems. An additional challenge is the development of flexible software environments that will facilitate the integration and interoperation of computational models and techniques from a wide variety of scientific disciplines. In this paper, we present a component-based software development strategy centered on the high-level, object-oriented, interpretive programming language: Python. We present several software components, discuss their integration, and describe some of their features that are relevant to the visualization of large molecular assemblies. Several examples are given to illustrate the interoperation of these software components and the integration of structural data from a variety of experimental sources. These examples illustrate how combining visual programming with component-based software development facilitates the rapid prototyping of novel visualization tools.
Goal-Directed and Habit-Like Modulations of Stimulus Processing during Reinforcement Learning.
Luque, David; Beesley, Tom; Morris, Richard W; Jack, Bradley N; Griffiths, Oren; Whitford, Thomas J; Le Pelley, Mike E
2017-03-15
Recent research has shown that perceptual processing of stimuli previously associated with high-value rewards is automatically prioritized even when rewards are no longer available. It has been hypothesized that such reward-related modulation of stimulus salience is conceptually similar to an "attentional habit." Recording event-related potentials in humans during a reinforcement learning task, we show strong evidence in favor of this hypothesis. Resistance to outcome devaluation (the defining feature of a habit) was shown by the stimulus-locked P1 component, reflecting activity in the extrastriate visual cortex. Analysis at longer latencies revealed a positive component (corresponding to the P3b, from 550-700 ms) sensitive to outcome devaluation. Therefore, distinct spatiotemporal patterns of brain activity were observed corresponding to habitual and goal-directed processes. These results demonstrate that reinforcement learning engages both attentional habits and goal-directed processes in parallel. Consequences for brain and computational models of reinforcement learning are discussed. SIGNIFICANCE STATEMENT The human attentional network adapts to detect stimuli that predict important rewards. A recent hypothesis suggests that the visual cortex automatically prioritizes reward-related stimuli, driven by cached representations of reward value; that is, stimulus-response habits. Alternatively, the neural system may track the current value of the predicted outcome. Our results demonstrate for the first time that visual cortex activity is increased for reward-related stimuli even when the rewarding event is temporarily devalued. In contrast, longer-latency brain activity was specifically sensitive to transient changes in reward value. Therefore, we show that both habit-like attention and goal-directed processes occur in the same learning episode at different latencies. This result has important consequences for computational models of reinforcement learning. Copyright © 2017 the authors 0270-6474/17/373009-09$15.00/0.
Visual Exploration of Semantic Relationships in Neural Word Embeddings
Liu, Shusen; Bremer, Peer-Timo; Thiagarajan, Jayaraman J.; ...
2017-08-29
Constructing distributed representations for words through neural language models and using the resulting vector spaces for analysis has become a crucial component of natural language processing (NLP). But, despite their widespread application, little is known about the structure and properties of these spaces. To gain insights into the relationship between words, the NLP community has begun to adapt high-dimensional visualization techniques. Particularly, researchers commonly use t-distributed stochastic neighbor embeddings (t-SNE) and principal component analysis (PCA) to create two-dimensional embeddings for assessing the overall structure and exploring linear relationships (e.g., word analogies), respectively. Unfortunately, these techniques often produce mediocre or even misleading results and cannot address domain-specific visualization challenges that are crucial for understanding semantic relationships in word embeddings. We introduce new embedding techniques for visualizing semantic and syntactic analogies, and the corresponding tests to determine whether the resulting views capture salient structures. Additionally, we introduce two novel views for a comprehensive study of analogy relationships. Finally, we augment t-SNE embeddings to convey uncertainty information in order to allow a reliable interpretation. Combined, the different views address a number of domain-specific tasks difficult to solve with existing tools.
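As a concrete illustration of the baseline workflow this entry builds on (t-SNE for overall structure, PCA for linear relationships such as analogies), a minimal sketch follows. The word list and the random vectors are placeholders for a real embedding file, and scikit-learn and matplotlib are assumed to be available; this is not the authors' tool, only the common starting point they improve upon.

```python
# Minimal sketch of the baseline workflow discussed above: project word
# vectors to 2-D with t-SNE (overall structure) and PCA (linear
# relationships such as analogies). The vocabulary and the 300-d vectors
# below are stand-ins for a real embedding file.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
words = ["king", "queen", "man", "woman", "paris", "france"]
vectors = rng.normal(size=(len(words), 300))   # placeholder embeddings

tsne_2d = TSNE(n_components=2, perplexity=3, init="pca",
               random_state=0).fit_transform(vectors)
pca_2d = PCA(n_components=2).fit_transform(vectors)

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for ax, coords, title in zip(axes, (tsne_2d, pca_2d), ("t-SNE", "PCA")):
    ax.scatter(coords[:, 0], coords[:, 1])
    for word, (x, y) in zip(words, coords):
        ax.annotate(word, (x, y))
    ax.set_title(title)
plt.tight_layout()
plt.show()
```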
Visual Exploration of Semantic Relationships in Neural Word Embeddings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Shusen; Bremer, Peer-Timo; Thiagarajan, Jayaraman J.
Constructing distributed representations for words through neural language models and using the resulting vector spaces for analysis has become a crucial component of natural language processing (NLP). But, despite their widespread application, little is known about the structure and properties of these spaces. To gain insights into the relationship between words, the NLP community has begun to adapt high-dimensional visualization techniques. Particularly, researchers commonly use t-distributed stochastic neighbor embeddings (t-SNE) and principal component analysis (PCA) to create two-dimensional embeddings for assessing the overall structure and exploring linear relationships (e.g., word analogies), respectively. Unfortunately, these techniques often produce mediocre or even misleading results and cannot address domain-specific visualization challenges that are crucial for understanding semantic relationships in word embeddings. We introduce new embedding techniques for visualizing semantic and syntactic analogies, and the corresponding tests to determine whether the resulting views capture salient structures. Additionally, we introduce two novel views for a comprehensive study of analogy relationships. Finally, we augment t-SNE embeddings to convey uncertainty information in order to allow a reliable interpretation. Combined, the different views address a number of domain-specific tasks difficult to solve with existing tools.
fMRI evidence for sensorimotor transformations in human cortex during smooth pursuit eye movements.
Kimmig, H; Ohlendorf, S; Speck, O; Sprenger, A; Rutschmann, R M; Haller, S; Greenlee, M W
2008-01-01
Smooth pursuit eye movements (SP) are driven by moving objects. The pursuit system processes the visual input signals and transforms this information into an oculomotor output signal. Despite the object's movement on the retina and the eyes' movement in the head, we are able to locate the object in space implying coordinate transformations from retinal to head and space coordinates. To test for the visual and oculomotor components of SP and the possible transformation sites, we investigated three experimental conditions: (I) fixation of a stationary target with a second target moving across the retina (visual), (II) pursuit of the moving target with the second target moving in phase (oculomotor), (III) pursuit of the moving target with the second target remaining stationary (visuo-oculomotor). Precise eye movement data were simultaneously measured with the fMRI data. Visual components of activation during SP were located in the motion-sensitive, temporo-parieto-occipital region MT+ and the right posterior parietal cortex (PPC). Motor components comprised more widespread activation in these regions and additional activations in the frontal and supplementary eye fields (FEF, SEF), the cingulate gyrus and precuneus. The combined visuo-oculomotor stimulus revealed additional activation in the putamen. Possible transformation sites were found in MT+ and PPC. The MT+ activation evoked by the motion of a single visual dot was very localized, while the activation of the same single dot motion driving the eye was rather extended across MT+. The eye movement information appeared to be dispersed across the visual map of MT+. This could be interpreted as a transfer of the one-dimensional eye movement information into the two-dimensional visual map. Potentially, the dispersed information could be used to remap MT+ to space coordinates rather than retinal coordinates and to provide the basis for a motor output control. A similar interpretation holds for our results in the PPC region.
Towards a visual modeling approach to designing microelectromechanical system transducers
NASA Astrophysics Data System (ADS)
Dewey, Allen; Srinivasan, Vijay; Icoz, Evrim
1999-12-01
In this paper, we address initial design capture and system conceptualization of microelectromechanical system transducers based on visual modeling and design. Visual modeling frames the task of generating hardware description language (analog and digital) component models in a manner similar to the task of generating software programming language applications. A structured topological design strategy is employed, whereby microelectromechanical foundry cell libraries are utilized to facilitate the design process of exploring candidate cells (topologies), varying key aspects of the transduction for each topology, and determining which topology best satisfies design requirements. Coupled-energy microelectromechanical system characterizations at a circuit level of abstraction are presented that are based on branch constitutive relations and an overall system of simultaneous differential and algebraic equations. The resulting design methodology is called visual integrated-microelectromechanical VHDL-AMS interactive design (VHDL-AMS is the analog and mixed-signal extension of the VHDL hardware description language).
The functional neuroanatomy of multitasking: combining dual tasking with a short term memory task.
Deprez, Sabine; Vandenbulcke, Mathieu; Peeters, Ron; Emsell, Louise; Amant, Frederic; Sunaert, Stefan
2013-09-01
Insight into the neural architecture of multitasking is crucial when investigating the pathophysiology of multitasking deficits in clinical populations. Presently, little is known about how the brain combines dual-tasking with a concurrent short-term memory task, despite the relevance of this mental operation in daily life and the frequency of complaints related to this process in disease. In this study we aimed to examine how the brain responds when a memory task is added to dual-tasking. Thirty-three right-handed healthy volunteers (20 females, mean age 39.9 ± 5.8) were examined with functional brain imaging (fMRI). The paradigm consisted of two cross-modal single tasks (a visual and an auditory temporal same-different task with short delay), a dual-task combining both single tasks simultaneously, and a multi-task condition combining the dual-task with an additional short-term memory task (a temporal same-different visual task with long delay). Dual-tasking compared to both individual visual and auditory single tasks activated a predominantly right-sided fronto-parietal network and the cerebellum. When adding the additional short-term memory task, a larger and more bilateral frontoparietal network was recruited. We found enhanced activity during multitasking in components of the network that were already involved in dual-tasking, suggesting increased working memory demands, as well as recruitment of multitask-specific components including areas that are likely to be involved in online holding of visual stimuli in short-term memory such as occipito-temporal cortex. These results confirm concurrent neural processing of a visual short-term memory task during dual-tasking and provide evidence for an effective fMRI multitasking paradigm. © 2013 Elsevier Ltd. All rights reserved.
Stekelenburg, Jeroen J; Keetels, Mirjam; Vroomen, Jean
2018-05-01
Numerous studies have demonstrated that the vision of lip movements can alter the perception of auditory speech syllables (McGurk effect). While there is ample evidence for integration of text and auditory speech, there are only a few studies on the orthographic equivalent of the McGurk effect. Here, we examined whether written text, like visual speech, can induce an illusory change in the perception of speech sounds on both the behavioural and neural levels. In a sound categorization task, we found that both text and visual speech changed the identity of speech sounds from an /aba/-/ada/ continuum, but the size of this audiovisual effect was considerably smaller for text than visual speech. To examine at which level in the information processing hierarchy these multisensory interactions occur, we recorded electroencephalography in an audiovisual mismatch negativity (MMN, a component of the event-related potential reflecting preattentive auditory change detection) paradigm in which deviant text or visual speech was used to induce an illusory change in a sequence of ambiguous sounds halfway between /aba/ and /ada/. We found that only deviant visual speech induced an MMN, but not deviant text, which induced a late P3-like positive potential. These results demonstrate that text has much weaker effects on sound processing than visual speech does, possibly because text has different biological roots than visual speech. © 2018 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Sex differences in adults' relative visual interest in female and male faces, toys, and play styles.
Alexander, Gerianne M; Charles, Nora
2009-06-01
An individual's reproductive potential appears to influence response to attractive faces of the opposite sex. Otherwise, relatively little is known about the characteristics of the adult observer that may influence his or her affective evaluation of male and female faces. An untested hypothesis (based on the proposed role of attractive faces in mate selection) is that most women would show greater interest in male faces whereas most men would show greater interest in female faces. Further, evidence from individuals with preferences for same-sex sexual partners suggests that response to attractive male and female faces may be influenced by gender-linked play preferences. To test these hypotheses, visual attention directed to sex-linked stimuli (faces, toys, play styles) was measured in 39 men and 44 women using eye tracking technology. Consistent with our predictions, men directed greater visual attention to all male-typical stimuli and visual attention to male and female faces was associated with visual attention to gender conforming or nonconforming stimuli in a manner consistent with previous research on sexual orientation. In contrast, women showed a visual preference for female-typical toys, but no visual preference for male faces or female-typical play styles. These findings indicate that sex differences in visual processing extend beyond stimuli associated with adult sexual behavior. We speculate that sex differences in visual processing are a component of the expression of gender phenotypes across the lifespan that may reflect sex differences in the motivational properties of gender-linked stimuli.
Nihei, Yuji; Minami, Tetsuto; Nakauchi, Shigeki
2018-01-01
Faces represent important information for social communication, because social information, such as face color, expression, and gender, is obtained from faces. Therefore, individuals tend to find faces unconsciously, even in objects. Why is face-likeness perceived in non-face objects? Previous event-related potential (ERP) studies showed that the P1 component (early visual processing), the N170 component (face detection), and the N250 component (personal detection) reflect the neural processing of faces. Inverted faces were reported to enhance the amplitude and delay the latency of P1 and N170. To investigate face-likeness processing in the brain, we explored the face-related components of the ERP through a face-like evaluation task using natural faces, cars, insects, and Arcimboldo paintings presented upright or inverted. We found a significant correlation between the inversion effect index and face-like scores in P1 in both hemispheres and in N170 in the right hemisphere. These results suggest that judgment of face-likeness occurs in a relatively early stage of face processing.
Nihei, Yuji; Minami, Tetsuto; Nakauchi, Shigeki
2018-01-01
Faces represent important information for social communication, because social information, such as face color, expression, and gender, is obtained from faces. Therefore, individuals tend to find faces unconsciously, even in objects. Why is face-likeness perceived in non-face objects? Previous event-related potential (ERP) studies showed that the P1 component (early visual processing), the N170 component (face detection), and the N250 component (personal detection) reflect the neural processing of faces. Inverted faces were reported to enhance the amplitude and delay the latency of P1 and N170. To investigate face-likeness processing in the brain, we explored the face-related components of the ERP through a face-like evaluation task using natural faces, cars, insects, and Arcimboldo paintings presented upright or inverted. We found a significant correlation between the inversion effect index and face-like scores in P1 in both hemispheres and in N170 in the right hemisphere. These results suggest that judgment of face-likeness occurs in a relatively early stage of face processing. PMID:29503612
The Treatment of Cancer through Hypnosis.
ERIC Educational Resources Information Center
Goldberg, Bruce
1985-01-01
This report traces the immunological components of the cancer process and illustrates how vital a role is played by stress. The work of the Simontons is used to discuss the relationship between stress, the immune system and cancer. Hypnotic visualization techniques and their effects on the immune system are also reviewed. (Author)
The Haptic Paradigm in Education: Challenges and Case Studies
ERIC Educational Resources Information Center
Hamza-Lup, Felix G.; Stanescu, Ioana A.
2010-01-01
The process of learning involves interaction with the learning environment through our five senses (sight, hearing, touch, smell, and taste). Until recently, distance education focused only on the first two of those senses, sight and sound. Internet-based learning environments are predominantly visual with auditory components. With the advent of…
Evaluation of an attributive measurement system in the automotive industry
NASA Astrophysics Data System (ADS)
Simion, C.
2016-08-01
Measurement System Analysis (MSA) is a critical component for any quality improvement process. MSA is defined as an experimental and mathematical method of determining how much the variation within the measurement process contributes to overall process variability, and it falls into two categories: attribute and variable. Most problematic measurement system issues come from measuring attribute data, which are usually the result of human judgment (visual inspection). Because attributive measurement systems are often used in some manufacturing processes, their assessment is important to gain confidence in the inspection process, to see where the problems are in order to eliminate them, and to guide process improvement. The aim of this paper was to address such an issue by presenting a case study carried out at a local company from the Sibiu region supplying products for the automotive industry, specifically the bag (a technical textile component, i.e. the fabric) for the airbag module. Because defects are inherent in every manufacturing process, and because in airbag systems even a minor defect can affect performance and lives depend on this safety feature, stringent visual inspection of the bag material for defects is required. The purpose of this attribute MSA was: to determine whether all inspectors use the same criteria to separate "pass" from "fail" product (i.e. the fabric); to assess company inspection standards against the customer's requirements; to determine how consistently inspectors agree with their own repeated assessments; to identify how inspectors conform to a "known master," which includes how often operators ship defective product and how often operators dispose of acceptable product; and to discover areas where training is required, procedures must be developed, or standards are not available. The results were analyzed using MINITAB software and its Attribute Agreement Analysis module. The conclusion was that the inspection process must be improved through operator training, development of visual aids/boundary samples, and establishment of standards and set-up procedures.
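A minimal sketch of the appraiser-versus-standard portion of such an attribute agreement analysis is shown below, assuming Python with scikit-learn rather than MINITAB; the pass/fail ratings are invented illustration data, not the study's measurements.

```python
# Minimal sketch of the appraiser-vs-standard part of an attribute MSA:
# each inspector's pass/fail calls on the same parts are compared with the
# known reference ("master") using percent agreement and Cohen's kappa.
# The ratings below are made-up illustration data.
from sklearn.metrics import cohen_kappa_score

reference = ["pass", "pass", "fail", "pass", "fail", "fail", "pass", "fail"]
inspectors = {
    "A": ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"],
    "B": ["pass", "fail", "fail", "pass", "fail", "fail", "pass", "fail"],
}

for name, calls in inspectors.items():
    agreement = sum(c == r for c, r in zip(calls, reference)) / len(reference)
    kappa = cohen_kappa_score(calls, reference)
    print(f"Inspector {name}: {agreement:.0%} agreement vs standard, kappa = {kappa:.2f}")
```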
Towards Infusing Giovanni with a Semantic and Provenance Aware Visualization System
NASA Astrophysics Data System (ADS)
Del Rio, N.; Pinheiro da Silva, P.; Leptoukh, G. G.; Lynnes, C.
2011-12-01
Giovanni is a Web-based application developed by GES DISC that provides simple and intuitive ways to visualize, analyze, and access vast amounts of Earth science remotely sensed data. Currently, the Giovanni visualization module is aware only of physical (i.e., hard-coded) links between data and services and consequently cannot be easily adapted to new visualization scenarios. VisKo, a semantically enabled visualization framework, can be leveraged by Giovanni as a semantic bridge between data and visualization. VisKo relates data and visualization services at conceptual (i.e., ontological) levels and relies on reasoning systems to leverage these conceptual relationships and automatically infer physical links, facilitating an adaptable environment for new visualization scenarios. This is particularly useful for Giovanni, which has been constantly retrofitted with new visualization software packages to keep up with advances in visualization capabilities. During our prototype integration of Giovanni with VisKo, a number of future steps were identified that, if implemented, could cement the integration and promote our prototype to operational status. A number of integration issues arose, including the mediation of different languages used by each system to characterize datasets; VisKo relies on semantic data characterization to "match up" data with visualization processes. It was necessary to identify mappings between Giovanni XML provenance and Proof Markup Language, which is understood by VisKo. Although a translator was implemented based on identified mappings, a more elegant solution is to develop a domain data ontology specific to Giovanni and to "align" this ontology with PML, enabling VisKo to directly ingest the semantic descriptions of Giovanni data. Additionally, the relationship between dataset components (e.g., variables and attributes) and visualization plot components (e.g., geometries, axes, titles) should also be modeled. In Giovanni, metadata descriptions are used to configure the different properties of the plots such as titles, color tables, and variable-to-axis bindings. Giovanni services rely on a set of custom attributes and naming conventions that help identify the relationships between dataset components and plot properties. VisKo visualization services, however, are generic modules that do not rely on any domain-specific conventions for identifying relationships between dataset attributes and plot configuration. Rather, VisKo services rely on parameters to configure specific behaviors of the generic services. The relationship between VisKo parameters and plot properties, however, has yet to be formally documented, partly because VisKo regards plots as holistic entities without any internal structure from which to relate parameters. We understand the need for a visualization plot ontology that defines plot components, their retinal properties, such as position and color, and the relationship between plot properties and the controlling service parameter sets. The plot ontology would also be linked to our domain data ontology, providing VisKo with a comprehensive understanding of how data attributes can cue the configuration of plots, and how a specific plot configuration relates to service parameters.
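The proposed alignment of a Giovanni-specific data ontology with PML could, in principle, be expressed as a few OWL axioms. The sketch below uses rdflib; the namespace URIs and class names (Dataset, Variable, Information, Attribute) are hypothetical placeholders, since the actual ontology terms are not given in this entry.

```python
# Sketch of the kind of alignment axioms discussed above, built with rdflib.
# All URIs and class names are hypothetical placeholders; the real Giovanni
# and PML ontologies would supply their own terms.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDFS

GIO = Namespace("http://example.org/giovanni#")   # hypothetical Giovanni data ontology
PML = Namespace("http://example.org/pml#")        # hypothetical PML namespace

g = Graph()
g.bind("gio", GIO)
g.bind("pml", PML)

# Treat a Giovanni dataset as the kind of information artifact VisKo expects.
g.add((GIO.Dataset, OWL.equivalentClass, PML.Information))
# Treat a Giovanni variable as a more specific kind of data attribute.
g.add((GIO.Variable, RDFS.subClassOf, PML.Attribute))

print(g.serialize(format="turtle"))
```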
NASA Astrophysics Data System (ADS)
Hagan, Catherine Beverly Anne Rodin
This thesis explores the fundamental experiential connections between visual art and science. The author links the academic text to multiple visual essays in an exploration of the holistic teaching and learning methodologies in visual art pedagogy. An examination of the educational philosophies of Dewey and Steiner supports the focus on experience as education. The philosophies of Eisner, Goudy, Dewey, London, Arnheim and Wallschlaeger provide a foundation for the overall structure of the separate concepts of visual art. The inclusion of visual essays and the hermeneutical mode in which they are sequenced follows the theories and examples of Sontag, Barthes, Berger, Silvers and Wilde. Creativity studies conducted by Csikszentmihalyi and the theories of invention in science investigated by Kuhn and Judson connect the creative process in visual art to science. Sheen, Feynman, D'Arcy Thompson and Capra offer a perspective on the learning, teaching and evaluation methodologies and philosophy in the area of science, particularly physics, botany and geometry. The examination of these theories as background, combined with narrative experiences and the exploratory visual component, draw out conclusions and implications about the untapped potential of visual art in education.
Audiovisual integration for speech during mid-childhood: electrophysiological evidence.
Kaganovich, Natalya; Schumaker, Jennifer
2014-12-01
Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7-8-year-olds and 10-11-year-olds) and in adults. Audiovisual speech led to the attenuation of the N1 and P2 components in all groups of participants, suggesting that the neural mechanisms underlying these effects are functional by early school years. Additionally, while the reduction in N1 was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. The difference in the hemispheric distribution of the N1 and P2 attenuation supports the idea that these components index at least somewhat disparate neural processes within the context of audiovisual speech perception. Copyright © 2014 Elsevier Inc. All rights reserved.
Audiovisual integration for speech during mid-childhood: Electrophysiological evidence
Kaganovich, Natalya; Schumaker, Jennifer
2014-01-01
Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7–8-year-olds and 10–11-year-olds) and in adults. Audiovisual speech led to the attenuation of the N1 and P2 components in all groups of participants, suggesting that the neural mechanisms underlying these effects are functional by early school years. Additionally, while the reduction in N1 was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. The difference in the hemispheric distribution of the N1 and P2 attenuation supports the idea that these components index at least somewhat disparate neural processes within the context of audiovisual speech perception. PMID:25463815
Star formation history: Modeling of visual binaries
NASA Astrophysics Data System (ADS)
Gebrehiwot, Y. M.; Tessema, S. B.; Malkov, O. Yu.; Kovaleva, D. A.; Sytov, A. Yu.; Tutukov, A. V.
2018-05-01
Most stars form in binary or multiple systems. Their evolution is defined by masses of components, orbital separation and eccentricity. In order to understand star formation and evolutionary processes, it is vital to find distributions of physical parameters of binaries. We have carried out Monte Carlo simulations in which we simulate different pairing scenarios: random pairing, primary-constrained pairing, split-core pairing, and total and primary pairing in order to get distributions of binaries over physical parameters at birth. Next, for comparison with observations, we account for stellar evolution and selection effects. Brightness, radius, temperature, and other parameters of components are assigned or calculated according to approximate relations for stars in different evolutionary stages (main-sequence stars, red giants, white dwarfs, relativistic objects). Evolutionary stage is defined as a function of system age and component masses. We compare our results with the observed IMF, binarity rate, and binary mass-ratio distributions for field visual binaries to find initial distributions and pairing scenarios that produce observed distributions.
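The random-pairing scenario mentioned above is the simplest to simulate: draw both component masses independently from an initial mass function and histogram the mass ratio. The sketch below assumes a single-slope Salpeter-like IMF and is only illustrative of the Monte Carlo approach, not a reproduction of the paper's model, stellar evolution treatment, or selection effects.

```python
# Minimal Monte Carlo sketch of the "random pairing" scenario: both
# component masses are drawn independently from a Salpeter-like IMF
# (dN/dM ~ M^-2.35 between 0.1 and 10 solar masses) and the mass ratio
# q = M_secondary / M_primary is histogrammed. Other pairing scenarios
# (primary-constrained, split-core, ...) would change how M2 is chosen.
import numpy as np

def sample_imf(n, alpha=2.35, m_min=0.1, m_max=10.0, rng=None):
    """Inverse-transform sampling of a power-law IMF dN/dM ~ M^-alpha."""
    rng = rng or np.random.default_rng()
    u = rng.random(n)
    a = 1.0 - alpha
    return (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)

rng = np.random.default_rng(42)
m1 = sample_imf(100_000, rng=rng)
m2 = sample_imf(100_000, rng=rng)
q = np.minimum(m1, m2) / np.maximum(m1, m2)   # mass ratio in (0, 1]

hist, edges = np.histogram(q, bins=20, range=(0.0, 1.0), density=True)
for lo, hi, h in zip(edges[:-1], edges[1:], hist):
    print(f"q in [{lo:.2f}, {hi:.2f}): {h:.3f}")
```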
Figure-ground activity in V1 and guidance of saccadic eye movements.
Supèr, Hans
2006-01-01
Every day we shift our gaze about 150,000 times, mostly without noticing it. The directions of these gaze shifts are not random but are directed by sensory information and internal factors. After each movement the eyes hold still for a brief moment so that visual information at the center of our gaze can be processed in detail. This means that visual information at the saccade target location is sufficient to accurately guide the gaze shift, yet is not sufficiently processed to be fully perceived. In this paper I will discuss the possible role of activity in the primary visual cortex (V1), in particular figure-ground activity, in oculo-motor behavior. Figure-ground activity occurs during the late response period of V1 neurons and correlates with perception. The strength of figure-ground responses predicts the direction and moment of saccadic eye movements. The superior colliculus, a gaze control center that integrates visual and motor signals, receives direct anatomical connections from V1. These projections may convey the perceptual information that is required for appropriate gaze shifts. In conclusion, figure-ground activity in V1 may act as an intermediate component linking visual and motor signals.
The taste-visual cross-modal Stroop effect: An event-related brain potential study.
Xiao, X; Dupuis-Roy, N; Yang, X L; Qiu, J F; Zhang, Q L
2014-03-28
Event-related potentials (ERPs) were recorded to explore, for the first time, the electrophysiological correlates of the taste-visual cross-modal Stroop effect. Eighteen healthy participants were presented with a taste stimulus and a food image, and asked to categorize the image as "sweet" or "sour" by pressing the relevant button as quickly as possible. Accurate categorization of the image was faster when it was presented with a congruent taste stimulus (e.g., sour taste/image of lemon) than with an incongruent one (e.g., sour taste/image of ice cream). ERP analyses revealed a negative difference component (ND430-620) between 430 and 620 ms in the taste-visual cross-modal Stroop interference. Dipole source analysis of the difference wave (incongruent minus congruent) indicated that two generators localized in the prefrontal cortex and the parahippocampal gyrus contributed to this taste-visual cross-modal Stroop effect. This result suggests that the prefrontal cortex is associated with the process of conflict control in the taste-visual cross-modal Stroop effect. Also, we speculate that the parahippocampal gyrus is associated with the processing of discordant information in the taste-visual cross-modal Stroop effect. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
Attention modulates specific motor cortical circuits recruited by transcranial magnetic stimulation.
Mirdamadi, J L; Suzuki, L Y; Meehan, S K
2017-09-17
Skilled performance and acquisition are dependent upon afferent input to motor cortex. The present study used short-latency afferent inhibition (SAI) to probe how manipulation of sensory afference by attention affects different circuits projecting to pyramidal tract neurons in motor cortex. SAI was assessed in the first dorsal interosseous muscle while participants performed a low or high attention-demanding visual detection task. SAI was evoked by preceding a suprathreshold transcranial magnetic stimulus with electrical stimulation of the median nerve at the wrist. To isolate different afferent intracortical circuits in motor cortex, SAI was evoked using either posterior-anterior (PA) or anterior-posterior (AP) monophasic current. In an independent sample, somatosensory processing during the same attention-demanding visual detection tasks was assessed using somatosensory-evoked potentials (SEP) elicited by median nerve stimulation. SAI elicited by AP TMS was reduced under high compared to low visual attention demands. SAI elicited by PA TMS was not affected by visual attention demands. SEPs revealed that the high visual attention load reduced the fronto-central P20-N30 but not the contralateral parietal N20-P25 SEP component. P20-N30 reduction confirmed that the visual attention task altered sensory afference. The current results offer further support that PA and AP TMS recruit different neuronal circuits. AP circuits may be one substrate by which cognitive strategies shape sensorimotor processing during skilled movement by altering sensory processing in premotor areas. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
Vibrational Analysis of Engine Components Using Neural-Net Processing and Electronic Holography
NASA Technical Reports Server (NTRS)
Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.
1997-01-01
The use of computational-model trained artificial neural networks to acquire damage specific information from electronic holograms is discussed. A neural network is trained to transform two time-average holograms into a pattern related to the bending-induced-strain distribution of the vibrating component. The bending distribution is very sensitive to component damage unlike the characteristic fringe pattern or the displacement amplitude distribution. The neural network processor is fast for real-time visualization of damage. The two-hologram limit makes the processor more robust to speckle pattern decorrelation. Undamaged and cracked cantilever plates serve as effective objects for testing the combination of electronic holography and neural-net processing. The requirements are discussed for using finite-element-model trained neural networks for field inspections of engine components. The paper specifically discusses neural-network fringe pattern analysis in the presence of the laser speckle effect and the performances of two limiting cases of the neural-net architecture.
Vibrational Analysis of Engine Components Using Neural-Net Processing and Electronic Holography
NASA Technical Reports Server (NTRS)
Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.
1998-01-01
The use of computational-model trained artificial neural networks to acquire damage specific information from electronic holograms is discussed. A neural network is trained to transform two time-average holograms into a pattern related to the bending-induced-strain distribution of the vibrating component. The bending distribution is very sensitive to component damage unlike the characteristic fringe pattern or the displacement amplitude distribution. The neural network processor is fast for real-time visualization of damage. The two-hologram limit makes the processor more robust to speckle pattern decorrelation. Undamaged and cracked cantilever plates serve as effective objects for testing the combination of electronic holography and neural-net processing. The requirements are discussed for using finite-element-model trained neural networks for field inspections of engine components. The paper specifically discusses neural-network fringe pattern analysis in the presence of the laser speckle effect and the performances of two limiting cases of the neural-net architecture.
Simultaneous chromatic and luminance human electroretinogram responses.
Parry, Neil R A; Murray, Ian J; Panorgias, Athanasios; McKeefry, Declan J; Lee, Barry B; Kremers, Jan
2012-07-01
The parallel processing of information forms an important organisational principle of the primate visual system. Here we describe experiments which use a novel chromatic–achromatic temporal compound stimulus to simultaneously identify colour and luminance specific signals in the human electroretinogram (ERG). Luminance and chromatic components are separated in the stimulus; the luminance modulation has twice the temporal frequency of the chromatic modulation. ERGs were recorded from four trichromatic and two dichromatic subjects (1 deuteranope and 1 protanope). At isoluminance, the fundamental (first harmonic) response was elicited by the chromatic component in the stimulus. The trichromatic ERGs possessed low-pass temporal tuning characteristics, reflecting the activity of parvocellular post-receptoral mechanisms. There was very little first harmonic response in the dichromats' ERGs. The second harmonic response was elicited by the luminance modulation in the compound stimulus and showed, in all subjects, band-pass temporal tuning characteristic of magnocellular activity. Thus it is possible to concurrently elicit ERG responses from the human retina which reflect processing in both chromatic and luminance pathways. As well as providing a clear demonstration of the parallel nature of chromatic and luminance processing in the human retina, the differences that exist between ERGs from trichromatic and dichromatic subjects point to the existence of interactions between afferent post-receptoral pathways that are in operation from the earliest stages of visual processing.
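Because the chromatic modulation runs at the stimulus frequency F and the luminance modulation at 2F, the two contributions can be read off the Fourier spectrum of a single recorded response. A minimal sketch on simulated data follows; the sampling rate, frequencies, and amplitudes are illustrative assumptions, not the study's recording parameters.

```python
# Sketch of the harmonic analysis implied by the compound stimulus: the
# chromatic modulation at F drives the first harmonic and the luminance
# modulation at 2F drives the second harmonic, so both can be read off
# the Fourier spectrum of one response. Simulated waveform, not real ERG data.
import numpy as np

fs = 1000.0                     # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)   # 1 s of signal
f_chrom = 4.0                   # chromatic modulation frequency (Hz)

erg = (1.5 * np.sin(2 * np.pi * f_chrom * t)        # chromatic (1st harmonic)
       + 0.8 * np.sin(2 * np.pi * 2 * f_chrom * t)  # luminance (2nd harmonic)
       + 0.3 * np.random.default_rng(0).normal(size=t.size))

spectrum = np.fft.rfft(erg)
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
amps = 2 * np.abs(spectrum) / t.size   # single-sided amplitude spectrum

first = amps[np.argmin(np.abs(freqs - f_chrom))]
second = amps[np.argmin(np.abs(freqs - 2 * f_chrom))]
print(f"1st harmonic (chromatic): {first:.2f}, 2nd harmonic (luminance): {second:.2f}")
```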
Design of smart home sensor visualizations for older adults.
Le, Thai; Reeder, Blaine; Chung, Jane; Thompson, Hilaire; Demiris, George
2014-01-01
Smart home sensor systems provide a valuable opportunity to continuously and unobtrusively monitor older adult wellness. However, the density of sensor data can be challenging to visualize, especially for an older adult consumer with distinct user needs. We describe the design of sensor visualizations informed by interviews with older adults. The goal of the visualizations is to present sensor activity data to an older adult consumer audience in a way that supports both longitudinal detection of trends and on-demand display of activity details for any chosen day. The design process is grounded through participatory design with older adult interviews during a six-month pilot sensor study. Through a secondary analysis of interviews, we identified the visualization needs of older adults. We incorporated these needs with cognitive perceptual visualization guidelines and the emotional design principles of Norman to develop sensor visualizations. We present a sensor visualization design that integrates both temporal and spatial components of information. The visualization supports longitudinal detection of trends while allowing the viewer to view activity within a specific date. Appropriately designed visualizations for older adults not only provide insight into health and wellness, but also are a valuable resource to promote engagement within care.
Design of smart home sensor visualizations for older adults.
Le, Thai; Reeder, Blaine; Chung, Jane; Thompson, Hilaire; Demiris, George
2014-07-24
Smart home sensor systems provide a valuable opportunity to continuously and unobtrusively monitor older adult wellness. However, the density of sensor data can be challenging to visualize, especially for an older adult consumer with distinct user needs. We describe the design of sensor visualizations informed by interviews with older adults. The goal of the visualizations is to present sensor activity data to an older adult consumer audience in a way that supports both longitudinal detection of trends and on-demand display of activity details for any chosen day. The design process is grounded through participatory design with older adult interviews during a six-month pilot sensor study. Through a secondary analysis of interviews, we identified the visualization needs of older adults. We incorporated these needs with cognitive perceptual visualization guidelines and the emotional design principles of Norman to develop sensor visualizations. We present a sensor visualization design that integrates both temporal and spatial components of information. The visualization supports longitudinal detection of trends while allowing the viewer to view activity within a specific date. Appropriately designed visualizations for older adults not only provide insight into health and wellness, but also are a valuable resource to promote engagement within care.
Kiefer, Markus; Ansorge, Ulrich; Haynes, John-Dylan; Hamker, Fred; Mattler, Uwe; Verleger, Rolf; Niedeggen, Michael
2011-01-01
Psychological and neuroscience approaches have promoted much progress in elucidating the cognitive and neural mechanisms that underlie phenomenal visual awareness during the last decades. In this article, we provide an overview of the latest research investigating important phenomena in conscious and unconscious vision. We identify general principles to characterize conscious and unconscious visual perception, which may serve as important building blocks for a unified model to explain the plethora of findings. We argue that in particular the integration of principles from both conscious and unconscious vision is advantageous and provides critical constraints for developing adequate theoretical models. Based on the principles identified in our review, we outline essential components of a unified model of conscious and unconscious visual perception. We propose that awareness refers to consolidated visual representations, which are accessible to the entire brain and therefore globally available. However, visual awareness not only depends on consolidation within the visual system, but is additionally the result of a post-sensory gating process, which is mediated by higher-level cognitive control mechanisms. We further propose that amplification of visual representations by attentional sensitization is not exclusive to the domain of conscious perception, but also applies to visual stimuli, which remain unconscious. Conscious and unconscious processing modes are highly interdependent with influences in both directions. We therefore argue that exactly this interdependence renders a unified model of conscious and unconscious visual perception valuable. Computational modeling jointly with focused experimental research could lead to a better understanding of the plethora of empirical phenomena in consciousness research. PMID:22253669
Realistic tissue visualization using photoacoustic image
NASA Astrophysics Data System (ADS)
Cho, Seonghee; Managuli, Ravi; Jeon, Seungwan; Kim, Jeesu; Kim, Chulhong
2018-02-01
Visualization methods are very important in biomedical imaging. As a technology for understanding living systems, biomedical imaging has the unique advantage of providing information in its most intuitive form, the image. This advantage can be greatly enhanced by choosing an appropriate visualization method. This is more complicated for volumetric data. Volume data have the advantage of containing 3D spatial information. Unfortunately, the data themselves cannot directly convey this potential value. Because images are always displayed in 2D space, visualization is the key and creates the real value of volume data. However, image processing of 3D data requires complicated visualization algorithms and imposes a high computational burden. Therefore, specialized algorithms and computing optimization are important issues for volume data. Photoacoustic imaging is a unique imaging modality that can visualize the optical properties of deep tissue. Because the color of the organism is mainly determined by its light-absorbing components, photoacoustic data can provide color information of tissue that is closer to real tissue color. In this research, we developed realistic tissue visualization using acoustic-resolution photoacoustic volume data. To achieve realistic visualization, we designed a specialized color transfer function that depends on the depth of the tissue from the skin. We used a direct ray casting method and processed color while computing the shader parameters. In the rendered results, we succeeded in obtaining texture similar to real tissue from the photoacoustic data. Surface-reflected rays were visualized in white, and reflections from deep tissue were visualized in red, like skin tissue. We also implemented the algorithm in CUDA in an OpenGL environment for real-time interactive imaging.
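The central idea, a color transfer function that depends on depth below the skin, can be sketched compactly: superficial reflections map toward white and deeper absorbers toward a skin-like red, with opacity following signal strength. The breakpoints and colors below are schematic stand-ins for the authors' transfer function, not their actual parameters.

```python
# Schematic sketch of a depth-dependent color transfer function of the kind
# described above: shallow, strongly reflecting voxels are mapped toward
# white, while deeper absorbers are mapped toward a skin-like red.
# The depth breakpoints and colors are illustrative placeholders.
import numpy as np

def transfer_function(intensity, depth_mm):
    """Return an (R, G, B, A) color for a voxel given its photoacoustic
    intensity (0..1) and its depth below the skin surface in mm."""
    white = np.array([1.0, 1.0, 1.0])
    red = np.array([0.8, 0.2, 0.2])
    # Blend from white at the surface to red at >= 5 mm depth.
    w = np.clip(depth_mm / 5.0, 0.0, 1.0)
    rgb = (1.0 - w) * white + w * red
    alpha = np.clip(intensity, 0.0, 1.0)   # opacity follows signal strength
    return (*rgb, alpha)

print(transfer_function(0.9, depth_mm=0.5))  # near-surface: close to white
print(transfer_function(0.9, depth_mm=8.0))  # deep tissue: reddish
```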
Architectures for single-chip image computing
NASA Astrophysics Data System (ADS)
Gove, Robert J.
1992-04-01
This paper will focus on the architectures of VLSI programmable processing components for image computing applications. TI, the maker of industry-leading RISC, DSP, and graphics components, has developed an architecture for a new generation of image processors capable of implementing a plurality of image, graphics, video, and audio computing functions. We will show that the use of a single-chip heterogeneous MIMD parallel architecture best suits this class of processors: those which will dominate the desktop multimedia, document imaging, computer graphics, and visualization systems of this decade.
A Neuro-Oncology Workstation for Structuring, Modeling, and Visualizing Patient Records
Hsu, William; Arnold, Corey W.; Taira, Ricky K.
2016-01-01
The patient medical record contains a wealth of information consisting of prior observations, interpretations, and interventions that need to be interpreted and applied towards decisions regarding current patient care. Given the time constraints and the large—often extraneous—amount of data available, clinicians are tasked with the challenge of performing a comprehensive review of how a disease progresses in individual patients. To facilitate this process, we demonstrate a neuro-oncology workstation that assists in structuring and visualizing medical data to promote an evidence-based approach for understanding a patient’s record. The workstation consists of three components: 1) a structuring tool that incorporates natural language processing to assist with the extraction of problems, findings, and attributes for structuring observations, events, and inferences stated within medical reports; 2) a data modeling tool that provides a comprehensive and consistent representation of concepts for the disease-specific domain; and 3) a visual workbench for visualizing, navigating, and querying the structured data to enable retrieval of relevant portions of the patient record. We discuss this workstation in the context of reviewing cases of glioblastoma multiforme patients. PMID:27583308
A Neuro-Oncology Workstation for Structuring, Modeling, and Visualizing Patient Records.
Hsu, William; Arnold, Corey W; Taira, Ricky K
2010-11-01
The patient medical record contains a wealth of information consisting of prior observations, interpretations, and interventions that need to be interpreted and applied towards decisions regarding current patient care. Given the time constraints and the large (often extraneous) amount of data available, clinicians are tasked with the challenge of performing a comprehensive review of how a disease progresses in individual patients. To facilitate this process, we demonstrate a neuro-oncology workstation that assists in structuring and visualizing medical data to promote an evidence-based approach for understanding a patient's record. The workstation consists of three components: 1) a structuring tool that incorporates natural language processing to assist with the extraction of problems, findings, and attributes for structuring observations, events, and inferences stated within medical reports; 2) a data modeling tool that provides a comprehensive and consistent representation of concepts for the disease-specific domain; and 3) a visual workbench for visualizing, navigating, and querying the structured data to enable retrieval of relevant portions of the patient record. We discuss this workstation in the context of reviewing cases of glioblastoma multiforme patients.
Single-Trial Analysis of V1 Responses Suggests Two Transmission States
NASA Technical Reports Server (NTRS)
Shah, A. S.; Knuth, K. H.; Truccolo, W. A.; Mehta, A. D.; McGinnis, T.; OConnell, N.; Ding, M.; Bressler, S. L.; Schroeder, C. E.
2002-01-01
Sensory processing in the visual, auditory, and somatosensory systems is often studied by recording electrical activity in response to a stimulus of interest. Typically, multiple trial responses to the stimulus are averaged to isolate the stereotypic response from noise. However, averaging ignores dynamic variability in the neuronal response, which is potentially critical to understanding stimulus-processing schemes. Thus, we developed the multiple component, Event-Related Potential (mcERP) model. This model asserts that multiple components, defined as stereotypic waveforms, comprise the stimulus-evoked response and that these components may vary in amplitude and latency from trial to trial. Application of this model to data recorded simultaneously from all six laminae of V1 in an awake, behaving monkey performing a visual discrimination yielded three components. The first component localized to granular V1, the second was located in supragranular V1, and the final component displayed a multi-laminar distribution. These modeling results, which take into account single-trial response dynamics, illustrated that the initial activation of V1 occurs in the granular layer followed by activation in the supragranular layers. This finding is expected because the average response in those layers demonstrates the same progression and because anatomical evidence suggests that the feedforward input to V1 enters the granular layer and progresses to supragranular layers. In addition to these findings, the granular component of the model displayed several interesting trial-to-trial characteristics including (1) a bimodal latency distribution, (2) a latency-related variation in response amplitude, (3) a latency correlation with the supragranular component, and (4) an amplitude and latency association with the multi-laminar component. Direct analyses of the single-trial data were consistent with these model predictions. These findings suggest that V1 has at least two transmission states, which may be modulated by various effects such as attention, dynamics in the local EEG rhythm, or variation in sensory inputs.
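The mcERP assumption, stereotyped component waveforms whose amplitude and latency jitter from trial to trial, is easy to illustrate with a small generative sketch; averaging such trials shows how the mean response smears exactly the variability the model is meant to recover. The Gaussian bump waveforms and all parameters below are invented for illustration and are not the mcERP estimation procedure itself.

```python
# Generative sketch of the mcERP idea: each trial is a sum of stereotyped
# component waveforms whose amplitude and latency jitter from trial to trial.
# Simple Gaussian bumps stand in for the real component waveforms.
import numpy as np

fs = 1000.0
t = np.arange(0, 0.4, 1 / fs)              # 400 ms epoch
rng = np.random.default_rng(1)

def component(latency_s, width_s=0.02):
    """Stereotyped waveform: a Gaussian bump centered at the given latency."""
    return np.exp(-0.5 * ((t - latency_s) / width_s) ** 2)

n_trials = 200
trials = np.zeros((n_trials, t.size))
for i in range(n_trials):
    amp1 = 1.0 + 0.3 * rng.normal()        # "granular" component, jittered
    lat1 = 0.06 + 0.01 * rng.normal()
    amp2 = 0.7 + 0.2 * rng.normal()        # "supragranular" component, jittered
    lat2 = 0.10 + 0.01 * rng.normal()
    noise = 0.5 * rng.normal(size=t.size)
    trials[i] = amp1 * component(lat1) + amp2 * component(lat2) + noise

average_erp = trials.mean(axis=0)          # latency jitter smears the average
print(f"peak of averaged response at {t[np.argmax(average_erp)] * 1000:.0f} ms")
```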
Dissociation between the neural correlates of conscious face perception and visual attention.
Navajas, Joaquin; Nitka, Aleksander W; Quian Quiroga, Rodrigo
2017-08-01
Given the higher chance of recognizing attended compared to unattended stimuli, the specific neural correlates of these two processes, attention and awareness, tend to be intermingled in experimental designs. In this study, we dissociated the neural correlates of conscious face perception from the effects of visual attention. To do this, we presented faces at the threshold of awareness and manipulated attention through the use of exogenous prestimulus cues. We show that the N170 component, a scalp EEG marker of face perception, was modulated independently by attention and by awareness. An earlier P1 component was not modulated by either of the two effects, and a later P3 component was indicative of awareness but not of attention. These claims are supported by converging evidence from (a) modulations observed in the average evoked potentials, (b) correlations between neural and behavioral data at the single-subject level, and (c) single-trial analyses. Overall, our results show a clear dissociation between the neural substrates of attention and awareness. Based on these results, we argue that conscious face perception is triggered by a boost in face-selective cortical ensembles that can be modulated by, but are still independent from, visual attention. © 2017 Society for Psychophysiological Research.
Schroeder, C E; Mehta, A D; Givre, S J
1998-01-01
We investigated the spatiotemporal activation pattern produced by a single visual stimulus across cerebral cortical regions in awake monkeys. Laminar profiles of postsynaptic potentials and action potentials were indexed with current source density (CSD) and multiunit activity profiles, respectively. Locally, we found contrasting activation profiles in dorsal and ventral stream areas. The former, like V1 and V2, exhibited a 'feedforward' profile, with excitation beginning at the depth of lamina 4, followed by activation of the extragranular laminae. The latter often displayed a multilaminar/columnar profile, with initial responses distributed across the laminae and reflecting modulation rather than excitation; CSD components were accompanied either by no changes or by suppression of action potentials. System-wide, response latencies indicated a large dorsal/ventral stream latency advantage, which generalizes across a wide range of methods. This predicts a specific temporal ordering of dorsal and ventral stream components of visual analysis, as well as specific patterns of dorsal-ventral stream interaction. Our findings support a hierarchical model of cortical organization that combines serial and parallel elements. Critical in such a model is the recognition that processing within a location typically entails multiple temporal components or 'waves' of activity, driven by input conveyed over heterogeneous pathways from the retina.
Aiello, Marilena; Merola, Sheila; Lasaponara, Stefano; Pinto, Mario; Tomaiuolo, Francesco; Doricchi, Fabrizio
2018-01-31
The possibility of allocating attentional resources to the "global" shape or to the "local" details of pictorial stimuli aids visual processing. Investigations with hierarchical Navon letters, which are large "global" letters made up of small "local" ones, consistently demonstrate a right-hemisphere advantage for global processing and a left-hemisphere advantage for local processing. Here we investigated how the visual and phonological features of the global and local components of Navon letters influence these hemispheric advantages. In a first study in healthy participants, we contrasted the hemispheric processing of hierarchical letters with global and local items competing for response selection against the processing of hierarchical letters in which a letter, a false-letter conveying no phonological information, or a geometrical shape presented at the unattended level did not compete for response selection. In a second study, we investigated the hemispheric processing of hierarchical stimuli in which global and local letters were both visually and phonologically congruent (e.g. a large uppercase G made of smaller uppercase Gs), visually incongruent and phonologically congruent (e.g. a large uppercase G made of small lowercase gs), or visually incongruent and phonologically incongruent (e.g. a large uppercase G made of small lowercase or uppercase Ms). In a third study, we administered the same tasks to a right-brain-damaged patient with a lesion involving pre-striate areas engaged by global processing. The results of the first two experiments showed that the global abilities of the left hemisphere are limited because of its strong susceptibility to interference from local letters, even when these are irrelevant to the task. Phonological features played a crucial role in this interference, which was fully maintained even when letters at the global and local levels were presented in different uppercase vs. lowercase formats. In contrast, when local features conveyed no phonological information, the left hemisphere showed preserved global processing abilities. These findings were supported by the study of the right-brain-damaged patient. These results offer a new look at hemispheric dominance in the attentional processing of the global and local levels of hierarchical stimuli. Copyright © 2017 Elsevier Ltd. All rights reserved.
2012-01-01
Background There is at present growing empirical evidence, deriving from different lines of ERP research, that, contrary to earlier observations, the earliest sensory visual response, known as the C1 component or P/N80 and generated within the striate cortex, can be modulated by selective attention to visual stimulus features. Up to now, evidence of this modulation has been related to spatial location and to simple features such as spatial frequency, luminance, and texture. Additionally, neurophysiological conditions such as emotion, vigilance, the reflexive or voluntary nature of input attentional selection, and workload have also been related to C1 modulations, although at least the workload findings remain controversial. No information is available, at present, for the attentional selection of objects. Methods In this study object- and space-based attention mechanisms were conjointly investigated by presenting complex, familiar shapes of artefacts and animals, intermixed with distracters, in different tasks requiring the selection of a relevant target category within a relevant spatial location, while ignoring the other shape categories within this location and, overall, all the categories at an irrelevant location. EEG was recorded from 30 scalp electrode sites in 21 right-handed participants. Results and Conclusions ERP findings showed that visual processing was modulated by both shape and location relevance per se, beginning separately at the latency of the early phase of an early negativity (60-80 ms) at mesial scalp sites, consistent with the C1 component, and of a positivity at more lateral sites. The data also showed that the attentional modulation progressed conjointly at the latency of the subsequent P1 (100-120 ms) and N1 (120-180 ms), as well as at later-latency components. These findings support the views that (1) V1 may be modulated early by direct top-down influences and participates in the attentional selection of objects, besides simple features; and (2) the selection of object spatial and non-spatial features might begin with an early, parallel detection of a target object in the visual field, followed by the progressive focusing of spatial attention onto the location of an actual target for its identification, somewhat in line with neural mechanisms reported in the literature as "object-based space selection" or with those proposed for visual search. PMID:22300540
Basic instinct undressed: early spatiotemporal processing for primary sexual characteristics.
Legrand, Lore B; Del Zotto, Marzia; Tyrand, Rémi; Pegna, Alan J
2013-01-01
This study investigates the spatiotemporal dynamics associated with conscious and non-conscious processing of naked and dressed human bodies. To this effect, stimuli of naked men and women with visible primary sexual characteristics, as well as dressed bodies, were presented to 20 heterosexual male and female participants while acquiring high resolution EEG data. The stimuli were either consciously detectable (supraliminal presentations) or were rendered non-conscious through backward masking (subliminal presentations). The N1 event-related potential component was significantly enhanced in participants when they viewed naked compared to dressed bodies under supraliminal viewing conditions. More importantly, naked bodies of the opposite sex produced a significantly greater N1 component compared to dressed bodies during subliminal presentations, when participants were not aware of the stimulus presented. A source localization algorithm computed on the N1 showed that the response for naked bodies in the supraliminal viewing condition was stronger in body processing areas, primary visual areas and additional structures related to emotion processing. By contrast, in the subliminal viewing condition, only visual and body processing areas were found to be activated. These results suggest that naked bodies and primary sexual characteristics are processed early in time (i.e., <200 ms) and activate key brain structures even when they are not consciously detected. It appears that, similarly to what has been reported for emotional faces, sexual features benefit from automatic and rapid processing, most likely due to their high relevance for the individual and their importance for the species in terms of reproductive success.
Grubert, Anna; Eimer, Martin
2015-11-11
During the maintenance of task-relevant objects in visual working memory, the contralateral delay activity (CDA) is elicited over the hemisphere opposite to the visual field where these objects are presented. The presence of this lateralised CDA component demonstrates the existence of position-dependent object representations in working memory. We employed a change detection task to investigate whether the represented object locations in visual working memory are shifted in preparation for the known location of upcoming comparison stimuli. On each trial, bilateral memory displays were followed after a delay period by bilateral test displays. Participants had to encode and maintain three visual objects on one side of the memory display, and to judge whether they were identical or different to three objects in the test display. Task-relevant memory and test stimuli were located in the same visual hemifield in the no-shift task, and on opposite sides in the horizontal shift task. CDA components of similar size were triggered contralateral to the memorized objects in both tasks. The absence of a polarity reversal of the CDA in the horizontal shift task demonstrated that there was no preparatory shift of memorized object location towards the side of the upcoming comparison stimuli. These results suggest that visual working memory represents the locations of visual objects during encoding, and that the matching of memorized and test objects at different locations is based on a comparison process that can bridge spatial translations between these objects. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2014 Elsevier B.V. All rights reserved.
Pearman, John K; Anlauf, Holger; Irigoien, Xabier; Carvalho, Susana
2016-07-01
Coral reefs harbor the most diverse assemblages in the ocean; however, a large proportion of this diversity is cryptic and therefore undetected by standard visual census techniques. Cryptic and exposed communities differ considerably in species composition and ecological function. This study compares three different coral reef assessment protocols: i) visual benthic reef surveys; ii) visual census of Autonomous Reef Monitoring Structures (ARMS) plates; and iii) metabarcoding of the ARMS (including sessile, 106-500 μm and 500-2000 μm size fractions), targeting the cryptic and exposed communities of three reefs in the central Red Sea. Visual census showed a dominance of Cnidaria (Anthozoa) and Rhodophyta on the reef substrate, while Porifera, Bryozoa and Rhodophyta were the most abundant groups on the ARMS plates. Metabarcoding, targeting the 18S rRNA gene, significantly increased estimates of species diversity (p < 0.001), revealing that Annelida was generally the dominant phylum (in terms of reads) across all fractions and reefs. Furthermore, metabarcoding detected microbial eukaryotic groups such as Syndiniophyceae, Mamiellophyceae and Bacillariophyceae as relevant components of the sessile fraction. ANOSIM analysis revealed no differences among the three reef sites based on the visual census data. Metabarcoding showed a higher sensitivity than standard visual census techniques for identifying differences between reef communities at smaller geographic scales, as significant differences in the assemblages were observed among the reefs. Comparisons across techniques showed no consistent patterns for the visual methods, while the metabarcoding fractions of the ARMS showed similar patterns to one another. Establishing ARMS as a standard tool in reef monitoring will not only advance our understanding of local processes and of ecological community responses to environmental change, as different faunal components provide complementary information, but will also improve estimates of biodiversity in coral reef benthic communities. This study lays the foundations for further studies aimed at integrating traditional reef survey methodologies with complementary approaches, such as metabarcoding, that investigate other components of the reef community. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Lighten the Load: Scaffolding Visual Literacy in Biochemistry and Molecular Biology.
Offerdahl, Erika G; Arneson, Jessie B; Byrne, Nicholas
2017-01-01
The development of scientific visual literacy has been identified as critical to the training of tomorrow's scientists and citizens alike. Within the context of the molecular life sciences in particular, visual representations frequently incorporate various components, such as discipline-specific graphical and diagrammatic features, varied levels of abstraction, and spatial arrangements of visual elements to convey information. Visual literacy is achieved when an individual understands the various ways in which a discipline uses these components to represent a particular way of knowing. Owing to the complex nature of visual representations, the activities through which visual literacy is developed have high cognitive load. Cognitive load can be reduced by first helping students to become fluent with the discrete components of visual representations before asking them to simultaneously integrate these components to extract the intended meaning of a representation. We present a taxonomy for characterizing one component of visual representations-the level of abstraction-as a first step in understanding the opportunities afforded students to develop fluency. Further, we demonstrate how our taxonomy can be used to analyze course assessments and spur discussions regarding the extent to which the development of visual literacy skills is supported by instruction within an undergraduate biochemistry curriculum. © 2017 E. G. Offerdahl et al. CBE—Life Sciences Education © 2017 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Mangun, G R; Buck, L A
1998-03-01
This study investigated the simple reaction time (RT) and event-related potential (ERP) correlates of biasing attention towards a location in the visual field. RTs and ERPs were recorded to stimuli flashed randomly and with equal probability to the left and right visual hemifields in the three blocked, covert attention conditions: (i) attention divided equally to left and right hemifield locations; (ii) attention biased towards the left location; or (iii) attention biased towards the right location. Attention was biased towards left or right by instructions to the subjects, and responses were required to all stimuli. Relative to the divided attention condition, RTs were significantly faster for targets occurring where more attention was allocated (benefits), and slower to targets where less attention was allocated (costs). The early P1 (100-140 msec) component over the lateral occipital scalp regions showed attentional benefits. There were no amplitude modulations of the occipital N1 (125-180 msec) component with attention. Between 200 and 500 msec latency, a late positive deflection (LPD) showed both attentional costs and benefits. The behavioral findings show that when sufficiently induced to bias attention, human observers demonstrate RT benefits as well as costs. The corresponding P1 benefits suggest that the RT benefits of spatial attention may arise as the result of modulations of visual information processing in the extrastriate visual cortex.
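For concreteness, the cost/benefit logic described above reduces to simple differences against the divided-attention baseline. The reaction-time values below are made up for illustration only.

```python
# Hypothetical mean reaction times (ms); values are illustrative only.
rt_divided  = 420.0   # divided-attention baseline
rt_attended = 385.0   # target at the location receiving more attention
rt_ignored  = 455.0   # target at the location receiving less attention

benefit = rt_divided - rt_attended   # faster than baseline -> attentional benefit
cost    = rt_ignored - rt_divided    # slower than baseline -> attentional cost
print(f"benefit = {benefit:.0f} ms, cost = {cost:.0f} ms")
```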
DeviceEditor visual biological CAD canvas
2012-01-01
Background Biological Computer Aided Design (bioCAD) assists the de novo design and selection of existing genetic components to achieve a desired biological activity, as part of an integrated design-build-test cycle. To meet the emerging needs of Synthetic Biology, bioCAD tools must address the increasing prevalence of combinatorial library design, design rule specification, and scar-less multi-part DNA assembly. Results We report the development and deployment of web-based bioCAD software, DeviceEditor, which provides a graphical design environment that mimics the intuitive visual whiteboard design process practiced in biological laboratories. The key innovations of DeviceEditor include visual combinatorial library design, direct integration with scar-less multi-part DNA assembly design automation, and a graphical user interface for the creation and modification of design specification rules. We demonstrate how biological designs are rendered on the DeviceEditor canvas, and we present effective visualizations of genetic component ordering and combinatorial variations within complex designs. Conclusions DeviceEditor liberates researchers from DNA base-pair manipulation, and enables users to create successful prototypes using standardized, functional, and visual abstractions. Open and documented software interfaces support further integration of DeviceEditor with other bioCAD tools and software platforms. DeviceEditor saves researcher time and institutional resources through correct-by-construction design, the automation of tedious tasks, design reuse, and the minimization of DNA assembly costs. PMID:22373390
Distributed visualization framework architecture
NASA Astrophysics Data System (ADS)
Mishchenko, Oleg; Raman, Sundaresan; Crawfis, Roger
2010-01-01
An architecture for distributed and collaborative visualization is presented. The design goals of the system are to create a lightweight, easy-to-use and extensible framework for research in scientific visualization. The system provides both single-user and collaborative distributed environments. The system architecture employs a client-server model. Visualization projects can be synchronously accessed and modified from different client machines. We present a set of visualization use cases that illustrate the flexibility of our system. The framework provides a rich set of reusable components for creating new applications. These components make heavy use of leading design patterns. All components are based on the functionality of a small set of interfaces. This allows new components to be integrated seamlessly with little to no effort. All user input and higher-level control functionality interface with proxy objects supporting a concrete implementation of these interfaces. These lightweight objects can be easily streamed across the web and even integrated with smart clients running on a user's cell phone. The back-end is supported by concrete implementations wherever needed (for instance, for rendering). A middle tier manages any communication and synchronization with the proxy objects. In addition to the data components, we have developed several first-class GUI components for visualization. These include a layer compositor editor, a programmable shader editor, a material editor and various drawable editors. These GUI components interact strictly with the interfaces. Access to the various entities in the system is provided by an AssetManager. The asset manager keeps track of all of the registered proxies and responds to queries on the overall system. This allows all user components to be populated automatically. Hence, if a new component is added that supports the IMaterial interface, any instances of it can be used in the various GUI components that work with this interface. One of the main features is an interactive shader designer. This allows rapid prototyping of new shader-based visualization renderings and greatly accelerates the development and debug cycle.
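The interface/proxy/AssetManager pattern described above can be illustrated with a short, language-agnostic sketch rendered here in Python; apart from IMaterial and AssetManager, which are named in the abstract, the class and method names are hypothetical.

```python
from abc import ABC, abstractmethod

class IMaterial(ABC):
    """Interface named in the abstract; this Python rendering is hypothetical."""
    @abstractmethod
    def shade(self) -> str: ...

class PhongMaterialProxy(IMaterial):
    """Lightweight proxy; a concrete back-end implementation would do the real work."""
    def shade(self) -> str:
        return "phong"           # placeholder for a remote/back-end call

class AssetManager:
    """Keeps track of registered proxies and answers queries by interface."""
    def __init__(self):
        self._proxies = []
    def register(self, proxy):
        self._proxies.append(proxy)
    def query(self, interface):
        return [p for p in self._proxies if isinstance(p, interface)]

manager = AssetManager()
manager.register(PhongMaterialProxy())
# A GUI component (e.g., a material editor) can now auto-populate its choices:
materials = manager.query(IMaterial)
```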
In-process 3D assessment of micromoulding features
NASA Astrophysics Data System (ADS)
Whiteside, B. R.; Spares, R.; Coates, P. D.
2006-04-01
Micro injection moulding (micromoulding) technology has recently emerged as a viable manufacturing route for polymer, metal and ceramic components with micro-scale features and surface textures. With a cycle time of just a few seconds for the production of a single component, the process offers the capability for mass production of microscale devices at a low marginal cost. However, the extreme stresses, strain rates and temperature gradients characteristic of the process mean that a slight fluctuation in material properties or moulding conditions can have a significant impact on the dimensional or structural properties of the resulting component, and in-line process monitoring is therefore highly desirable. This paper describes the development of an in-process, high-speed 3-dimensional measurement system for the evaluation of every component manufactured during the process. A high-speed camera and microscope lens coupled with a linear stage are used to create a stack of images, which is subsequently processed using extended-depth-of-field techniques to form a virtual 3-dimensional contour of the component. These data can then be used to visually verify the quality of the moulding on-screen, or standard machine-vision algorithms can be employed to allow fully automated quality inspection and filtering of sub-standard products. Good results have been obtained for a range of materials and geometries, and measurement accuracy has been verified through comparison with data obtained using a Wyko NT1100 white light interferometer.
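A minimal sketch of the extended-depth-of-field idea: pick, for each pixel, the focus-stack slice with the highest local sharpness, which simultaneously yields an all-in-focus composite and a height map. This is a generic focus-stacking approach, not the authors' exact algorithm; function names and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import laplace

def extended_depth_of_field(stack, z_step_um=1.0):
    """stack: (n_slices, H, W) focus stack; returns an all-in-focus image and a height map.
    A minimal sharpness-based sketch, not the authors' pipeline."""
    sharpness = np.array([np.abs(laplace(s.astype(float))) for s in stack])
    best = sharpness.argmax(axis=0)                  # sharpest slice per pixel
    rows, cols = np.indices(best.shape)
    composite = stack[best, rows, cols]              # all-in-focus composite
    height_map = best * z_step_um                    # 3D surface estimate (microns)
    return composite, height_map

# Usage with a synthetic 5-slice stack:
stack = np.random.rand(5, 64, 64)
composite, height = extended_depth_of_field(stack, z_step_um=2.0)
```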
Seismpol: a Visual Basic computer program for interactive and automatic earthquake waveform analysis
NASA Astrophysics Data System (ADS)
Patanè, Domenico; Ferrari, Ferruccio
1997-11-01
A Microsoft Visual Basic computer program for waveform analysis of seismic signals is presented. The program combines interactive and automatic processing of digital signals, using data recorded by three-component seismic stations. The analysis procedure can be used either in interactive earthquake analysis or in automatic on-line processing of seismic recordings. The algorithm works in the time domain using the Covariance Matrix Decomposition (CMD) method, so that polarization characteristics may be computed continuously in real time and seismic phases can be identified and discriminated. Visual inspection of the particle motion in orthogonal planes of projection (hodograms) reduces the danger of misinterpretation arising from the application of the polarization filter. The choice of time window and frequency intervals improves the quality of the extracted polarization information. The program uses a band-pass Butterworth filter to process the signals in the frequency domain, decomposing a selected signal window into a series of narrow frequency bands. Significant results, supported by well-defined polarizations and source azimuth estimates for P and S phases, are also obtained for short-period seismic events (local microearthquakes).
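The covariance-matrix polarization analysis can be sketched as follows: for each three-component window, eigen-decompose the covariance matrix and derive rectilinearity and the azimuth/incidence of the dominant eigenvector. This is a simplified illustration of the general CMD idea, not the Seismpol implementation, and the window handling and thresholds are omitted.

```python
import numpy as np

def polarization(z, n, e):
    """Covariance-matrix polarization analysis of one three-component window (Z, N, E)."""
    data = np.vstack([z - z.mean(), n - n.mean(), e - e.mean()])
    cov = np.cov(data)
    w, v = np.linalg.eigh(cov)                 # eigenvalues in ascending order
    l1, l2, l3 = w[2], w[1], w[0]
    rectilinearity = 1.0 - (l2 + l3) / (2.0 * l1)
    principal = v[:, 2]                        # dominant polarization direction (Z, N, E)
    azimuth = np.degrees(np.arctan2(principal[2], principal[1])) % 360.0
    incidence = np.degrees(np.arccos(abs(principal[0])))   # angle from vertical
    return rectilinearity, azimuth, incidence

# Example on synthetic data:
rng = np.random.default_rng(1)
rect, az, inc = polarization(*rng.standard_normal((3, 512)))
print(rect, az, inc)
```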
Some Components of Geometric Knowledge of Future Elementary School Teachers
ERIC Educational Resources Information Center
Debrenti, Edith
2016-01-01
Geometric experience, spatial representation, spatial visualization, understanding the world around us, and developing the ability of spatial reasoning are fundamental aims in the teaching of mathematics. (Freudenthal, 1972) Learning is a process which involves advancing from level to level. In primary school the focus is on the first two levels…
ERIC Educational Resources Information Center
Bussey, Thomas J.; Orgill, MaryKay
2015-01-01
Biochemistry instructors often use external representations--ranging from static diagrams to dynamic animations and from simplistic, stylized illustrations to more complex, realistic presentations--to help their students visualize abstract cellular and molecular processes, mechanisms, and components. However, relatively little is known about how…
NASA Astrophysics Data System (ADS)
Christensen, C.; Liu, S.; Scorzelli, G.; Lee, J. W.; Bremer, P. T.; Summa, B.; Pascucci, V.
2017-12-01
The creation, distribution, analysis, and visualization of large spatiotemporal datasets is a growing challenge for the study of climate and weather phenomena, in which increasingly massive domains are used to resolve finer features, resulting in datasets that are simply too large to be shared effectively. Existing workflows typically consist of pipelines of independent processes that preclude many possible optimizations. As data sizes increase, these pipelines are difficult or impossible to execute interactively and instead simply run as large offline batch processes. Rather than limiting our conceptualization of such systems to pipelines (or dataflows), we propose a new model for interactive data analysis and visualization systems. In this model, we comprehensively consider the processes involved from data inception through analysis and visualization, so that systems composed of these processes can be described in a manner that facilitates interactive implementations of the entire system rather than of only a particular component. We demonstrate the application of this new model with the implementation of an interactive system that supports progressive execution of arbitrary user scripts for the analysis and visualization of massive, disparately located climate data ensembles. It is currently in operation as part of the Earth System Grid Federation server running at Lawrence Livermore National Lab, and is accessible through both web-based and desktop clients. Our system facilitates interactive analysis and visualization of massive remote datasets up to petabytes in size, such as the 3.5 PB 7 km NASA GEOS-5 Nature Run simulation, previously possible only offline or at reduced resolution. To support the community, we have enabled general distribution of our application using public frameworks including Docker and Anaconda.
Modeling interdependencies between business and communication processes in hospitals.
Brigl, Birgit; Wendt, Thomas; Winter, Alfred
2003-01-01
The optimization and redesign of business processes in hospitals is an important challenge for hospital information management, which has to design and implement a suitable HIS architecture. Nevertheless, there are no tools available that specialize in modeling information-driven business processes and their consequences for the communication between information-processing tools. Therefore, we present an approach that facilitates the representation and analysis of business processes and of the resulting communication processes between application components, together with their interdependencies. This approach aims not only to visualize those processes, but also to evaluate whether there are weaknesses in the information-processing infrastructure that hinder the smooth implementation of the business processes.
Power saver circuit for audio/visual signal unit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Right, R. W.
1985-02-12
A combined audio and visual signal unit with the audio and visual components actuated alternately and powered over a single cable pair in such a manner that only one of the audio and visual components is drawing power from the power supply at any given instant. Thus, the power supply is never called upon to provide more energy than that drawn by the one of the components having the greater power requirement. This is particularly advantageous when several combined audio and visual signal units are coupled in parallel on one cable pair. Typically, the signal unit may comprise a horn and a strobe light for a fire alarm signalling system.
Stapleton, Tadhg; Connelly, Deirdre
2010-01-01
Practice in the area of predriving assessment for people with stroke varies, and research findings are not always easily transferred into the clinical setting, particularly when such assessment is not conducted within a dedicated driver assessment programme. This article explores the clinical predriving assessment practices and recommendations of a group of Irish occupational therapists for people with stroke. A consensus meeting of occupational therapists was facilitated using a nominal group technique (NGT) to identify specific components of cognition, perception, and executive function that may influence fitness to return to driving and should be assessed prior to referral for on-road evaluation. Standardised assessments for use in predriving assessment were recommended. Thirteen occupational therapists took part and identified speed of processing; perceptual components of spatial awareness, depth perception, and visual inattention; and executive components of planning, problem solving, judgment, and self-awareness. Consensus emerged for the use of the following standardised tests: Behavioural Assessment of Dysexecutive Syndrome (BADS), Test of Everyday Attention (TEA), Brain Injury Visual Assessment Battery for Adults (biVABA), Rivermead Perceptual Assessment Battery (RPAB), and Motor Free Visual Perceptual Test (MVPT). Tests were recommended that gave an indication of the patient's underlying component skills in the area of cognition, perception, and executive functions considered important for driving. Further research is needed in this area to develop clinical practice guidelines for occupational therapists for the assessment of fitness to return to driving after stroke.
Portella, Claudio; Machado, Sergio; Arias-Carrión, Oscar; Sack, Alexander T.; Silva, Julio Guilherme; Orsini, Marco; Leite, Marco Antonio Araujo; Silva, Adriana Cardoso; Nardi, Antonio E.; Cagy, Mauricio; Piedade, Roberto; Ribeiro, Pedro
2012-01-01
The brain is capable of elaborating and executing different stages of information processing. However, exactly how these stages are processed in the brain remains largely unknown. This study aimed to analyze the possible correlation between early and late stages of information processing by assessing the latency and amplitude of early and late event-related potential (ERP) components, including the P200, N200, premotor potential (PMP) and P300, in healthy participants in the context of a visual oddball paradigm. We found a moderate positive correlation among the latencies of the P200 (electrode O2), N200 (electrode O2), PMP (electrode C3), and P300 (electrode PZ) and the reaction time (RT). In addition, a moderate negative correlation was found between the amplitude of the P200 and the latencies of the N200 (electrode O2), PMP (electrode C3), and P300 (electrode PZ). Therefore, we propose that if the secondary processing of visual input (P200 latency) occurs faster, the following will also happen sooner: the discrimination and classification of this input (N200 latency), motor response processing (PMP latency), the reorganization of attention and updating of working memory (P300 latency), and the RT. N200, PMP, and P300 latencies are also shorter when the activation level of occipital areas involved in the secondary processing of visual input (P200 amplitude) is higher. PMID:23355929
NASA Astrophysics Data System (ADS)
Becker, T.; König, G.
2015-10-01
Cartographic visualizations of crises are used to create a Common Operational Picture (COP) and to enforce Situational Awareness by presenting relevant information to the involved actors. As nearly all crises affect geospatial entities, geo-data representations have to support location-specific analysis throughout the decision-making process. Meaningful cartographic presentation is needed for coordinating the activities of crisis managers in a highly dynamic situation, since operators' attention span and spatial memory are limiting factors during the perception and interpretation process. Situational Awareness of operators, in conjunction with a COP, is a key aspect of the decision-making process and essential for making well-thought-out and appropriate decisions. Although utility networks are among the most complex and most frequently required systems in the urban environment, meaningful cartographic presentations of multiple utility networks with respect to disaster management do not exist. Therefore, an optimized visualization of utility infrastructure for emergency response procedures is proposed. The article describes a conceptual approach to simplifying, aggregating, and visualizing multiple utility networks and their components to meet the requirements of the decision-making process and to support Situational Awareness.
Dickson, Danielle S.; Federmeier, Kara D.
2015-01-01
Differences in how the right and left hemispheres (RH, LH) apprehend visual words were examined using event-related potentials (ERPs) in a repetition paradigm with visual half-field (VF) presentation. In both hemispheres (RH/LVF, LH/RVF), initial presentation of items elicited similar and typical effects of orthographic neighborhood size, with larger N400s for orthographically regular items (words and pseudowords) than for irregular items (acronyms and meaningless illegal strings). However, hemispheric differences emerged on repetition effects. When items were repeated in the LH/RVF, orthographically regular items, relative to irregular items, elicited larger repetition effects on both the N250, a component reflecting processing at the level of visual form (orthography), and on the N400, which has been linked to semantic access. In contrast, in the RH/LVF, repetition effects were biased toward irregular items on the N250 and were similar in size across item types for the N400. The results suggest that processing in the LH is more strongly affected by wordform regularity than in the RH, either due to enhanced processing of familiar orthographic patterns or due to the fact that regular forms can be more readily mapped onto phonology. PMID:25278134
The control of attentional target selection in a colour/colour conjunction task.
Berggren, Nick; Eimer, Martin
2016-11-01
To investigate the time course of attentional object selection processes in visual search tasks where targets are defined by a combination of features from the same dimension, we measured the N2pc component as an electrophysiological marker of attentional object selection during colour/colour conjunction search. In Experiment 1, participants searched for targets defined by a combination of two colours, while ignoring distractor objects that matched only one of these colours. Reliable N2pc components were triggered by targets and also by partially matching distractors, even when these distractors were accompanied by a target in the same display. The target N2pc was initially equal in size to the sum of the two N2pc components to the two different types of partially matching distractors and became superadditive from approximately 250 ms after search display onset. Experiment 2 demonstrated that the superadditivity of the target N2pc was not due to a selective disengagement of attention from task-irrelevant partially matching distractors. These results indicate that attention was initially deployed separately and in parallel to all target-matching colours, before attentional allocation processes became sensitive to the presence of both matching colours within the same object. They suggest that attention can be controlled simultaneously and independently by multiple features from the same dimension and that feature-guided attentional selection processes operate in parallel for different target-matching objects in the visual field.
ERIC Educational Resources Information Center
Letyagin, Alexander
2015-01-01
The article deals with the problems of content and technological modernization that arise in the process of transition from the information paradigm of education to the activity one. A combined training model of class teaching using information, practice-based activity and visual components is offered as an example and a result of innovative…
LONI visualization environment.
Dinov, Ivo D; Valentino, Daniel; Shin, Bae Cheol; Konstantinidis, Fotios; Hu, Guogang; MacKenzie-Graham, Allan; Lee, Erh-Fang; Shattuck, David; Ma, Jeff; Schwartz, Craig; Toga, Arthur W
2006-06-01
Over the past decade, the use of informatics to solve complex neuroscientific problems has increased dramatically. Many of these research endeavors involve examining large amounts of imaging, behavioral, genetic, neurobiological, and neuropsychiatric data. Superimposing, processing, visualizing, or interpreting such a complex cohort of datasets frequently becomes a challenge. We developed a new software environment that allows investigators to integrate multimodal imaging data, hierarchical brain ontology systems, on-line genetic and phylogenic databases, and 3D virtual data reconstruction models. The Laboratory of Neuro Imaging visualization environment (LONI Viz) consists of the following components: a sectional viewer for imaging data, an interactive 3D display for surface and volume rendering of imaging data, a brain ontology viewer, and an external database query system. The synchronization of all components according to stereotaxic coordinates, region name, hierarchical ontology, and genetic labels is achieved via a comprehensive BrainMapper functionality, which maps directly between position, structure name, database, and functional connectivity information. This environment is freely available, portable, and extensible, and may prove very useful for neurobiologists, neurogeneticists, brain mappers, and other clinical, pedagogical, and research endeavors.
Functional significance of the emotion-related late positive potential
Brown, Stephen B. R. E.; van Steenbergen, Henk; Band, Guido P. H.; de Rover, Mischa; Nieuwenhuis, Sander
2012-01-01
The late positive potential (LPP) is an event-related potential (ERP) component over visual cortical areas that is modulated by the emotional intensity of a stimulus. However, the functional significance of this neural modulation remains elusive. We conducted two experiments in which we studied the relation between LPP amplitude, subsequent perceptual sensitivity to a non-emotional stimulus (Experiment 1) and visual cortical excitability, as reflected by P1/N1 components evoked by this stimulus (Experiment 2). During the LPP modulation elicited by unpleasant stimuli, perceptual sensitivity was not affected. In contrast, we found some evidence for a decreased N1 amplitude during the LPP modulation, a decreased P1 amplitude on trials with a relatively large LPP, and consistent negative (but non-significant) across-subject correlations between the magnitudes of the LPP modulation and corresponding changes in d-prime or P1/N1 amplitude. The results provide preliminary evidence that the LPP reflects a global inhibition of activity in visual cortex, resulting in the selective survival of activity associated with the processing of the emotional stimulus. PMID:22375117
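Perceptual sensitivity in such designs is commonly quantified as d-prime, computed from hit and false-alarm rates under standard signal detection theory. A minimal sketch follows; the trial counts are purely illustrative and are not data from the study.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Perceptual sensitivity d' from trial counts (standard signal detection theory)."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Illustrative counts only:
print(d_prime(hits=38, misses=12, false_alarms=9, correct_rejections=41))
```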
Daffner, Kirk R; Alperin, Brittany R; Mott, Katherine K; Holcomb, Phillip J
2014-01-22
Older adults exhibit diminished ability to inhibit the processing of visual stimuli that are supposed to be ignored. The extent to which age-related changes in early visual processing contribute to impairments in selective attention remains to be determined. Here, 103 adults, 18-85 years of age, completed a color selective attention task in which they were asked to attend to a specified color and respond to designated target letters. An optimal approach would be to initially filter according to color and then process letter forms in the attend color to identify targets. An asymmetric N170 ERP component (larger amplitude over left posterior hemisphere sites) was used as a marker of the early automatic processing of letter forms. Young and middle-aged adults did not generate an asymmetric N170 component. In contrast, young-old and old-old adults produced a larger N170 over the left hemisphere. Furthermore, older adults generated a larger N170 to letter than nonletter stimuli over the left, but not right hemisphere. More asymmetric N170 responses predicted greater allocation of late selection resources to target letters in the ignore color, as indexed by P3b amplitude. These results suggest that unlike their younger counterparts, older adults automatically process stimuli as letters early in the selection process, when it would be more efficient to attend to color only. The inability to ignore letters early in the processing stream helps explain the age-related increase in subsequent processing of target letter forms presented in the ignore color.
Keysers, C; Xiao, D-K; Foldiak, P; Perrett, D I
2005-05-01
Iconic memory, the short-lasting visual memory of a briefly flashed stimulus, is an important component of most models of visual perception. Here we investigate what physiological mechanisms underlie this capacity by showing rapid serial visual presentation (RSVP) sequences with and without interstimulus gaps to human observers and macaque monkeys. For gaps of up to 93 ms between consecutive images, human observers and neurones in the temporal cortex of macaque monkeys were found to continue processing a stimulus as if it was still present on the screen. The continued firing of neurones in temporal cortex may therefore underlie iconic memory. Based on these findings, a neurophysiological vision of iconic memory is presented.
Szécsi, László; Kacsó, Ágota; Zeck, Günther; Hantz, Péter
2017-01-01
Light stimulation with precise and complex spatial and temporal modulation is demanded by a range of research fields, including visual neuroscience, optogenetics, ophthalmology, and visual psychophysics. We developed a user-friendly and flexible stimulus-generating framework (GEARS: GPU-based Eye And Retina Stimulation Software), which offers access to GPU computing power and allows interactive modification of stimulus parameters during experiments. Furthermore, it has built-in support for driving external equipment, as well as for synchronization tasks, via USB ports. The use of GEARS does not require elaborate programming skills. The necessary scripting is visually aided by an intuitive interface, while the details of the underlying software and hardware components remain hidden. Internally, the software is a C++/Python hybrid using OpenGL graphics. Computations are performed on the GPU and are defined in the GLSL shading language. However, all GPU settings, including the GPU shader programs, are automatically generated by GEARS. This is configured through a method encountered in game programming, which allows high flexibility: stimuli are straightforwardly composed using a broad library of basic components. Stimulus rendering is implemented solely in C++, so intermediary libraries for interfacing could be omitted. This enables the program to perform computationally demanding tasks such as en masse random number generation or real-time image processing by local and global operations. PMID:29326579
Spatio-Chromatic Adaptation via Higher-Order Canonical Correlation Analysis of Natural Images
Gutmann, Michael U.; Laparra, Valero; Hyvärinen, Aapo; Malo, Jesús
2014-01-01
Independent component and canonical correlation analysis are two general-purpose statistical methods with wide applicability. In neuroscience, independent component analysis of chromatic natural images explains the spatio-chromatic structure of primary cortical receptive fields in terms of properties of the visual environment. Canonical correlation analysis explains similarly chromatic adaptation to different illuminations. But, as we show in this paper, neither of the two methods generalizes well to explain both spatio-chromatic processing and adaptation at the same time. We propose a statistical method which combines the desirable properties of independent component and canonical correlation analysis: It finds independent components in each data set which, across the two data sets, are related to each other via linear or higher-order correlations. The new method is as widely applicable as canonical correlation analysis, and also to more than two data sets. We call it higher-order canonical correlation analysis. When applied to chromatic natural images, we found that it provides a single (unified) statistical framework which accounts for both spatio-chromatic processing and adaptation. Filters with spatio-chromatic tuning properties as in the primary visual cortex emerged and corresponding-colors psychophysics was reproduced reasonably well. We used the new method to make a theory-driven testable prediction on how the neural response to colored patterns should change when the illumination changes. We predict shifts in the responses which are comparable to the shifts reported for chromatic contrast habituation. PMID:24533049
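As background to the abstract above, classical (linear) CCA can be computed from an SVD of the whitened cross-covariance between the two data sets; the higher-order variant proposed by the authors is not reproduced here. A minimal sketch on random data, under the assumption of more samples than dimensions:

```python
import numpy as np

def cca(X, Y, k=2):
    """Classical linear CCA: X (n, p), Y (n, q) -> k pairs of canonical directions.
    Background sketch only; the paper's higher-order variant is not implemented here."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)

    def whitener(Z):
        """Inverse square root of the sample covariance (symmetric whitening)."""
        C = np.cov(Z, rowvar=False)
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ V.T

    Wx, Wy = whitener(X), whitener(Y)
    K = Wx @ (X.T @ Y / len(X)) @ Wy           # whitened cross-covariance
    U, s, Vt = np.linalg.svd(K)
    A = Wx @ U[:, :k]                           # canonical directions for X
    B = Wy @ Vt[:k].T                           # canonical directions for Y
    return A, B, s[:k]                          # s: canonical correlations

# Example on random data:
rng = np.random.default_rng(0)
A, B, corr = cca(rng.standard_normal((500, 6)), rng.standard_normal((500, 4)))
```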
Early and late selection processes have separable influences on the neural substrates of attention.
Drisdelle, Brandi Lee; Jolicoeur, Pierre
2018-05-01
To improve our understanding of the mechanisms of target selection, we examined how the spatial separation of salient items and their similarity to a pre-defined target interact using lateralised electrophysiological correlates of visual spatial attention (N2pc component) and visual short-term memory (VSTM; SPCN component). Using these features of target selection, we sought to expand on previous work proposing a model of early and late selection, where the N2pc is suggested to reflect the selection probability of visual stimuli (Aubin and Jolicoeur, 2016). The authors suggested that early-selection processes could be enhanced when items are adjacent. In the present work, the stimuli were short oriented lines, all of which were grey except for two that were blue and hence salient. A decrease in N2pc amplitude with decreasing spatial separation between salient items was observed. The N2pc increased in amplitude with increasing similarity of salient distractors to the target template, but only in target-absent trials. There was no interaction between these two factors, suggesting that separable attentional mechanisms influenced the N2pc. The findings suggest that selection is initially based on easily-distinguished attributes (i.e., both blue items) followed by a later identification-based process (if necessary), which depends on feature similarity to a target template. For the SPCN component, the results were in line with previous work: for target-present trials, an increase in similarity of salient distractors was associated with an increase in SPCN amplitude, suggesting more information was maintained in VSTM. In sum, results suggest there is a need for further inspection of salient distractors when they are similar to the target, increasing the need for focal attention, demonstrated by an increase in N2pc amplitude, followed by a higher probability of transfer to VSTM, demonstrated by an increase in SPCN amplitude. Copyright © 2018 Elsevier B.V. All rights reserved.
Sewell, David K; Lilburn, Simon D; Smith, Philip L
2016-11-01
A central question in working memory research concerns the degree to which information in working memory is accessible to other cognitive processes (e.g., decision-making). Theories assuming that the focus of attention can only store a single object at a time require the focus to orient to a target representation before further processing can occur. The need to orient the focus of attention implies that single-object accounts typically predict response time costs associated with object selection even when working memory is not full (i.e., memory load is less than 4 items). For other theories that assume storage of multiple items in the focus of attention, predictions depend on specific assumptions about the way resources are allocated among items held in the focus, and how this affects the time course of retrieval of items from the focus. These broad theoretical accounts have been difficult to distinguish because conventional analyses fail to separate components of empirical response times related to decision-making from components related to selection and retrieval processes associated with accessing information in working memory. To better distinguish these response time components from one another, we analyze data from a probed visual working memory task using extensions of the diffusion decision model. Analysis of model parameters revealed that increases in memory load resulted in (a) reductions in the quality of the underlying stimulus representations in a manner consistent with a sample size model of visual working memory capacity and (b) systematic increases in the time needed to selectively access a probed representation in memory. The results are consistent with single-object theories of the focus of attention. The results are also consistent with a subset of theories that assume a multiobject focus of attention in which resource allocation diminishes both the quality and accessibility of the underlying representations. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
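The diffusion-model analysis described above can be illustrated with a simple simulation in which memory load lowers the drift rate (representation quality) and lengthens the non-decision time (selective access to the probed item). All parameter values are illustrative, not the fitted estimates from the study.

```python
import numpy as np

def simulate_ddm(drift, boundary=1.0, non_decision=0.3, noise=1.0,
                 dt=0.001, n_trials=500, seed=0):
    """Simulate a simple two-boundary diffusion decision model.
    Returns response times (s) and choices."""
    rng = np.random.default_rng(seed)
    rts, choices = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + non_decision)      # decision time + residual (encoding/retrieval)
        choices.append(x > 0)
    return np.array(rts), np.array(choices)

# Lower drift (poorer representation quality) and longer non-decision time
# (slower access to the probed item) both lengthen simulated RTs:
rt_low_load, _ = simulate_ddm(drift=1.5, non_decision=0.30)
rt_high_load, _ = simulate_ddm(drift=0.8, non_decision=0.38)
print(rt_low_load.mean(), rt_high_load.mean())
```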
Neural correlates of emotional intelligence in a visual emotional oddball task: an ERP study.
Raz, Sivan; Dan, Orrie; Zysberg, Leehu
2014-11-01
The present study was aimed at identifying potential behavioral and neural correlates of Emotional Intelligence (EI) by using scalp-recorded Event-Related Potentials (ERPs). EI levels were defined according to both self-report questionnaire and a performance-based ability test. We identified ERP correlates of emotional processing by using a visual-emotional oddball paradigm, in which subjects were confronted with one frequent standard stimulus (a neutral face) and two deviant stimuli (a happy and an angry face). The effects of these faces were then compared across groups with low and high EI levels. The ERP results indicate that participants with high EI exhibited significantly greater mean amplitudes of the P1, P2, N2, and P3 ERP components in response to emotional and neutral faces, at frontal, posterior-parietal and occipital scalp locations. P1, P2 and N2 are considered indexes of attention-related processes and have been associated with early attention to emotional stimuli. The later P3 component has been thought to reflect more elaborative, top-down, emotional information processing including emotional evaluation and memory encoding and formation. These results may suggest greater recruitment of resources to process all emotional and non-emotional faces at early and late processing stages among individuals with higher EI. The present study underscores the usefulness of ERP methodology as a sensitive measure for the study of emotional stimuli processing in the research field of EI. Copyright © 2014 Elsevier Inc. All rights reserved.
Virtual hydrology observatory: an immersive visualization of hydrology modeling
NASA Astrophysics Data System (ADS)
Su, Simon; Cruz-Neira, Carolina; Habib, Emad; Gerndt, Andreas
2009-02-01
The Virtual Hydrology Observatory will provide students with the ability to observe an integrated hydrology simulation through an instructional interface, using either a desktop-based or an immersive virtual reality setup. The goal of the Virtual Hydrology Observatory application is to facilitate the introduction of field experience and observational skills into hydrology courses through innovative virtual techniques that mimic activities during actual field visits. The simulation part of the application is developed from the integrated atmospheric forecast model Weather Research and Forecasting (WRF) and the hydrology model Gridded Surface/Subsurface Hydrologic Analysis (GSSHA). The output from both the WRF and GSSHA models is then used to generate the final visualization components of the Virtual Hydrology Observatory. The visualization data are processed using techniques provided by VTK, including 2D Delaunay triangulation and data optimization. Once all the visualization components are generated, they are integrated with the simulation data using VRFlowVis and the VR Juggler software toolkit. VR Juggler is used primarily to provide the Virtual Hydrology Observatory application with a fully immersive, real-time 3D interaction experience, while VRFlowVis provides the integration framework for the hydrologic simulation data, graphical objects and user interaction. A six-sided CAVE-like system is used to run the Virtual Hydrology Observatory and provide students with a fully immersive experience.
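For reference, the 2D Delaunay triangulation step mentioned above is available directly in VTK. A minimal sketch using the VTK Python bindings; the point coordinates are arbitrary placeholders, and any connection to the WRF/GSSHA outputs is omitted.

```python
import vtk

# Build a small point cloud (x, y, elevation) and triangulate it in the x-y plane.
points = vtk.vtkPoints()
for x, y, z in [(0, 0, 1.0), (1, 0, 1.2), (0, 1, 0.8), (1, 1, 1.1), (0.5, 0.5, 1.5)]:
    points.InsertNextPoint(x, y, z)

polydata = vtk.vtkPolyData()
polydata.SetPoints(points)

delaunay = vtk.vtkDelaunay2D()      # triangulates in the x-y plane, keeping z values
delaunay.SetInputData(polydata)
delaunay.Update()

surface = delaunay.GetOutput()
print(surface.GetNumberOfCells(), "triangles")
```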
The correlation dimension: a useful objective measure of the transient visual evoked potential?
Boon, Mei Ying; Henry, Bruce I; Suttle, Catherine M; Dain, Stephen J
2008-01-14
Visual evoked potentials (VEPs) may be analyzed by examination of the morphology of their components, such as negative (N) and positive (P) peaks. However, methods that rely on component identification may be unreliable when dealing with responses of complex and variable morphology; therefore, objective methods are also useful. One potentially useful measure of the VEP is the correlation dimension. Its relevance to the visual system was investigated by examining its behavior when applied to the transient VEP in response to a range of chromatic contrasts (42%, two times psychophysical threshold, at psychophysical threshold) and to the visually unevoked response (zero contrast). Tests of nonlinearity (e.g., surrogate testing) were conducted. The correlation dimension was found to be negatively correlated with a stimulus property (chromatic contrast) and a known linear measure (the Fourier-derived VEP amplitude). It was also found to be related to visibility and perception of the stimulus such that the dimension reached a maximum for most of the participants at psychophysical threshold. The latter suggests that the correlation dimension may be useful as a diagnostic parameter to estimate psychophysical threshold and may find application in the objective screening and monitoring of congenital and acquired color vision deficiencies, with or without associated disease processes.
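The correlation dimension referred to above is conventionally estimated with the Grassberger-Procaccia procedure: delay-embed the signal, compute the correlation sum C(r), and take the slope of log C against log r. A minimal sketch of that general method follows; the embedding parameters and test signal are illustrative, not the authors' settings.

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(signal, embed_dim=5, delay=2, radii=None):
    """Grassberger-Procaccia estimate of the correlation dimension of a 1-D signal."""
    n = len(signal) - (embed_dim - 1) * delay
    # Delay embedding: each row is one reconstructed state vector
    X = np.column_stack([signal[i * delay: i * delay + n] for i in range(embed_dim)])
    dists = pdist(X)
    if radii is None:
        radii = np.logspace(np.log10(dists[dists > 0].min()),
                            np.log10(dists.max()), 20)[3:-3]
    # Correlation sum C(r): fraction of state-vector pairs closer than r
    C = np.array([(dists <= r).mean() for r in radii])
    good = C > 0
    slope, _ = np.polyfit(np.log(radii[good]), np.log(C[good]), 1)  # dimension estimate
    return slope

rng = np.random.default_rng(0)
test = np.sin(np.linspace(0, 60, 2000)) + 0.01 * rng.standard_normal(2000)
print(correlation_dimension(test))
```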
Sample to answer visualization pipeline for low-cost point-of-care blood cell counting
NASA Astrophysics Data System (ADS)
Smith, Suzanne; Naidoo, Thegaran; Davies, Emlyn; Fourie, Louis; Nxumalo, Zandile; Swart, Hein; Marais, Philip; Land, Kevin; Roux, Pieter
2015-03-01
We present a visualization pipeline from sample to answer for point-of-care blood cell counting applications. Effective and low-cost point-of-care medical diagnostic tests provide developing countries and rural communities with accessible healthcare solutions [1], and can be particularly beneficial for blood cell count tests, which are often the starting point in the process of diagnosing a patient [2]. The initial focus of this work is on total white and red blood cell counts, using a microfluidic cartridge [3] for sample processing. Analysis of the processed samples has been implemented by means of two main optical visualization systems developed in-house: 1) a fluidic operation analysis system using high-speed video data to determine volumes, mixing efficiency and flow rates, and 2) a microscopy analysis system to investigate the homogeneity and concentration of blood cells. Fluidic parameters were derived from the optical flow [4] as well as from color-based segmentation of the different fluids using a hue-saturation-value (HSV) color space. Cell count estimates were obtained using automated microscopy analysis and were compared to a widely accepted manual method for cell counting using a hemocytometer [5]. The results using the first-iteration microfluidic device [3] showed that the simplest, and thus lowest-cost, approach to implementing the microfluidic components was not adequate when compared with techniques based on manual cell counting principles. An improved microfluidic design has been developed to incorporate enhanced mixing and metering components, which, together with this work, provides the foundation on which to successfully implement automated, rapid and low-cost blood cell counting tests.
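The HSV-based segmentation step can be sketched with standard OpenCV calls: convert to HSV, threshold a colour range, clean up the mask, and count connected components above a minimum area. The threshold values, minimum area, and file name below are illustrative assumptions, not the calibrated pipeline.

```python
import cv2
import numpy as np

def count_cells(image_bgr, hsv_lo, hsv_hi, min_area=30):
    """Rough cell count via HSV colour segmentation and connected components."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    # Label 0 is the background; keep blobs above a minimum area
    return sum(1 for i in range(1, n_labels) if stats[i, cv2.CC_STAT_AREA] >= min_area)

# Usage (hypothetical purple-stained white cells in a hypothetical image file):
# count = count_cells(cv2.imread("field_of_view.png"), (120, 60, 60), (160, 255, 255))
```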
Object-based target templates guide attention during visual search.
Berggren, Nick; Eimer, Martin
2018-05-03
During visual search, attention is believed to be controlled in a strictly feature-based fashion, without any guidance by object-based target representations. To challenge this received view, we measured electrophysiological markers of attentional selection (N2pc component) and working memory (sustained posterior contralateral negativity; SPCN) in search tasks where two possible targets were defined by feature conjunctions (e.g., blue circles and green squares). Critically, some search displays also contained nontargets with two target features (incorrect conjunction objects, e.g., blue squares). Because feature-based guidance cannot distinguish these objects from targets, any selective bias for targets will reflect object-based attentional control. In Experiment 1, where search displays always contained only one object with target-matching features, targets and incorrect conjunction objects elicited identical N2pc and SPCN components, demonstrating that attentional guidance was entirely feature-based. In Experiment 2, where targets and incorrect conjunction objects could appear in the same display, clear evidence for object-based attentional control was found. The target N2pc became larger than the N2pc to incorrect conjunction objects from 250 ms poststimulus, and only targets elicited SPCN components. This demonstrates that after an initial feature-based guidance phase, object-based templates are activated when they are required to distinguish target and nontarget objects. These templates modulate visual processing and control access to working memory, and their activation may coincide with the start of feature integration processes. Results also suggest that while multiple feature templates can be activated concurrently, only a single object-based target template can guide attention at any given time. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
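For context, an N2pc-style measure is commonly quantified as the contralateral-minus-ipsilateral amplitude at posterior electrodes in a window around 200-300 ms. The sketch below illustrates that generic computation on toy waveforms; the electrode pair, time window, and sampling rate are assumptions, not the parameters of this study.

# Generic contra-minus-ipsi computation of the kind used to quantify the N2pc.
# Electrode pair, time window, and sampling rate are assumptions.
import numpy as np

fs = 500                                   # Hz (assumed)
times = np.arange(-0.1, 0.6, 1 / fs)       # epoch from -100 to 600 ms

def n2pc_amplitude(po7, po8, target_side, window=(0.2, 0.3)):
    """po7/po8: averaged ERP waveforms (microvolts); target_side: 'left' or 'right'."""
    contra, ipsi = (po8, po7) if target_side == "left" else (po7, po8)
    sel = (times >= window[0]) & (times <= window[1])
    return (contra[sel] - ipsi[sel]).mean()

# Toy waveforms: a small negativity added contralaterally around 250 ms.
po7 = np.zeros_like(times)
po8 = np.zeros_like(times) - 1.5 * np.exp(-((times - 0.25) ** 2) / (2 * 0.02 ** 2))
print(n2pc_amplitude(po7, po8, target_side="left"))   # negative value ~ N2pc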
Integration of genomic and medical data into a 3D atlas of human anatomy.
Turinsky, Andrei L; Fanea, Elena; Trinh, Quang; Dong, Xiaoli; Stromer, Julie N; Shu, Xueling; Wat, Stephen; Hallgrímsson, Benedikt; Hill, Jonathan W; Edwards, Carol; Grosenick, Brenda; Yajima, Masumi; Sensen, Christoph W
2008-01-01
We have developed a framework for the visual integration and exploration of multi-scale biomedical data, which includes anatomical and molecular components. We have also created a Java-based software system that integrates molecular information, such as gene expression data, into a three-dimensional digital atlas of the male adult human anatomy. Our atlas is structured according to the Terminologia Anatomica. The underlying data-indexing mechanism uses open standards and semantic ontology-processing tools to establish the associations between heterogeneous data types. The software system makes extensive use of virtual reality visualization.
Lam, Philippe; Stern, Al
2010-01-01
We developed several techniques for visualizing the fit between a stopper and a vial in the critical flange area, a location typically hidden from view. Using these tools, it is possible to identify surfaces involved in forming the initial seal immediately after stopper insertion. We present examples illustrating important design elements that can contribute to forming a robust primary package. These techniques can also be used for component screening by facilitating the identification of combinations that do not fit well together so that they can be eliminated early in the selection process.
Principal components colour display of ERTS imagery
NASA Technical Reports Server (NTRS)
Taylor, M. M.
1974-01-01
In the technique presented, colours are not derived from single bands, but rather from independent linear combinations of the bands. Using a simple model of the processing done by the visual system, three informationally independent linear combinations of the four ERTS bands are mapped onto the three visual colour dimensions of brightness, redness-greenness and blueness-yellowness. The technique permits user-specific transformations which enhance particular features, but this is not usually needed, since a single transformation provides a picture which conveys much of the information implicit in the ERTS data. Examples of experimental vector images with matched individual band images are shown.
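A minimal sketch of the general idea, deriving three informationally independent linear combinations of four spectral bands via principal components and packing them into display channels, is shown below; the specific mapping to brightness and opponent-colour axes and the scaling are illustrative assumptions, not the transformation reported in the paper.

# Illustrative principal-components colour mapping for a 4-band image.
# The mapping of components to brightness / red-green / blue-yellow axes
# and the scaling are assumptions, not the paper's transformation.
import numpy as np

bands = np.random.rand(256, 256, 4)                 # stand-in for four ERTS bands
flat = bands.reshape(-1, 4)
flat = flat - flat.mean(axis=0)

# Principal components = eigenvectors of the band covariance matrix.
cov = np.cov(flat, rowvar=False)
evals, evecs = np.linalg.eigh(cov)
order = np.argsort(evals)[::-1]
pcs = flat @ evecs[:, order[:3]]                    # three independent combinations

def rescale(v):                                     # map each component to [0, 1]
    return (v - v.min()) / (v.max() - v.min() + 1e-12)

pc1, pc2, pc3 = (rescale(pcs[:, i]) for i in range(3))
# PC1 -> brightness, PC2 -> red-green, PC3 -> blue-yellow, roughly in the spirit
# of an opponent-colour display; here simply packed into three display channels.
display = np.stack([pc1, pc2, pc3], axis=1).reshape(256, 256, 3)
print(display.shape)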
Evidence for two attentional components in visual working memory.
Allen, Richard J; Baddeley, Alan D; Hitch, Graham J
2014-11-01
How does executive attentional control contribute to memory for sequences of visual objects, and what does this reveal about storage and processing in working memory? Three experiments examined the impact of a concurrent executive load (backward counting) on memory for sequences of individually presented visual objects. Experiments 1 and 2 found disruptive concurrent load effects of equivalent magnitude on memory for shapes, colors, and colored shape conjunctions (as measured by single-probe recognition). These effects were present only for Items 1 and 2 in a 3-item sequence; the final item was always impervious to this disruption. This pattern of findings was precisely replicated in Experiment 3 when using a cued verbal recall measure of shape-color binding, with error analysis providing additional insights concerning attention-related loss of early-sequence items. These findings indicate an important role for executive processes in maintaining representations of earlier encountered stimuli in an active form alongside privileged storage of the most recent stimulus. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Predicting Aggressive Tendencies by Visual Attention Bias Associated with Hostile Emotions
Lin, Ping-I; Hsieh, Cheng-Da; Juan, Chi-Hung; Hossain, Md Monir; Erickson, Craig A.; Lee, Yang-Han; Su, Mu-Chun
2016-01-01
The goal of the current study is to clarify the relationship between social information processing (e.g., visual attention to cues of hostility, hostility attribution bias, and facial expression emotion labeling) and aggressive tendencies. Thirty adults were recruited in the eye-tracking study that measured various components in social information processing. Baseline aggressive tendencies were measured using the Buss-Perry Aggression Questionnaire (AQ). Visual attention towards hostile objects was measured as the proportion of eye gaze fixation duration on cues of hostility. Hostility attribution bias was measured with the rating results for emotions of characters in the images. The results show that the eye gaze duration on hostile characters was significantly inversely correlated with the AQ score and with less eye contact with an angry face. The eye gaze duration on hostile objects was not significantly associated with hostility attribution bias, although hostility attribution bias was significantly positively associated with the AQ score. Our findings suggest that eye gaze fixation time towards non-hostile cues may predict aggressive tendencies. PMID:26901770
Predicting Aggressive Tendencies by Visual Attention Bias Associated with Hostile Emotions.
Lin, Ping-I; Hsieh, Cheng-Da; Juan, Chi-Hung; Hossain, Md Monir; Erickson, Craig A; Lee, Yang-Han; Su, Mu-Chun
2016-01-01
The goal of the current study is to clarify the relationship between social information processing (e.g., visual attention to cues of hostility, hostility attribution bias, and facial expression emotion labeling) and aggressive tendencies. Thirty adults were recruited in the eye-tracking study that measured various components in social information processing. Baseline aggressive tendencies were measured using the Buss-Perry Aggression Questionnaire (AQ). Visual attention towards hostile objects was measured as the proportion of eye gaze fixation duration on cues of hostility. Hostility attribution bias was measured with the rating results for emotions of characters in the images. The results show that the eye gaze duration on hostile characters was significantly inversely correlated with the AQ score and with less eye contact with an angry face. The eye gaze duration on hostile objects was not significantly associated with hostility attribution bias, although hostility attribution bias was significantly positively associated with the AQ score. Our findings suggest that eye gaze fixation time towards non-hostile cues may predict aggressive tendencies.
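The two key measures described above, the proportion of fixation duration spent on hostile-cue areas of interest and its correlation with AQ scores, can be computed along the lines of the sketch below; the data layout, column meanings, and toy values are assumptions, not the study's data.

# Sketch: proportion of gaze time on hostile-cue areas of interest (AOIs) per
# participant, correlated with AQ scores. Data layout and values are toys.
import numpy as np
from scipy.stats import pearsonr

# One tuple per fixation: (participant_id, duration_ms, on_hostile_aoi)
fixations = [
    (1, 180, True), (1, 220, False), (1, 140, True),
    (2, 300, False), (2, 160, False), (2, 90, True),
    (3, 250, True), (3, 120, False),
    (4, 200, False), (4, 210, False), (4, 100, True),
    (5, 150, True), (5, 300, True), (5, 80, False),
]
aq_scores = {1: 72, 2: 65, 3: 80, 4: 58, 5: 90}   # Buss-Perry AQ totals (toy values)

props, aqs = [], []
for pid, aq in aq_scores.items():
    durs = [(d, hostile) for p, d, hostile in fixations if p == pid]
    total = sum(d for d, _ in durs)
    hostile_time = sum(d for d, hostile in durs if hostile)
    props.append(hostile_time / total)     # proportion of gaze time on hostile cues
    aqs.append(aq)

r, p = pearsonr(props, aqs)                # with real data, an inverse correlation
print(r, p)                                # would mirror the reported finding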
Rapid modulation of spoken word recognition by visual primes.
Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J
2016-02-01
In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics.
Rapid modulation of spoken word recognition by visual primes
Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J.
2015-01-01
In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics. PMID:26516296
VISIBIOweb: visualization and layout services for BioPAX pathway models
Dilek, Alptug; Belviranli, Mehmet E.; Dogrusoz, Ugur
2010-01-01
With recent advancements in techniques for cellular data acquisition, information on cellular processes has been increasing at a dramatic rate. Visualization is critical to analyzing and interpreting complex information; representing cellular processes or pathways is no exception. VISIBIOweb is a free, open-source, web-based pathway visualization and layout service for pathway models in BioPAX format. With VISIBIOweb, one can obtain well-laid-out views of pathway models using the standard notation of the Systems Biology Graphical Notation (SBGN), and can embed such views within one's web pages as desired. Pathway views may be navigated using zoom and scroll tools; pathway object properties, including any external database references available in the data, may be inspected interactively. The automatic layout component of VISIBIOweb may also be accessed programmatically from other tools using Hypertext Transfer Protocol (HTTP). The web site is free and open to all users and there is no login requirement. It is available at: http://visibioweb.patika.org. PMID:20460470
Duggan, Brendan M; Rae, Anne M; Clements, Dylan N; Hocking, Paul M
2017-05-02
Genetic progress in selection for greater body mass and meat yield in poultry has been associated with an increase in gait problems which are detrimental to productivity and welfare. The incidence of suboptimal gait in breeding flocks is controlled through the use of a visual gait score, which is a subjective assessment of walking ability of each bird. The subjective nature of the visual gait score has led to concerns over its effectiveness in reducing the incidence of suboptimal gait in poultry through breeding. The aims of this study were to assess the reliability of the current visual gait scoring system in ducks and to develop a more objective method to select for better gait. Experienced gait scorers assessed short video clips of walking ducks to estimate the reliability of the current visual gait scoring system. Kendall's coefficients of concordance between and within observers were estimated at 0.49 and 0.75, respectively. In order to develop a more objective scoring system, gait components were visually scored on more than 4000 pedigreed Pekin ducks and genetic parameters were estimated for these components. Gait components, which are a more objective measure, had heritabilities that were as good as, or better than, those of the overall visual gait score. Measurement of gait components is simpler and therefore more objective than the standard visual gait score. The recording of gait components can potentially be automated, which may increase accuracy further and may improve heritability estimates. Genetic correlations were generally low, which suggests that it is possible to use gait components to select for an overall improvement in both economic traits and gait as part of a balanced breeding programme.
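Kendall's coefficient of concordance (W), the agreement statistic reported above, can be computed as in the minimal sketch below; the gait scores are toy values, and the simple formula omits the correction for tied ranks.

# Kendall's coefficient of concordance (W) for agreement between gait scorers.
# Scores are toy values; the tie correction is omitted in this minimal sketch.
import numpy as np
from scipy.stats import rankdata

# rows = observers (m), columns = ducks scored (n); gait scores on a 1-5 scale
scores = np.array([
    [2, 3, 4, 1, 5, 3],
    [2, 4, 4, 1, 5, 2],
    [3, 3, 5, 1, 4, 2],
])
m, n = scores.shape
# Rank each observer's scores across the ducks they rated (ties get average ranks).
ranks = np.apply_along_axis(rankdata, 1, scores)
R = ranks.sum(axis=0)                        # summed rank per duck
S = ((R - R.mean()) ** 2).sum()
W = 12 * S / (m ** 2 * (n ** 3 - n))         # Kendall's W, between 0 and 1
print(round(W, 2))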
Category-based guidance of spatial attention during visual search for feature conjunctions.
Nako, Rebecca; Grubert, Anna; Eimer, Martin
2016-10-01
The question whether alphanumerical category is involved in the control of attentional target selection during visual search remains a contentious issue. We tested whether category-based attentional mechanisms would guide the allocation of attention under conditions where targets were defined by a combination of alphanumerical category and a basic visual feature, and search displays could contain both targets and partially matching distractor objects. The N2pc component was used as an electrophysiological marker of attentional object selection in tasks where target objects were defined by a conjunction of color and category (Experiment 1) or shape and category (Experiment 2). Some search displays contained the target or a nontarget object that matched either the target color/shape or its category among 3 nonmatching distractors. In other displays, the target and a partially matching nontarget object appeared together. N2pc components were elicited not only by targets and by color- or shape-matching nontargets, but also by category-matching nontarget objects, even on trials where a target was present in the same display. On these trials, the summed N2pc components to the 2 types of partially matching nontargets were initially equal in size to the target N2pc, suggesting that attention was allocated simultaneously and independently to all objects with target-matching features during the early phase of attentional processing. Results demonstrate that alphanumerical category is a genuine guiding feature that can operate in parallel with color or shape information to control the deployment of attention during visual search. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Dynamics of the spatial scale of visual attention revealed by brain event-related potentials
NASA Technical Reports Server (NTRS)
Luo, Y. J.; Greenwood, P. M.; Parasuraman, R.
2001-01-01
The temporal dynamics of the spatial scaling of attention during visual search were examined by recording event-related potentials (ERPs). A total of 16 young participants performed a search task in which the search array was preceded by valid cues that varied in size and hence in precision of target localization. The effects of cue size on short-latency (P1 and N1) ERP components, and the time course of these effects with variation in cue-target stimulus onset asynchrony (SOA), were examined. Reaction time (RT) to discriminate a target was prolonged as cue size increased. The amplitudes of the posterior P1 and N1 components of the ERP evoked by the search array were affected in opposite ways by the size of the precue: P1 amplitude increased whereas N1 amplitude decreased as cue size increased, particularly following the shortest SOA. The results show that when top-down information about the region to be searched is less precise (larger cues), RT is slowed and the neural generators of P1 become more active, reflecting the additional computations required in changing the spatial scale of attention to the appropriate element size to facilitate target discrimination. In contrast, the decrease in N1 amplitude with cue size may reflect a broadening of the spatial gradient of attention. The results provide electrophysiological evidence that changes in the spatial scale of attention modulate neural activity in early visual cortical areas and activate at least two temporally overlapping component processes during visual search.
Visual attention spreads broadly but selects information locally.
Shioiri, Satoshi; Honjyo, Hajime; Kashiwase, Yoshiyuki; Matsumiya, Kazumichi; Kuriki, Ichiro
2016-10-19
Visual attention spreads over a range around the focus as the spotlight metaphor describes. Spatial spread of attentional enhancement and local selection/inhibition are crucial factors determining the profile of the spatial attention. Enhancement and ignorance/suppression are opposite effects of attention, and appeared to be mutually exclusive. Yet, no unified view of the factors has been provided despite their necessity for understanding the functions of spatial attention. This report provides electroencephalographic and behavioral evidence for the attentional spread at an early stage and selection/inhibition at a later stage of visual processing. Steady state visual evoked potential showed broad spatial tuning whereas the P3 component of the event related potential showed local selection or inhibition of the adjacent areas. Based on these results, we propose a two-stage model of spatial attention with broad spread at an early stage and local selection at a later stage.
Cronly-Dillon, J; Persaud, K; Gregory, R P
1999-01-01
This study demonstrates the ability of blind (previously sighted) and blindfolded (sighted) subjects to reconstruct and identify a number of visual targets transformed into equivalent musical representations. Visual images are deconstructed through a process which selectively segregates different features of the image into separate packages. These are then encoded in sound and presented as a polyphonic musical melody which resembles a Baroque fugue with many voices, allowing subjects to analyse the component voices selectively in combination, or separately in sequence, in a manner which allows a subject to patch together and bind the different features of the object into a mental percept of a single recognizable entity. The visual targets used in this study included a variety of geometrical figures, simple high-contrast line drawings of man-made objects, natural and urban scenes, etc., translated into sound and presented to the subject in polyphonic musical form. PMID:10643086
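As a rough illustration of image-to-sound translation in general (not the polyphonic, feature-segregated encoding used in the study), the sketch below maps image columns to time, rows to pitch, and active pixels to summed sinusoids; all parameters are assumptions.

# A deliberately simplified image-to-sound mapping (column -> time, row -> pitch,
# active pixel -> sinusoid). This is a generic sonification sketch, not the
# encoding used in the study.
import numpy as np

fs = 8000                                   # audio sample rate (Hz, assumed)
image = (np.random.rand(16, 32) > 0.8)      # stand-in binary line drawing
col_dur = 0.05                              # seconds of audio per image column
freqs = np.linspace(880, 220, image.shape[0])   # top rows map to higher pitches

t = np.arange(int(col_dur * fs)) / fs
audio = []
for col in range(image.shape[1]):
    # Sum one sinusoid per "on" pixel in this column (one chord per column).
    active = np.nonzero(image[:, col])[0]
    seg = sum((np.sin(2 * np.pi * freqs[r] * t) for r in active), np.zeros_like(t))
    audio.append(seg / max(len(active), 1))
waveform = np.concatenate(audio)
print(waveform.shape)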
Perceptual expertise and top-down expectation of musical notation engages the primary visual cortex.
Wong, Yetta Kwailing; Peng, Cynthia; Fratus, Kristyn N; Woodman, Geoffrey F; Gauthier, Isabel
2014-08-01
Most theories of visual processing propose that object recognition is achieved in higher visual cortex. However, we show that category selectivity for musical notation can be observed in the first ERP component called the C1 (measured 40-60 msec after stimulus onset) with music-reading expertise. Moreover, the C1 note selectivity was observed only when the stimulus category was blocked but not when the stimulus category was randomized. Under blocking, the C1 activity for notes predicted individual music-reading ability, and behavioral judgments of musical stimuli reflected music-reading skill. Our results challenge current theories of object recognition, indicating that the primary visual cortex can be selective for musical notation within the initial feedforward sweep of activity with perceptual expertise and with a testing context that is consistent with the expertise training, such as blocking the stimulus category for music reading.
Designing sound and visual components for enhancement of urban soundscapes.
Hong, Joo Young; Jeon, Jin Yong
2013-09-01
The aim of this study is to investigate the effect of audio-visual components on environmental quality to improve soundscape. Natural sounds with road traffic noise and visual components in urban streets were evaluated through laboratory experiments. Waterfall and stream water sounds, as well as bird sounds, were selected to enhance the soundscape. Sixteen photomontages of a streetscape were constructed in combination with two types of water features and three types of vegetation which were chosen as positive visual components. The experiments consisted of audio-only, visual-only, and audio-visual conditions. The preferences and environmental qualities of the stimuli were evaluated by a numerical scale and 12 pairs of adjectives, respectively. The results showed that bird sounds were the most preferred among the natural sounds, while the sound of falling water was found to degrade the soundscape quality when the road traffic noise level was high. The visual effects of vegetation on aesthetic preference were significant, but those of water features were relatively small. It was revealed that the perceptual dimensions of the environment differed across road traffic noise levels. Particularly, the acoustic comfort factor related to soundscape quality considerably influenced preference for the overall environment at a higher level of road traffic noise.
Color categories affect pre-attentive color perception.
Clifford, Alexandra; Holmes, Amanda; Davies, Ian R L; Franklin, Anna
2010-10-01
Categorical perception (CP) of color is the faster and/or more accurate discrimination of colors from different categories than equivalently spaced colors from the same category. Here, we investigate whether color CP at early stages of chromatic processing is independent of top-down modulation from attention. A visual oddball task was employed where frequent and infrequent colored stimuli were either same- or different-category, with chromatic differences equated across conditions. Stimuli were presented peripheral to a central distractor task to elicit an event-related potential (ERP) known as the visual mismatch negativity (vMMN). The vMMN is an index of automatic and pre-attentive visual change detection arising from generating loci in visual cortices. The results revealed a greater vMMN for different-category than same-category change detection when stimuli appeared in the lower visual field, and an absence of attention-related ERP components. The findings provide the first clear evidence for an automatic and pre-attentive categorical code for color. Copyright © 2010 Elsevier B.V. All rights reserved.
Phenomenological reliving and visual imagery during autobiographical recall in Alzheimer’s disease
El Haj, Mohamad; Kapogiannis, Dimitrios; Antoine, Pascal
2016-01-01
Multiple studies have shown compromise of autobiographical memory and phenomenological reliving in Alzheimer's disease (AD). We investigated various phenomenological features of autobiographical memory to determine their relative vulnerability in AD. To this aim, participants with early AD and cognitively normal older adult controls were asked to retrieve an autobiographical event and rate on a 5-point scale metacognitive judgments (i.e., reliving, back in time, remembering, and realness), component processes (i.e., visual imagery, auditory imagery, language, and emotion), narrative properties (i.e., rehearsal and importance), and spatiotemporal specificity (i.e., spatial details and temporal details). AD participants showed lower general autobiographical recall than controls, and poorer reliving, travel in time, remembering, realness, visual imagery, auditory imagery, language, rehearsal, and spatial detail – a decrease that was especially pronounced for visual imagery. Yet, AD participants showed high ratings for emotion and importance. Early AD seems to compromise many phenomenological features, especially visual imagery, but also seems to preserve some other features. PMID:27003216
Phenomenological Reliving and Visual Imagery During Autobiographical Recall in Alzheimer's Disease.
El Haj, Mohamad; Kapogiannis, Dimitrios; Antoine, Pascal
2016-03-16
Multiple studies have shown compromise of autobiographical memory and phenomenological reliving in Alzheimer's disease (AD). We investigated various phenomenological features of autobiographical memory to determine their relative vulnerability in AD. To this aim, participants with early AD and cognitively normal older adult controls were asked to retrieve an autobiographical event and rate on a five-point scale metacognitive judgments (i.e., reliving, back in time, remembering, and realness), component processes (i.e., visual imagery, auditory imagery, language, and emotion), narrative properties (i.e., rehearsal and importance), and spatiotemporal specificity (i.e., spatial details and temporal details). AD participants showed lower general autobiographical recall than controls, and poorer reliving, travel in time, remembering, realness, visual imagery, auditory imagery, language, rehearsal, and spatial detail, a decrease that was especially pronounced for visual imagery. Yet, AD participants showed high ratings for emotion and importance. Early AD seems to compromise many phenomenological features, especially visual imagery, but also seems to preserve some other features.
Kullmann, Stephanie; Pape, Anna-Antonia; Heni, Martin; Ketterer, Caroline; Schick, Fritz; Häring, Hans-Ulrich; Fritsche, Andreas; Preissl, Hubert; Veit, Ralf
2013-05-01
In order to adequately explore the neurobiological basis of human eating behavior and its changes with body weight, interactions between brain areas or networks need to be investigated. In the current functional magnetic resonance imaging study, we examined the modulating effects of stimulus category (food vs. nonfood), caloric content of food, and body weight on the time course and functional connectivity of 5 brain networks by means of independent component analysis in healthy lean and overweight/obese adults. These functional networks included motor sensory, default-mode, extrastriate visual, temporal visual association, and salience networks. We found an extensive modulation elicited by food stimuli in the 2 visual and salience networks, with a dissociable pattern in the time course and functional connectivity between lean and overweight/obese subjects. Specifically, only in lean subjects, the temporal visual association network was modulated by the stimulus category and the salience network by caloric content, whereas overweight and obese subjects showed a generalized augmented response in the salience network. Furthermore, overweight/obese subjects showed changes in functional connectivity in networks important for object recognition, motivational salience, and executive control. These alterations could potentially lead to top-down deficiencies driving the overconsumption of food in the obese population.
Chang, Yu-Cherng C; Khan, Sheraz; Taulu, Samu; Kuperberg, Gina; Brown, Emery N; Hämäläinen, Matti S; Temereanca, Simona
2018-01-01
Saccadic eye movements are an inherent component of natural reading, yet their contribution to information processing at subsequent fixation remains elusive. Here we use anatomically-constrained magnetoencephalography (MEG) to examine cortical activity following saccades as healthy human subjects engaged in a one-back word recognition task. This activity was compared with activity following external visual stimulation that mimicked saccades. A combination of procedures was employed to eliminate saccadic ocular artifacts from the MEG signal. Both saccades and saccade-like external visual stimulation produced early-latency responses beginning ~70 ms after onset in occipital cortex and spreading through the ventral and dorsal visual streams to temporal, parietal and frontal cortices. Robust differential activity following the onset of saccades vs. similar external visual stimulation emerged during 150-350 ms in a left-lateralized cortical network. This network included: (i) left lateral occipitotemporal (LOT) and nearby inferotemporal (IT) cortex; (ii) left posterior Sylvian fissure (PSF) and nearby multimodal cortex; and (iii) medial parietooccipital (PO), posterior cingulate and retrosplenial cortices. Moreover, this left-lateralized network colocalized with word repetition priming effects. Together, results suggest that central saccadic mechanisms influence a left-lateralized language network in occipitotemporal and temporal cortex above and beyond saccadic influences at preceding stages of information processing during visual word recognition.
Chang, Yu-Cherng C.; Khan, Sheraz; Taulu, Samu; Kuperberg, Gina; Brown, Emery N.; Hämäläinen, Matti S.; Temereanca, Simona
2018-01-01
Saccadic eye movements are an inherent component of natural reading, yet their contribution to information processing at subsequent fixation remains elusive. Here we use anatomically-constrained magnetoencephalography (MEG) to examine cortical activity following saccades as healthy human subjects engaged in a one-back word recognition task. This activity was compared with activity following external visual stimulation that mimicked saccades. A combination of procedures was employed to eliminate saccadic ocular artifacts from the MEG signal. Both saccades and saccade-like external visual stimulation produced early-latency responses beginning ~70 ms after onset in occipital cortex and spreading through the ventral and dorsal visual streams to temporal, parietal and frontal cortices. Robust differential activity following the onset of saccades vs. similar external visual stimulation emerged during 150–350 ms in a left-lateralized cortical network. This network included: (i) left lateral occipitotemporal (LOT) and nearby inferotemporal (IT) cortex; (ii) left posterior Sylvian fissure (PSF) and nearby multimodal cortex; and (iii) medial parietooccipital (PO), posterior cingulate and retrosplenial cortices. Moreover, this left-lateralized network colocalized with word repetition priming effects. Together, results suggest that central saccadic mechanisms influence a left-lateralized language network in occipitotemporal and temporal cortex above and beyond saccadic influences at preceding stages of information processing during visual word recognition. PMID:29867372
Young children's recall and reconstruction of audio and audiovisual narratives.
Gibbons, J; Anderson, D R; Smith, R; Field, D E; Fischer, C
1986-08-01
It has been claimed that the visual component of audiovisual media dominates young children's cognitive processing. This experiment examines the effects of input modality while controlling the complexity of the visual and auditory content and while varying the comprehension task (recall vs. reconstruction). 4- and 7-year-olds were presented brief stories through either audio or audiovisual media. The audio version consisted of narrated character actions and character utterances. The narrated actions were matched to the utterances on the basis of length and propositional complexity. The audiovisual version depicted the actions visually by means of stop animation instead of by auditory narrative statements. The character utterances were the same in both versions. Audiovisual input produced superior performance on explicit information in the 4-year-olds and produced more inferences at both ages. Because performance on utterances was superior in the audiovisual condition as compared to the audio condition, there was no evidence that visual input inhibits processing of auditory information. Actions were more likely to be produced by the younger children than utterances, regardless of input medium, indicating that prior findings of visual dominance may have been due to the salience of narrative action. Reconstruction, as compared to recall, produced superior depiction of actions at both ages as well as more constrained relevant inferences and narrative conventions.
NASA Astrophysics Data System (ADS)
Rogowitz, Bernice E.; Matasci, Naim
2011-03-01
The explosion of online scientific data from experiments, simulations, and observations has given rise to an avalanche of algorithmic, visualization and imaging methods. There has also been enormous growth in the introduction of tools that provide interactive interfaces for exploring these data dynamically. Most systems, however, do not support the realtime exploration of patterns and relationships across tools and do not provide guidance on which colors, colormaps or visual metaphors will be most effective. In this paper, we introduce a general architecture for sharing metadata between applications and a "Metadata Mapper" component that allows the analyst to decide how metadata from one component should be represented in another, guided by perceptual rules. This system is designed to support "brushing [1]," in which highlighting a region of interest in one application automatically highlights corresponding values in another, allowing the scientist to develop insights from multiple sources. Our work builds on the component-based iPlant Cyberinfrastructure [2] and provides a general approach to supporting interactive, exploration across independent visualization and visual analysis components.
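A minimal publish/subscribe sketch of the brushing idea, where a selection made in one component is broadcast as shared metadata and mapped into another component's highlight, is given below; the class and method names are hypothetical and are not the iPlant or Metadata Mapper API.

# Minimal publish/subscribe sketch of cross-component "brushing": a selection in
# one view is broadcast and mapped into a highlight in another view. Class and
# method names are hypothetical.
from typing import Callable, Dict, List

class MetadataBus:
    def __init__(self) -> None:
        self._subscribers: List[Callable[[Dict], None]] = []

    def subscribe(self, callback: Callable[[Dict], None]) -> None:
        self._subscribers.append(callback)

    def publish(self, metadata: Dict) -> None:
        for cb in self._subscribers:
            cb(metadata)

class ScatterPlot:
    def __init__(self, bus: MetadataBus) -> None:
        self.bus = bus
    def brush(self, ids) -> None:
        # User drags over points; broadcast the selected record ids as metadata.
        self.bus.publish({"selection": list(ids), "source": "scatter"})

class Heatmap:
    def __init__(self, bus: MetadataBus) -> None:
        bus.subscribe(self.on_metadata)
        self.highlighted: List[int] = []
    def on_metadata(self, metadata: Dict) -> None:
        # Map the shared selection onto this component's own highlight encoding.
        self.highlighted = metadata.get("selection", [])

bus = MetadataBus()
scatter, heatmap = ScatterPlot(bus), Heatmap(bus)
scatter.brush([3, 7, 11])
print(heatmap.highlighted)   # -> [3, 7, 11]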
Satagopam, Venkata; Gu, Wei; Eifes, Serge; Gawron, Piotr; Ostaszewski, Marek; Gebel, Stephan; Barbosa-Silva, Adriano; Balling, Rudi; Schneider, Reinhard
2016-01-01
Translational medicine is a domain turning results of basic life science research into new tools and methods in a clinical environment, for example, as new diagnostics or therapies. Nowadays, the process of translation is supported by large amounts of heterogeneous data ranging from medical data to a whole range of -omics data. It is not only a great opportunity but also a great challenge, as translational medicine big data is difficult to integrate and analyze, and requires the involvement of biomedical experts for the data processing. We show here that visualization and interoperable workflows, combining multiple complex steps, can address at least parts of the challenge. In this article, we present an integrated workflow for exploration, analysis, and interpretation of translational medicine data in the context of human health. Three Web services—tranSMART, a Galaxy Server, and a MINERVA platform—are combined into one big data pipeline. Native visualization capabilities enable the biomedical experts to get a comprehensive overview and control over separate steps of the workflow. The capabilities of tranSMART enable a flexible filtering of multidimensional integrated data sets to create subsets suitable for downstream processing. A Galaxy Server offers visually aided construction of analytical pipelines, with the use of existing or custom components. A MINERVA platform supports the exploration of health and disease-related mechanisms in a contextualized analytical visualization system. We demonstrate the utility of our workflow by illustrating its subsequent steps using an existing data set, for which we propose a filtering scheme, an analytical pipeline, and a corresponding visualization of analytical results. The workflow is available as a sandbox environment, where readers can work with the described setup themselves. Overall, our work shows how visualization and interfacing of big data processing services facilitate exploration, analysis, and interpretation of translational medicine data. PMID:27441714
Means and method of detection in chemical separation procedures
Yeung, Edward S.; Koutny, Lance B.; Hogan, Barry L.; Cheung, Chan K.; Ma, Yinfa
1993-03-09
A means and method for indirect detection of constituent components of a mixture separated in a chemical separation process. Fluorescing ions are distributed across the area in which separation of the mixture will occur to provide a generally uniform background fluorescence intensity. For example, the mixture is comprised of one or more charged analytes which displace fluorescing ions at the locations to which its constituent components separate. Fluorescing ions of the same charge as the charged analyte components cause a displacement. The displacement results in the location of the separated components having a reduced fluorescence intensity relative to the remainder of the background. Detection of the lower fluorescence intensity areas can be performed visually, by photographic means and methods, or by automated laser scanning.
Means and method of detection in chemical separation procedures
Yeung, E.S.; Koutny, L.B.; Hogan, B.L.; Cheung, C.K.; Yinfa Ma.
1993-03-09
A means and method are described for indirect detection of constituent components of a mixture separated in a chemical separation process. Fluorescing ions are distributed across the area in which separation of the mixture will occur to provide a generally uniform background fluorescence intensity. For example, the mixture is comprised of one or more charged analytes which displace fluorescing ions at the locations to which its constituent components separate. Fluorescing ions of the same charge as the charged analyte components cause a displacement. The displacement results in the location of the separated components having a reduced fluorescence intensity relative to the remainder of the background. Detection of the lower fluorescence intensity areas can be performed visually, by photographic means and methods, or by automated laser scanning.
Simultaneous chromatic and luminance human electroretinogram responses
Parry, Neil R A; Murray, Ian J; Panorgias, Athanasios; McKeefry, Declan J; Lee, Barry B; Kremers, Jan
2012-01-01
The parallel processing of information forms an important organisational principle of the primate visual system. Here we describe experiments which use a novel chromatic–achromatic temporal compound stimulus to simultaneously identify colour and luminance specific signals in the human electroretinogram (ERG). Luminance and chromatic components are separated in the stimulus; the luminance modulation has twice the temporal frequency of the chromatic modulation. ERGs were recorded from four trichromatic and two dichromatic subjects (1 deuteranope and 1 protanope). At isoluminance, the fundamental (first harmonic) response was elicited by the chromatic component in the stimulus. The trichromatic ERGs possessed low-pass temporal tuning characteristics, reflecting the activity of parvocellular post-receptoral mechanisms. There was very little first harmonic response in the dichromats’ ERGs. The second harmonic response was elicited by the luminance modulation in the compound stimulus and showed, in all subjects, band-pass temporal tuning characteristic of magnocellular activity. Thus it is possible to concurrently elicit ERG responses from the human retina which reflect processing in both chromatic and luminance pathways. As well as providing a clear demonstration of the parallel nature of chromatic and luminance processing in the human retina, the differences that exist between ERGs from trichromatic and dichromatic subjects point to the existence of interactions between afferent post-receptoral pathways that are in operation from the earliest stages of visual processing. PMID:22586211
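The separation described above rests on a simple Fourier argument: with chromatic modulation at frequency f and luminance modulation at 2f, the first harmonic of the response indexes the chromatic pathway and the second harmonic the luminance pathway. A worked sketch on a synthetic waveform follows; the frequencies, amplitudes, and noise level are illustrative assumptions.

# Sketch of separating chromatic (1st harmonic) and luminance (2nd harmonic)
# responses from a compound-stimulus ERG by Fourier analysis. The stimulus
# frequency and the synthetic waveform are illustrative assumptions.
import numpy as np

fs = 1000.0                      # sampling rate (Hz)
f_chrom = 4.0                    # chromatic modulation frequency (assumed)
f_lum = 2 * f_chrom              # luminance modulation at twice that frequency
t = np.arange(0, 2.0, 1 / fs)    # 2 s of recording

# Toy ERG: chromatic response at f, luminance response at 2f, plus noise.
erg = (1.0 * np.sin(2 * np.pi * f_chrom * t)
       + 2.5 * np.sin(2 * np.pi * f_lum * t)
       + 0.3 * np.random.randn(t.size))

spectrum = np.fft.rfft(erg) / (t.size / 2)          # amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

first_harmonic = np.abs(spectrum[np.argmin(np.abs(freqs - f_chrom))])
second_harmonic = np.abs(spectrum[np.argmin(np.abs(freqs - f_lum))])
print(first_harmonic, second_harmonic)   # ~1.0 (chromatic) and ~2.5 (luminance)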
ERIC Educational Resources Information Center
Bussey, Thomas J.
2013-01-01
Biochemistry education relies heavily on students' ability to visualize abstract cellular and molecular processes, mechanisms, and components. As such, biochemistry educators often turn to external representations to provide tangible, working models from which students' internal representations (mental models) can be constructed, evaluated, and…
Is the Binding of Visual Features in Working Memory Resource-Demanding?
ERIC Educational Resources Information Center
Allen, Richard J.; Baddeley, Alan D.; Hitch, Graham J.
2006-01-01
The episodic buffer component of working memory is assumed to play a role in the binding of features into chunks. A series of experiments compared memory for arrays of colors or shapes with memory for bound combinations of these features. Demanding concurrent verbal tasks were used to investigate the role of general attentional processes,…
ERIC Educational Resources Information Center
Bomba, Marie D.; Singhal, Anthony
2010-01-01
Previous dual-task research pairing complex visual tasks involving non-spatial cognitive processes during dichotic listening have shown effects on the late component (Ndl) of the negative difference selective attention waveform but no effects on the early (Nde) response suggesting that the Ndl, but not the Nde, is affected by non-spatial…
ERIC Educational Resources Information Center
Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec
2004-01-01
Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability, (Pylkkanen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick,…
Romero-Rivas, Carlos; Vera-Constán, Fátima; Rodríguez-Cuadrado, Sara; Puigcerver, Laura; Fernández-Prieto, Irune; Navarra, Jordi
2018-05-10
Musical melodies have "peaks" and "valleys". Although the vertical component of pitch and music is well-known, the mechanisms underlying its mental representation still remain elusive. We show evidence regarding the importance of previous experience with melodies for crossmodal interactions to emerge. The impact of these crossmodal interactions on other perceptual and attentional processes was also studied. Melodies including two tones with different frequency (e.g., E4 and D3) were repeatedly presented during the study. These melodies could either generate strong predictions (e.g., E4-D3-E4-D3-E4-[D3]) or not (e.g., E4-D3-E4-E4-D3-[?]). After the presentation of each melody, the participants had to judge the colour of a visual stimulus that appeared in a position that was, according to the traditional vertical connotations of pitch, either congruent (e.g., high-low-high-low-[up]), incongruent (high-low-high-low-[down]) or unpredicted with respect to the melody. Behavioural and electroencephalographic responses to the visual stimuli were obtained. Congruent visual stimuli elicited faster responses at the end of the experiment than at the beginning. Additionally, incongruent visual stimuli that broke the spatial prediction generated by the melody elicited larger P3b amplitudes (reflecting 'surprise' responses). Our results suggest that the passive (but repeated) exposure to melodies elicits spatial predictions that modulate the processing of other sensory events. Copyright © 2018 Elsevier Ltd. All rights reserved.
Visually cued motor synchronization: modulation of fMRI activation patterns by baseline condition.
Cerasa, Antonio; Hagberg, Gisela E; Bianciardi, Marta; Sabatini, Umberto
2005-01-03
A well-known issue in functional neuroimaging studies of motor synchronization is the design of suitable control tasks able to discriminate between the brain structures involved in primary time-keeper functions and those related to other processes such as attentional effort. The aim of this work was to investigate how the predictability of stimulus onsets in the baseline condition modulates the activity in brain structures related to processes involved in time-keeper functions during the performance of a visually cued motor synchronization task (VM). The rationale behind this choice derives from the notion that varying stimulus predictability can alter the subject's attention and, consequently, the neural activity. For this purpose, baseline levels of BOLD activity were obtained from 12 subjects during a conventional-baseline condition: maintained fixation of the visual rhythmic stimuli presented in the VM task, and a random-baseline condition: maintained fixation of visual stimuli occurring randomly. fMRI analysis demonstrated that while brain areas with a documented role in basic time processing are detected independent of the baseline condition (right cerebellum, bilateral putamen, left thalamus, left superior temporal gyrus, left sensorimotor cortex, left dorsal premotor cortex and supplementary motor area), the ventral premotor cortex, caudate nucleus, insula and inferior frontal gyrus exhibited a baseline-dependent activation. We conclude that maintained fixation of unpredictable visual stimuli can be employed in order to reduce or eliminate neural activity related to attentional components present in the synchronization task.
A neural marker of medical visual expertise: implications for training.
Rourke, Liam; Cruikshank, Leanna C; Shapke, Larissa; Singhal, Anthony
2016-12-01
Researchers have identified a component of the EEG that discriminates visual experts from novices. The marker indexes a comprehensive model of visual processing, and if it is apparent in physicians, it could be used to investigate the development and training of their visual expertise. The purpose of this study was to determine whether a neural marker of visual expertise, the enhanced N170 event-related potential, is apparent in the EEGs of physicians as they interpret diagnostic images. We conducted a controlled trial with 10 cardiologists and 9 pulmonologists. Each participant completed 520 trials of a standard visual processing task involving the rapid evaluation of EKGs and CXRs indicating lung disease. Ostensibly, each participant is expert with one type of image and competent with the other. We collected behavioral data on the participants' expertise with EKGs and CXRs and electrophysiological data on the magnitude, latency, and scalp location of their N170 ERPs as they interpreted the two types of images. Cardiologists demonstrated significantly more expertise with EKGs than CXRs, and this was reflected in an increased amplitude of their N170 ERPs while reading EKGs compared to CXRs. Pulmonologists demonstrated equal expertise with both types of images, and this was reflected in equal N170 ERP amplitudes for EKGs and CXRs. The results suggest provisionally that visual expertise has a similar substrate in medical practice as it does in other domains that have been studied extensively. This provides support for applying a sophisticated body of literature to questions about training and assessment of visual expertise among physicians.
Theoretical approaches to lightness and perception.
Gilchrist, Alan
2015-01-01
Theories of lightness, like theories of perception in general, can be categorized as high-level, low-level, and mid-level. However, I will argue that in practice there are only two categories: one-stage mid-level theories, and two-stage low-high theories. Low-level theories usually include a high-level component and high-level theories include a low-level component, the distinction being mainly one of emphasis. Two-stage theories are the modern incarnation of the persistent sensation/perception dichotomy according to which an early experience of raw sensations, faithful to the proximal stimulus, is followed by a process of cognitive interpretation, typically based on past experience. Like phlogiston or the ether, raw sensations seem like they must exist, but there is no clear evidence for them. Proximal stimulus matches are postperceptual, not read off an early sensory stage. Visual angle matches are achieved by a cognitive process of flattening the visual world. Likewise, brightness (luminance) matches depend on a cognitive process of flattening the illumination. Brightness is not the input to lightness; brightness is slower than lightness. Evidence for an early (< 200 ms) mosaic stage is shaky. As for cognitive influences on perception, the many claims tend to fall apart upon close inspection of the evidence. Much of the evidence for the current revival of the 'new look' is probably better explained by (1) a natural desire of (some) subjects to please the experimenter, and (2) the ease of intuiting an experimental hypothesis. High-level theories of lightness are overkill. The visual system does not need to know the amount of illumination, merely which surfaces share the same illumination. This leaves mid-level theories derived from the gestalt school. Here the debate seems to revolve around layer models and framework models. Layer models fit our visual experience of a pattern of illumination projected onto a pattern of reflectance, while framework models provide a better account of illusions and failures of constancy. Evidence for and against these approaches is reviewed.
Short-term visual deprivation can enhance spatial release from masking.
Pagé, Sara; Sharp, Andréanne; Landry, Simon P; Champoux, François
2016-08-15
This research aims to study the effect of short-term visual deprivation on spatial release from masking, a major component of the cocktail party effect that allows people to detect an auditory target in noise. The Masking Level Difference (MLD) test was administered to healthy individuals over three sessions: before (I) and after 90 min of visual deprivation (II), and after 90 min of re-exposure to light (III). A non-deprived control group performed the same tests, but remained sighted between sessions I and II. The non-deprived control group displayed constant results across sessions. However, performance in the MLD test was improved following short-term visual deprivation and performance returned to pre-deprivation values after light re-exposure. This study finds that short-term visual deprivation transiently enhances the spatial release from masking. These data suggest the significant potential for enhancing a process involved in the cocktail party effect in normally developing individuals and add to an emerging literature on the potential to enhance auditory ability after only a brief period of visual deprivation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
A visualization system for CT based pulmonary fissure analysis
NASA Astrophysics Data System (ADS)
Pu, Jiantao; Zheng, Bin; Park, Sang Cheol
2009-02-01
In this study we describe a visualization system of pulmonary fissures depicted on CT images. The purpose is to provide clinicians with an intuitive perception of a patient's lung anatomy through an interactive examination of fissures, enhancing their understanding and accurate diagnosis of lung diseases. This system consists of four key components: (1) region-of-interest segmentation; (2) three-dimensional surface modeling; (3) fissure type classification; and (4) an interactive user interface, by which the extracted fissures are displayed flexibly in different space domains including image space, geometric space, and mixed space using simple toggling "on" and "off" operations. In this system, the different visualization modes allow users not only to examine the fissures themselves but also to analyze the relationship between fissures and their surrounding structures. In addition, the users can adjust thresholds interactively to visualize the fissure surface under different scanning and processing conditions. Such a visualization tool is expected to facilitate investigation of structures near the fissures and provide an efficient "visual aid" for other applications such as treatment planning and assessment of therapeutic efficacy as well as education of medical professionals.
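The surface-modelling step (component 2) is commonly implemented with an isosurface algorithm such as marching cubes. The sketch below extracts a triangle mesh from a synthetic binary mask standing in for a segmented fissure; scikit-image availability and the toy volume are assumptions, and this is not the system's actual implementation.

# Sketch of the surface-modelling step: extract a triangle mesh from a binary
# segmentation mask with marching cubes. The mask below is synthetic; in the
# described system it would come from the fissure/lung segmentation of CT data.
import numpy as np
from skimage import measure

# Synthetic binary volume: a thin, slightly curved sheet standing in for a fissure.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
mask = (np.abs(z - (30 + 5 * np.sin(x / 10.0))) < 1.5).astype(np.uint8)

# Marching cubes turns the voxel mask into a surface mesh (vertices + triangles),
# which can then be rendered in "geometric space" or overlaid on image slices.
verts, faces, normals, values = measure.marching_cubes(mask, level=0.5,
                                                       spacing=(1.0, 1.0, 1.0))
print(verts.shape, faces.shape)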
NASA Astrophysics Data System (ADS)
Ekenes, K.
2017-12-01
This presentation will outline the process of creating a web application for exploring large amounts of scientific geospatial data using modern automated cartographic techniques. Traditional cartographic methods, including data classification, may inadvertently hide geospatial and statistical patterns in the underlying data. This presentation demonstrates how to use smart web APIs that quickly analyze the data when it loads and provide suggestions for the most appropriate visualizations based on the statistics of the data. Since there are only a few ways to visualize any given dataset well, and since many users never go beyond default values, it is imperative to provide smart default color schemes tailored to the dataset rather than static defaults. Multiple functions for automating visualizations are available in the Smart APIs, along with UI elements allowing users to create more than one visualization for a dataset, since there isn't a single best way to visualize a given dataset. Since bivariate and multivariate visualizations are particularly difficult to create effectively, this automated approach takes the guesswork out of the process and provides a number of ways to generate multivariate visualizations for the same variables. This allows the user to choose which visualization is most appropriate for their presentation. The methods used in these APIs and the renderers generated by them are not available elsewhere. The presentation will show how statistics can be used as the basis for automating default visualizations of data along continuous ramps, creating more refined visualizations while revealing the spread and outliers of the data. Adding interactive components to instantaneously alter visualizations allows users to unearth spatial patterns previously unknown among one or more variables. These applications may focus on a single dataset that is frequently updated, or be configurable for a variety of datasets from multiple sources.
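A generic sketch of one statistics-driven default of the kind described, anchoring a continuous colour ramp at roughly one standard deviation either side of the mean so that outliers do not wash out the display, is shown below; it illustrates the idea only and is not the Smart Mapping API itself.

# Generic sketch of a statistics-driven default for a continuous colour ramp:
# anchor the ramp at mean - 1 SD and mean + 1 SD so outliers do not dominate
# the visualization. Illustrative only; not the Smart Mapping API.
import numpy as np

values = np.random.lognormal(mean=2.0, sigma=0.8, size=10_000)   # skewed toy data

def smart_ramp_stops(v, low_color="#fee8c8", high_color="#b30000"):
    mean, sd = float(np.mean(v)), float(np.std(v))
    lo, hi = mean - sd, mean + sd
    # Clamp the stops to the observed data range.
    return [
        {"value": max(lo, float(np.min(v))), "color": low_color},
        {"value": min(hi, float(np.max(v))), "color": high_color},
    ]

print(smart_ramp_stops(values))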
Basic Instinct Undressed: Early Spatiotemporal Processing for Primary Sexual Characteristics
Legrand, Lore B.; Del Zotto, Marzia; Tyrand, Rémi; Pegna, Alan J.
2013-01-01
This study investigates the spatiotemporal dynamics associated with conscious and non-conscious processing of naked and dressed human bodies. To this effect, stimuli of naked men and women with visible primary sexual characteristics, as well as dressed bodies, were presented to 20 heterosexual male and female participants while acquiring high resolution EEG data. The stimuli were either consciously detectable (supraliminal presentations) or were rendered non-conscious through backward masking (subliminal presentations). The N1 event-related potential component was significantly enhanced in participants when they viewed naked compared to dressed bodies under supraliminal viewing conditions. More importantly, naked bodies of the opposite sex produced a significantly greater N1 component compared to dressed bodies during subliminal presentations, when participants were not aware of the stimulus presented. A source localization algorithm computed on the N1 showed that the response for naked bodies in the supraliminal viewing condition was stronger in body processing areas, primary visual areas and additional structures related to emotion processing. By contrast, in the subliminal viewing condition, only visual and body processing areas were found to be activated. These results suggest that naked bodies and primary sexual characteristics are processed early in time (i.e., <200 ms) and activate key brain structures even when they are not consciously detected. It appears that, similarly to what has been reported for emotional faces, sexual features benefit from automatic and rapid processing, most likely due to their high relevance for the individual and their importance for the species in terms of reproductive success. PMID:23894532
NASA Astrophysics Data System (ADS)
Schiltz, Holly Kristine
Visualization skills are important in learning chemistry, as these skills have been shown to correlate with high ability in problem solving. Students' understanding of visual information and their problem-solving processes may only ever be accessed indirectly: verbalization, gestures, drawings, etc. In this research, deconstruction of complex visual concepts was aligned with the promotion of students' verbalization of visualized ideas to teach students to solve complex visual tasks independently. All instructional tools and teaching methods were developed in accordance with the principles of the theoretical framework, the Modeling Theory of Learning: deconstruction of visual representations into model components, comparisons to reality, and recognition of students' problem-solving strategies. Three physical model systems were designed to provide students with visual and tangible representations of chemical concepts. The Permanent Reflection Plane Demonstration provided visual indicators that students used to support or invalidate the presence of a reflection plane. The 3-D Coordinate Axis system provided an environment that allowed students to visualize and physically enact symmetry operations in a relevant molecular context. The Proper Rotation Axis system was designed to provide a physical and visual frame of reference to showcase multiple symmetry elements that students must identify in a molecular model. Focus groups of students taking Inorganic chemistry working with the physical model systems demonstrated difficulty documenting and verbalizing processes and descriptions of visual concepts. Frequently asked student questions were classified, but students also interacted with visual information through gestures and model manipulations. In an effort to characterize how much students used visualization during lecture or recitation, we developed observation rubrics to gather information about students' visualization artifacts and examined the effect instructors' modeled visualization artifacts had on students. No patterns emerged from the passive observation of visualization artifacts in lecture or recitation, but the need to elicit visual information from students was made clear. Deconstruction proved to be a valuable method for instruction and assessment of visual information. Three strategies for using deconstruction in teaching were distilled from the lessons and observations of the student focus groups: begin with observations of what is given in an image and what it's composed of, identify the relationships between components to find additional operations in different environments about the molecule, and deconstructing steps of challenging questions can reveal mistakes. An intervention was developed to teach students to use deconstruction and verbalization to analyze complex visualization tasks and employ the principles of the theoretical framework. The activities were scaffolded to introduce increasingly challenging concepts to students, but also support them as they learned visually demanding chemistry concepts. Several themes were observed in the analysis of the visualization activities. Students used deconstruction by documenting which parts of the images were useful for interpretation of the visual. Students identified valid patterns and rules within the images, which signified understanding of arrangement of information presented in the representation.
Successful strategy communication was identified when students documented personal strategies that allowed them to complete the activity tasks. Finally, students demonstrated the ability to extend symmetry skills to advanced applications they had not previously seen. This work shows how deconstruction and verbalization may have a great impact on how students master difficult topics; combined, they offer students a powerful strategy for approaching visually demanding chemistry problems and give the instructor unique insight into students' mentally constructed strategies.
Makeig, S; Westerfield, M; Jung, T P; Covington, J; Townsend, J; Sejnowski, T J; Courchesne, E
1999-04-01
Human event-related potentials (ERPs) were recorded from 10 subjects presented with visual target and nontarget stimuli at five screen locations and responding to targets presented at one of the locations. The late positive response complexes of 25-75 ERP average waveforms from the two task conditions were simultaneously analyzed with Independent Component Analysis, a new computational method for blindly separating linearly mixed signals. Three spatially fixed, temporally independent, behaviorally relevant, and physiologically plausible components were identified without reference to peaks in single-channel waveforms. A novel frontoparietal component (P3f) began at approximately 140 msec and peaked, in faster responders, at the onset of the motor command. The scalp distribution of P3f appeared consistent with brain regions activated during spatial orienting in functional imaging experiments. A longer-latency large component (P3b), positive over parietal cortex, was followed by a postmotor potential (Pmp) component that peaked 200 msec after the button press and reversed polarity near the central sulcus. A fourth component associated with a left frontocentral nontarget positivity (Pnt) was evoked primarily by target-like distractors presented in the attended location. When no distractors were presented, responses of five faster-responding subjects contained largest P3f and smallest Pmp components; when distractors were included, a Pmp component appeared only in responses of the five slower-responding subjects. Direct relationships between component amplitudes, latencies, and behavioral responses, plus similarities between component scalp distributions and regional activations reported in functional brain imaging experiments suggest that P3f, Pmp, and Pnt measure the time course and strength of functionally distinct brain processes.
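As an illustration of the kind of blind source separation described above, the sketch below applies scikit-learn's FastICA to synthetic, linearly mixed multichannel signals. The channel count, source waveforms, and mixing matrix are illustrative assumptions, not the study's data or algorithm settings.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)

# Three "source" time courses standing in for temporally independent components.
sources = np.c_[np.sin(2 * np.pi * 5 * t),           # slow oscillation
                np.sign(np.sin(2 * np.pi * 3 * t)),  # square-wave-like burst
                rng.laplace(size=t.size)]             # heavy-tailed noise source

# A random mixing matrix stands in for volume conduction to 10 scalp channels.
mixing = rng.normal(size=(10, 3))
channels = sources @ mixing.T + 0.05 * rng.normal(size=(t.size, 10))

# ICA recovers spatially fixed, temporally independent components
# without reference to peaks in any single-channel waveform.
ica = FastICA(n_components=3, random_state=0)
recovered = ica.fit_transform(channels)   # (n_samples, n_components)
scalp_maps = ica.mixing_                   # one column per component: its scalp projection
print(recovered.shape, scalp_maps.shape)
```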
Dual-Layer Video Encryption using RSA Algorithm
NASA Astrophysics Data System (ADS)
Chadha, Aman; Mallik, Sushmit; Chadha, Ankit; Johar, Ravdeep; Mani Roja, M.
2015-04-01
This paper proposes a video encryption algorithm using RSA and a Pseudo Noise (PN) sequence, aimed at applications requiring sensitive video information transfers. The system is primarily designed to work with files encoded using the Audio Video Interleaved (AVI) codec, although it can be easily ported for use with Moving Picture Experts Group (MPEG) encoded files. The audio and video components of the source separately undergo two layers of encryption to ensure a reasonable level of security. Encryption of the video component involves applying the RSA algorithm followed by the PN-based encryption. Similarly, the audio component is first encrypted using PN and further subjected to encryption using the Discrete Cosine Transform. Combining these techniques, an efficient system, invulnerable to security breaches and attacks, with favorable values of parameters such as encryption/decryption speed, encryption/decryption ratio, and visual degradation, has been put forth. For applications requiring encryption of sensitive data wherein stringent security requirements are of prime concern, the system is found to yield negligible similarities in visual perception between the original and the encrypted video sequence. For applications wherein visual similarity is not of major concern, we limit the encryption task to a single level of encryption which is accomplished by using RSA, thereby quickening the encryption process. Although some similarity between the original and encrypted video is observed in this case, it is not enough to comprehend what is happening in the video.
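A toy illustration of the two layers described for the video component: textbook RSA applied to pixel values, followed by an XOR with a pseudo-noise keystream. The tiny key, LFSR taps, and frame size are illustrative assumptions, not the paper's parameters or implementation.

```python
import numpy as np

# --- Toy RSA key (illustrative only; real keys are far larger) ---
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)          # modular inverse (Python 3.8+)

def rsa_encrypt(values, e, n):
    return [pow(int(v), e, n) for v in values]

def rsa_decrypt(values, d, n):
    return [pow(int(v), d, n) for v in values]

# --- Pseudo-noise keystream from a simple 4-bit LFSR (taps are an assumption) ---
def pn_sequence(length, seed=0b1011, taps=(3, 2)):
    state, out = seed, []
    for _ in range(length):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << 3)
    return np.array(out, dtype=np.uint8)

# One tiny grayscale "frame"
frame = np.arange(16, dtype=np.uint8).reshape(4, 4)

# Layer 1: RSA on each pixel value
layer1 = rsa_encrypt(frame.ravel(), e, n)

# Layer 2: XOR each RSA ciphertext with a PN bit (kept deliberately simple here)
pn = pn_sequence(len(layer1))
cipher = [c ^ int(b) for c, b in zip(layer1, pn)]

# Decryption reverses both layers: undo the PN XOR, then RSA-decrypt
recovered = rsa_decrypt([c ^ int(b) for c, b in zip(cipher, pn)], d, n)
assert np.array_equal(np.array(recovered, dtype=np.uint8).reshape(4, 4), frame)
```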
Neural Dynamics of Audiovisual Synchrony and Asynchrony Perception in 6-Month-Old Infants
Kopp, Franziska; Dietrich, Claudia
2013-01-01
Young infants are sensitive to multisensory temporal synchrony relations, but the neural dynamics of temporal interactions between vision and audition in infancy are not well understood. We investigated audiovisual synchrony and asynchrony perception in 6-month-old infants using event-related brain potentials (ERP). In a prior behavioral experiment (n = 45), infants were habituated to an audiovisual synchronous stimulus and tested for recovery of interest by presenting an asynchronous test stimulus in which the visual stream was delayed with respect to the auditory stream by 400 ms. Infants who behaviorally discriminated the change in temporal alignment were included in further analyses. In the EEG experiment (final sample: n = 15), synchronous and asynchronous stimuli (visual delay of 400 ms) were presented in random order. Results show latency shifts in the auditory ERP components N1 and P2 as well as the infant ERP component Nc. Latencies in the asynchronous condition were significantly longer than in the synchronous condition. After video onset but preceding the auditory onset, amplitude modulations propagating from posterior to anterior sites and related to the Pb component of infants’ ERP were observed. Results suggest temporal interactions between the two modalities. Specifically, they point to the significance of anticipatory visual motion for auditory processing, and indicate young infants’ predictive capacities for audiovisual temporal synchrony relations. PMID:23346071
Functional modular architecture underlying attentional control in aging.
Monge, Zachary A; Geib, Benjamin R; Siciliano, Rachel E; Packard, Lauren E; Tallman, Catherine W; Madden, David J
2017-07-15
Previous research suggests that age-related differences in attention reflect the interaction of top-down and bottom-up processes, but the cognitive and neural mechanisms underlying this interaction remain an active area of research. Here, within a sample of community-dwelling adults 19-78 years of age, we used diffusion reaction time (RT) modeling and multivariate functional connectivity to investigate the behavioral components and whole-brain functional networks, respectively, underlying bottom-up and top-down attentional processes during conjunction visual search. During functional MRI scanning, participants completed a conjunction visual search task in which each display contained one item that was larger than the other items (i.e., a size singleton) but was not informative regarding target identity. This design allowed us to examine in the RT components and functional network measures the influence of (a) additional bottom-up guidance when the target served as the size singleton, relative to when the distractor served as the size singleton (i.e., size singleton effect) and (b) top-down processes during target detection (i.e., target detection effect; target present vs. absent trials). We found that the size singleton effect (i.e., increased bottom-up guidance) was associated with RT components related to decision and nondecision processes, but these effects did not vary with age. Also, a modularity analysis revealed that frontoparietal module connectivity was important for both the size singleton and target detection effects, but this module became central to the networks through different mechanisms for each effect. Lastly, participants 42 years of age and older, in service of the target detection effect, relied more on between-frontoparietal module connections. Our results further elucidate mechanisms through which frontoparietal regions support attentional control and how these mechanisms vary in relation to adult age. Copyright © 2017 Elsevier Inc. All rights reserved.
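A hedged sketch of the kind of modularity analysis described above: build a graph from a functional connectivity (correlation) matrix and partition it into modules. It uses NetworkX community detection on synthetic data; the threshold, node count, and time series are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(1)

# Synthetic "regional time series": 20 regions x 200 time points,
# with two built-in clusters standing in for functional modules.
shared_a = rng.normal(size=200)
shared_b = rng.normal(size=200)
ts = np.vstack([shared_a + 0.7 * rng.normal(size=200) for _ in range(10)] +
               [shared_b + 0.7 * rng.normal(size=200) for _ in range(10)])

# Functional connectivity = pairwise correlation between regional time series.
fc = np.corrcoef(ts)

# Threshold the matrix (assumed cutoff) and build an undirected graph.
threshold = 0.3
adj = (fc > threshold).astype(int)
np.fill_diagonal(adj, 0)
G = nx.from_numpy_array(adj)

# Partition the graph into modules and report the modularity of the partition.
communities = greedy_modularity_communities(G)
Q = nx.algorithms.community.modularity(G, communities)
print(f"{len(communities)} modules, modularity Q = {Q:.2f}")
```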
Effects of aging on perception of motion
NASA Astrophysics Data System (ADS)
Kaur, Manpreet; Wilder, Joseph; Hung, George; Julesz, Bela
1997-09-01
Driving requires two basic visual components: 'visual sensory function' and 'higher order skills.' Among the elderly, it has been observed that when attention must be divided in the presence of multiple objects, attentional skills and relational processes are markedly impaired, in addition to impairment of basic visual sensory function. A high frame rate imaging system was developed to assess the elderly driver's ability to locate and distinguish computer generated images of vehicles and to determine their direction of motion in a simulated intersection. Preliminary experiments were performed at varying target speeds and angular displacements to study the effect of these parameters on motion perception. Results for subjects in four different age groups, ranging from the mid-twenties to the mid-sixties, show significantly better performance for the younger subjects as compared to the older ones.
A description of discrete internal representation schemes for visual pattern discrimination.
Foster, D H
1980-01-01
A general description of a class of schemes for pattern vision is outlined in which the visual system is assumed to form a discrete internal representation of the stimulus. These representations are discrete in that they are considered to comprise finite combinations of "components" which are selected from a fixed and finite repertoire, and which designate certain simple pattern properties or features. In the proposed description it is supposed that the construction of an internal representation is a probabilistic process. A relationship is then formulated associating the probability density functions governing this construction and performance in visually discriminating patterns when differences in pattern shape are small. Some questions related to the application of this relationship to the experimental investigation of discrete internal representations are briefly discussed.
Differential priming effects of color-opponent subliminal stimulation on visual magnetic responses.
Hoshiyama, Minoru; Kakigi, Ryusuke; Takeshima, Yasuyuki; Miki, Kensaku; Watanabe, Shoko
2006-10-01
We investigated the effects of subliminal stimulation on visible stimulation to demonstrate the priority of facial discrimination processing, using a unique, indiscernible, color-opponent subliminal (COS) stimulation. We recorded event-related magnetic cortical fields (ERF) by magnetoencephalography (MEG) after the presentation of a face or flower stimulus with COS conditioning using a face, flower, random pattern, and blank. The COS stimulation enhanced the response to visible stimulation when the figure in the COS stimulation was identical to the target visible stimulus, but more so for the face than for the flower stimulus. The ERF component modulated by the COS stimulation was estimated to be located in the ventral temporal cortex. We speculated that the enhancement was caused by an interaction of the responses after subthreshold stimulation by the COS stimulation and the suprathreshold stimulation after target stimulation, such as in the processing for categorization or discrimination. We also speculated that the face was processed with priority at the level of the ventral temporal cortex during visual processing outside of consciousness.
NASA Astrophysics Data System (ADS)
Cytrynowicz, Debra G.
The research project itself was the initiation of the development of a planar miniature loop heat pipe based on a capillary wick structure made of coherent porous silicon. Work on this project fell into four main categories, which were component fabrication, test system construction, characterization testing and test data collection, and performance analysis and thermal modeling. Component fabrication involved the production of various components for the evaporator. When applicable, these components were to be produced by microelectronic and MEMS (microelectromechanical systems) fabrication techniques. Required work involved analyses and, where necessary, modifications to the wafer processing sequence, the photo-electrochemical etching process, system, and controlling computer program to make it more reliable, flexible, and efficient. The development of more than one wick production process was also necessary in the event of equipment failure. Work on developing this alternative also involved investigations into various details of the photo-electrochemical etching process itself. Test system construction involved the actual assembly of open and closed loop test systems. Characterization involved developing and administering a series of tests to evaluate the performance of the wicks and test systems. Although there were some indications that the devices were operating according to loop heat pipe theory, they were transient and unstable. Performance analysis involved the construction of a transparent evaporator, which enabled visual observation of the phenomena that occurred in the evaporator during operation. It also involved investigating the effect of the quartz wool secondary wick on the operation of the device. Observations made during the visualization study indicated that the capillary and boiling limits were being reached at extremely low values of input power. The work was performed in a collaborative effort between the Biomedical Nanotechnology Research Laboratory at the University of Toledo, the Center for Microelectronics and Sensors and MEMS at the University of Cincinnati, and the Thermo-Mechanical Systems Branch of the Power and On-Board Propulsion Division at the John H. Glenn Research Center of the National Aeronautics and Space Administration in Cleveland, Ohio. Work on the project produced six publications, which presented various details on component fabrication, test system construction and characterization, and thermal modeling.
Exploring eye movements in patients with glaucoma when viewing a driving scene.
Crabb, David P; Smith, Nicholas D; Rauscher, Franziska G; Chisholm, Catharine M; Barbur, John L; Edgar, David F; Garway-Heath, David F
2010-03-16
Glaucoma is a progressive eye disease and a leading cause of visual disability. Automated assessment of the visual field determines the different stages in the disease process: it would be desirable to link these measurements taken in the clinic with patient's actual function, or establish if patients compensate for their restricted field of view when performing everyday tasks. Hence, this study investigated eye movements in glaucomatous patients when viewing driving scenes in a hazard perception test (HPT). The HPT is a component of the UK driving licence test consisting of a series of short film clips of various traffic scenes viewed from the driver's perspective each containing hazardous situations that require the camera car to change direction or slow down. Data from nine glaucomatous patients with binocular visual field defects and ten age-matched control subjects were considered (all experienced drivers). Each subject viewed 26 different films with eye movements simultaneously monitored by an eye tracker. Computer software was purpose written to pre-process the data, co-register it to the film clips and to quantify eye movements and point-of-regard (using a dynamic bivariate contour ellipse analysis). On average, and across all HPT films, patients exhibited different eye movement characteristics to controls making, for example, significantly more saccades (P<0.001; 95% confidence interval for mean increase: 9.2 to 22.4%). Whilst the average region of 'point-of-regard' of the patients did not differ significantly from the controls, there were revealing cases where patients failed to see a hazard in relation to their binocular visual field defect. Characteristics of eye movement patterns in patients with bilateral glaucoma can differ significantly from age-matched controls when viewing a traffic scene. Further studies of eye movements made by glaucomatous patients could provide useful information about the definition of the visual field component required for fitness to drive.
Exploring Eye Movements in Patients with Glaucoma When Viewing a Driving Scene
Crabb, David P.; Smith, Nicholas D.; Rauscher, Franziska G.; Chisholm, Catharine M.; Barbur, John L.; Edgar, David F.; Garway-Heath, David F.
2010-01-01
Background Glaucoma is a progressive eye disease and a leading cause of visual disability. Automated assessment of the visual field determines the different stages in the disease process: it would be desirable to link these measurements taken in the clinic with patient's actual function, or establish if patients compensate for their restricted field of view when performing everyday tasks. Hence, this study investigated eye movements in glaucomatous patients when viewing driving scenes in a hazard perception test (HPT). Methodology/Principal Findings The HPT is a component of the UK driving licence test consisting of a series of short film clips of various traffic scenes viewed from the driver's perspective each containing hazardous situations that require the camera car to change direction or slow down. Data from nine glaucomatous patients with binocular visual field defects and ten age-matched control subjects were considered (all experienced drivers). Each subject viewed 26 different films with eye movements simultaneously monitored by an eye tracker. Computer software was purpose written to pre-process the data, co-register it to the film clips and to quantify eye movements and point-of-regard (using a dynamic bivariate contour ellipse analysis). On average, and across all HPT films, patients exhibited different eye movement characteristics to controls making, for example, significantly more saccades (P<0.001; 95% confidence interval for mean increase: 9.2 to 22.4%). Whilst the average region of ‘point-of-regard’ of the patients did not differ significantly from the controls, there were revealing cases where patients failed to see a hazard in relation to their binocular visual field defect. Conclusions/Significance Characteristics of eye movement patterns in patients with bilateral glaucoma can differ significantly from age-matched controls when viewing a traffic scene. Further studies of eye movements made by glaucomatous patients could provide useful information about the definition of the visual field component required for fitness to drive. PMID:20300522
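One common way to quantify a region of point-of-regard is the bivariate contour ellipse area (BCEA) computed from gaze samples. The sketch below applies the standard static BCEA formula to synthetic fixation data; it is an illustration of the general measure, not the dynamic bivariate contour ellipse analysis implemented by the authors.

```python
import numpy as np

def bcea(x, y, p=0.682):
    """Bivariate contour ellipse area enclosing proportion p of gaze samples.

    Standard formula: BCEA = 2*k*pi*sx*sy*sqrt(1 - rho^2), with p = 1 - exp(-k).
    """
    k = -np.log(1.0 - p)
    sx, sy = np.std(x, ddof=1), np.std(y, ddof=1)
    rho = np.corrcoef(x, y)[0, 1]
    return 2.0 * k * np.pi * sx * sy * np.sqrt(1.0 - rho ** 2)

# Synthetic gaze samples (degrees of visual angle) around a fixation point.
rng = np.random.default_rng(2)
gx = rng.normal(0.0, 0.8, size=500)
gy = 0.4 * gx + rng.normal(0.0, 0.6, size=500)   # correlated horizontal/vertical scatter

print(f"BCEA = {bcea(gx, gy):.2f} deg^2")
```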
Modeling of Explorative Procedures for Remote Object Identification
1991-09-01
haptic sensory system and the simulated foveal component of the visual system. Eventually it will allow multiple applications in remote sensing and...superposition of sensory channels. The use of a force reflecting telemanipulator and computer simulated visual foveal component are the tools which...representation of human search models is achieved by using the proprioceptive component of the haptic sensory system and the simulated foveal component of the
Fleischhauer, Monika; Strobel, Alexander; Diers, Kersten; Enge, Sören
2014-02-01
The Implicit Association Test (IAT) is a widely used latency-based categorization task that indirectly measures the strength of automatic associations between target and attribute concepts. So far, little is known about the perceptual and cognitive processes underlying personality IATs. Thus, the present study examined event-related potential indices during the execution of an IAT measuring neuroticism (N = 70). The IAT effect was strongly modulated by the P1 component indicating early facilitation of relevant visual input and by a P3b-like late positive component reflecting the efficacy of stimulus categorization. Both components covaried, and larger amplitudes led to faster responses. The results suggest a relationship between early perceptual and semantic processes operating at a more automatic, implicit level and later decision-related categorization of self-relevant stimuli contributing to the IAT effect. Copyright © 2013 Society for Psychophysiological Research.
An elementary theory of eclipsing depths of the light curve and its application to Beta Lyrae
NASA Technical Reports Server (NTRS)
Huang, S.-S.; Brown, D. A.
1976-01-01
An elementary theory of the ratio of depths of secondary and primary eclipses of a light curve has been proposed for studying the nature of component stars. It has been applied to light curves of Beta Lyrae in the visual, blue, and far-ultraviolet regions with the purpose of investigating the energy sources for the luminosity of the disk surrounding the secondary component and determining the dominant radiative process in the disk. No trace of the spectrum of primary radiation has been found in the disk. Therefore, it is suggested that LTE is the main radiative process in the disk, which radiates at a temperature of approximately 12,000 K in the portion that undergoes eclipse. A small source corresponding to 14,500 K has also been tentatively detected and may represent a hot spot caused by hydrodynamic flow of matter from the primary component to the disk.
The Overgrid Interface for Computational Simulations on Overset Grids
NASA Technical Reports Server (NTRS)
Chan, William M.; Kwak, Dochan (Technical Monitor)
2002-01-01
Computational simulations using overset grids typically involve multiple steps and a variety of software modules. A graphical interface called OVERGRID has been specially designed for such purposes. Data required and created by the different steps include geometry, grids, domain connectivity information and flow solver input parameters. The interface provides a unified environment for the visualization, processing, generation and diagnosis of such data. General modules are available for the manipulation of structured grids and unstructured surface triangulations. Modules more specific for the overset approach include surface curve generators, hyperbolic and algebraic surface grid generators, a hyperbolic volume grid generator, Cartesian box grid generators, and domain connectivity pre-processing tools. An interface provides automatic selection and viewing of flow solver boundary conditions, and various other flow solver inputs. For problems involving multiple components in relative motion, a module is available to build the component/grid relationships and to prescribe and animate the dynamics of the different components.
Impact of language on development of auditory-visual speech perception.
Sekiyama, Kaoru; Burnham, Denis
2008-03-01
The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various signal-to-noise levels. In Experiment 1 with two groups of adults, native speakers of Japanese and native speakers of English, the results on both percent visually influenced responses and reaction time supported previous reports of a weaker visual influence for Japanese participants. In Experiment 2, an additional three age groups (6, 8, and 11 years) in each language group were tested. The results showed that the degree of visual influence was low and equivalent for Japanese and English language 6-year-olds, and increased over age for English language participants, especially between 6 and 8 years, but remained the same for Japanese participants. This may be related to the fact that English language adults and older children processed visual speech information relatively faster than auditory information whereas no such inter-modal differences were found in the Japanese participants' reaction times.
Takeuchi, Tatsuto; Yoshimoto, Sanae; Shimada, Yasuhiro; Kochiyama, Takanori; Kondo, Hirohito M
2017-02-19
Recent studies have shown that interindividual variability can be a rich source of information regarding the mechanism of human visual perception. In this study, we examined the mechanisms underlying interindividual variability in the perception of visual motion, one of the fundamental components of visual scene analysis, by measuring neurotransmitter concentrations using magnetic resonance spectroscopy. First, by psychophysically examining two types of motion phenomena, motion assimilation and contrast, we found that, following the presentation of the same stimulus, some participants perceived motion assimilation, while others perceived motion contrast. Furthermore, we found that the concentration of the excitatory neurotransmitter glutamate-glutamine (Glx) in the dorsolateral prefrontal cortex (Brodmann area 46) was positively correlated with the participant's tendency to motion assimilation over motion contrast; however, this effect was not observed in the visual areas. The concentration of the inhibitory neurotransmitter γ-aminobutyric acid had only a weak effect compared with that of Glx. We conclude that an excitatory process in the suprasensory area is important for an individual's tendency to determine antagonistically perceived visual motion phenomena. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).
The Influence of Alertness on Spatial and Nonspatial Components of Visual Attention
ERIC Educational Resources Information Center
Matthias, Ellen; Bublak, Peter; Muller, Hermann J.; Schneider, Werner X.; Krummenacher, Joseph; Finke, Kathrin
2010-01-01
Three experiments investigated whether spatial and nonspatial components of visual attention would be influenced by changes in (healthy, young) subjects' level of alertness and whether such effects on separable components would occur independently of each other. The experiments used a no-cue/alerting-cue design with varying cue-target stimulus…
Visual Basic Applications to Physics Teaching
ERIC Educational Resources Information Center
Chitu, Catalin; Inpuscatu, Razvan Constantin; Viziru, Marilena
2011-01-01
Derived from the BASIC language, VB (Visual Basic) is a programming language focused on the video interface component. With graphics and functional components implemented, the programmer is able to bring and use their components to achieve the desired application in a relatively short time. The VB language is a useful tool in physics teaching by creating…
NASA Technical Reports Server (NTRS)
Knuth, Kevin H.; Shah, Ankoor S.; Truccolo, Wilson; Ding, Ming-Zhou; Bressler, Steven L.; Schroeder, Charles E.
2003-01-01
Electric potentials and magnetic fields generated by ensembles of synchronously active neurons in response to external stimuli provide information essential to understanding the processes underlying cognitive and sensorimotor activity. Interpreting recordings of these potentials and fields is difficult as each detector records signals simultaneously generated by various regions throughout the brain. We introduce the differentially Variable Component Analysis (dVCA) algorithm, which relies on trial-to-trial variability in response amplitude and latency to identify multiple components. Using simulations, we evaluate the importance of response variability to component identification, the robustness of dVCA to noise, and its ability to characterize single-trial data. Finally, we evaluate the technique using visually evoked field potentials recorded at incremental depths across the layers of cortical area V1 in an awake, behaving macaque monkey.
Multi-focus image fusion algorithm using NSCT and MPCNN
NASA Astrophysics Data System (ADS)
Liu, Kang; Wang, Lianli
2018-04-01
Based on the nonsubsampled contourlet transform (NSCT) and a modified pulse-coupled neural network (MPCNN), this paper proposes an effective method of image fusion. Firstly, the paper decomposes the source image into low-frequency and high-frequency components using NSCT, and then processes the low-frequency components by regional statistical fusion rules. For the high-frequency components, the paper calculates the spatial frequency (SF), which is input into the MPCNN model to obtain the relevant coefficients according to the fire-mapping image of the MPCNN. Finally, the paper reconstructs the fused image by inverse transformation of the low-frequency and high-frequency components. Compared with the wavelet transform (WT) and the traditional NSCT algorithm, experimental results indicate that the method proposed in this paper achieves an improvement in both human visual perception and objective evaluation, indicating that the method is effective and practical with good performance.
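As a rough illustration of this fusion pipeline (decompose, fuse low- and high-frequency sub-bands by different rules, invert), the sketch below uses a single-level wavelet transform from PyWavelets in place of NSCT and a simple maximum-absolute rule in place of the MPCNN step; the wavelet choice and fusion rules are assumptions, not the paper's method.

```python
import numpy as np
import pywt

def fuse_multifocus(img_a, img_b, wavelet="db2"):
    """Fuse two registered grayscale images: average the approximation band,
    keep the larger-magnitude detail coefficients (stand-in for the MPCNN rule)."""
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a, wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b, wavelet)

    # Low-frequency sub-band: simple average (assumed stand-in for the regional rule).
    cA = 0.5 * (cA_a + cA_b)

    # High-frequency sub-bands: pick the coefficient with larger magnitude.
    def pick(a, b):
        return np.where(np.abs(a) >= np.abs(b), a, b)

    fused = (cA, (pick(cH_a, cH_b), pick(cV_a, cV_b), pick(cD_a, cD_b)))
    return pywt.idwt2(fused, wavelet)

# Two synthetic "multi-focus" inputs: each sharp in a different half of the scene.
rng = np.random.default_rng(3)
scene = rng.random((64, 64))
blurred = 0.25 * (scene + np.roll(scene, 1, 0) + np.roll(scene, 1, 1) + np.roll(scene, 1, (0, 1)))
img_left_sharp = np.where(np.arange(64) < 32, scene, blurred)
img_right_sharp = np.where(np.arange(64) < 32, blurred, scene)

print(fuse_multifocus(img_left_sharp, img_right_sharp).shape)
```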
Age, gesture span, and dissociations among component subsystems of working memory.
Dolman, R; Roy, E A; Dimeck, P T; Hall, C R
2000-01-01
Working memory was examined in old and young adults using a series of span tasks, including the forward versions of the visual-spatial and digit span tasks from the Wechsler Memory Scale-Revised, and comparable hand gesture and visual design span tasks. The observation that the young participants performed significantly better on all the tasks except digit span suggested that aging has an impact on some component subsystems of working memory but not others. Analyses of intercorrelations in span performance supports the dissociation among three component subsystems, one for auditory verbal information (the articulatory loop), one for visual-spatial information (visual-spatial scratch-pad), and one for hand/body postural configuration.
Contingency Analysis Post-Processing With Advanced Computing and Visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Glaesemann, Kurt; Fitzhenry, Erin
Contingency analysis is a critical function widely used in energy management systems to assess the impact of power system component failures. Its outputs are important for power system operation for improved situational awareness, power system planning studies, and power market operations. With the increased complexity of power system modeling and simulation caused by increased energy production and demand, the penetration of renewable energy and fast deployment of smart grid devices, and the trend of operating grids closer to their capacity for better efficiency, more and more contingencies must be executed and analyzed quickly in order to ensure grid reliability and accuracy for the power market. Currently, many researchers have proposed different techniques to accelerate the computational speed of contingency analysis, but not much work has been published on how to post-process the large amount of contingency outputs quickly. This paper proposes a parallel post-processing function that can analyze contingency analysis outputs faster and display them in a web-based visualization tool to help power engineers improve their work efficiency by fast information digestion. Case studies using an ESCA-60 bus system and a WECC planning system are presented to demonstrate the functionality of the parallel post-processing technique and the web-based visualization tool.
Moll, Kristina; Göbel, Silke M; Snowling, Margaret J
2015-01-01
As well as being the hallmark of mathematics disorders, deficits in number processing have also been reported for individuals with reading disorders. The aim of the present study was to investigate separately the components of numerical processing affected in reading and mathematical disorders within the framework of the Triple Code Model. Children with reading disorders (RD), mathematics disorders (MD), comorbid deficits (RD + MD), and typically developing children (TD) were tested on verbal, visual-verbal, and nonverbal number tasks. As expected, children with MD were impaired across a broad range of numerical tasks. In contrast, children with RD were impaired in (visual-)verbal number tasks but showed age-appropriate performance in nonverbal number skills, suggesting their impairments were domain specific and related to their reading difficulties. The comorbid group showed an additive profile of the impairments of the two single-deficit groups. Performance in speeded verbal number tasks was related to rapid automatized naming, a measure of visual-verbal access in the RD but not in the MD group. The results indicate that deficits in number skills are due to different underlying cognitive deficits in children with RD compared to children with MD: a phonological deficit in RD and a deficit in processing numerosities in MD.
Rallis, Austin; Fercho, Kelene A; Bosch, Taylor J; Baugh, Lee A
2018-01-31
Tool use is associated with three visual streams-dorso-dorsal, ventro-dorsal, and ventral visual streams. These streams are involved in processing online motor planning, action semantics, and tool semantics features, respectively. Little is known about the way in which the brain represents virtual tools. To directly assess this question, a virtual tool paradigm was created that provided the ability to manipulate tool components in isolation of one another. During functional magnetic resonance imaging (fMRI), adult participants performed a series of virtual tool manipulation tasks in which vision and movement kinematics of the tool were manipulated. Reaction time and hand movement direction were monitored while the tasks were performed. Functional imaging revealed that activity within all three visual streams was present, in a similar pattern to what would be expected with physical tool use. However, a previously unreported network of right-hemisphere activity was found including right inferior parietal lobule, middle and superior temporal gyri and supramarginal gyrus - regions well known to be associated with tool processing within the left hemisphere. These results provide evidence that both virtual and physical tools are processed within the same brain regions, though virtual tools recruit bilateral tool processing regions to a greater extent than physical tools. Copyright © 2017 Elsevier Ltd. All rights reserved.
Dickson, Danielle S; Federmeier, Kara D
2014-11-01
Differences in how the right and left hemispheres (RH, LH) apprehend visual words were examined using event-related potentials (ERPs) in a repetition paradigm with visual half-field (VF) presentation. In both hemispheres (RH/LVF, LH/RVF), initial presentation of items elicited similar and typical effects of orthographic neighborhood size, with larger N400s for orthographically regular items (words and pseudowords) than for irregular items (acronyms and meaningless illegal strings). However, hemispheric differences emerged on repetition effects. When items were repeated in the LH/RVF, orthographically regular items, relative to irregular items, elicited larger repetition effects on both the N250, a component reflecting processing at the level of visual form (orthography), and on the N400, which has been linked to semantic access. In contrast, in the RH/LVF, repetition effects were biased toward irregular items on the N250 and were similar in size across item types for the N400. The results suggest that processing in the LH is more strongly affected by wordform regularity than in the RH, either due to enhanced processing of familiar orthographic patterns or due to the fact that regular forms can be more readily mapped onto phonology. Copyright © 2014 Elsevier Ltd. All rights reserved.
High-speed visualization of fuel spray impingement in the near-wall region using a DISI injector
NASA Astrophysics Data System (ADS)
Kawahara, N.; Kintaka, K.; Tomita, E.
2017-02-01
We used a multi-hole injector to spray isooctane under atmospheric conditions and observed droplet impingement behaviors. It is generally known that droplet impact regimes such as splashing, deposition, or bouncing are governed by the Weber number. However, owing to its complexity, little has been reported on microscopic visualization of poly-dispersed spray. During the spray impingement process, a large number of droplets approach, hit, then interact with the wall. It is therefore difficult to focus on a single droplet and observe the impingement process. We solved this difficulty using high-speed microscopic visualization. The spray/wall interaction processes were recorded by a high-speed camera (Shimadzu HPV-X2) with a long-distance microscope. We captured several impinging microscopic droplets. After optimizing the magnification and frame rate, the atomization behaviors, splashing and deposition, were recorded. Then, we processed the images obtained to determine droplet parameters such as the diameter, velocity, and impingement angle. Based on this information, the critical threshold between splashing and deposition was investigated in terms of the normal and parallel components of the Weber number with respect to the wall. The results suggested that, on a dry wall, we should set the normal critical Weber number to 300.
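For reference, the Weber number used here is We = rho * d * U^2 / sigma; splitting the impact velocity into wall-normal and wall-parallel parts gives the two components mentioned above. The sketch below uses approximate isooctane-like property values and an example droplet; these numbers are assumptions for illustration, not the paper's measurements.

```python
import numpy as np

# Approximate fluid properties for isooctane at room temperature (assumed values).
rho = 690.0      # density, kg/m^3
sigma = 0.0187   # surface tension, N/m

def weber_components(diameter, speed, angle_from_wall_deg):
    """Weber number split into wall-normal and wall-parallel components.

    We = rho * d * U^2 / sigma, evaluated with the normal and parallel
    velocity components of a droplet hitting the wall at the given angle
    (angle measured from the wall surface).
    """
    theta = np.radians(angle_from_wall_deg)
    u_normal = speed * np.sin(theta)
    u_parallel = speed * np.cos(theta)
    we_n = rho * diameter * u_normal ** 2 / sigma
    we_p = rho * diameter * u_parallel ** 2 / sigma
    return we_n, we_p

# Example droplet: 20 micrometres in diameter, 30 m/s, hitting the wall at 60 degrees.
we_n, we_p = weber_components(20e-6, 30.0, 60.0)
print(f"We_normal = {we_n:.0f}, We_parallel = {we_p:.0f}")
```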
Ruotolo, Francesco; Ruggiero, Gennaro; Vinciguerra, Michela; Iachini, Tina
2012-02-01
The aim of this research is to assess whether the crucial factor in determining the characteristics of blind people's spatial mental images is concerned with the visual impairment per se or the processing style that the dominant perceptual modalities used to acquire spatial information impose, i.e. simultaneous (vision) vs sequential (kinaesthesis). Participants were asked to learn six positions in a large parking area via movement alone (congenitally blind, adventitiously blind, blindfolded sighted) or with vision plus movement (simultaneous sighted, sequential sighted), and then to mentally scan between positions in the path. The crucial manipulation concerned the sequential sighted group. Their visual exploration was made sequential by putting visual obstacles within the pathway in such a way that they could not see simultaneously the positions along the pathway. The results revealed a significant time/distance linear relation in all tested groups. However, the linear component was lower in sequential sighted and blind participants, especially congenital. Sequential sighted and congenitally blind participants showed an almost overlapping performance. Differences between groups became evident when mentally scanning farther distances (more than 5m). This threshold effect could be revealing of processing limitations due to the need of integrating and updating spatial information. Overall, the results suggest that the characteristics of the processing style rather than the visual impairment per se affect blind people's spatial mental images. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Gauvin St-Denis, B.; Landry, T.; Huard, D. B.; Byrns, D.; Chaumont, D.; Foucher, S.
2017-12-01
As the number of scientific studies and policy decisions requiring tailored climate information continues to increase, the demand for support from climate service centers to provide the latest information in the format most helpful for the end-user is also on the rise. Ouranos, one such organization based in Montreal, has partnered with the Centre de recherche informatique de Montreal (CRIM) to develop a platform that will offer climate data products that have been identified as most useful for users through years of consultation. The platform is built as modular components that target the various requirements of climate data analysis. The data components host and catalog NetCDF data as well as geographical and political delimitations. The analysis components are made available as atomic operations through Web Processing Service (WPS) or as workflows, whereby the operations are chained through a simple JSON structure and executed on a distributed network of computing resources. The visualization components range from Web Map Service (WMS) to a complete frontend for searching the data, launching workflows and interacting with maps of the results. Each component can easily be deployed and executed as an independent service through the use of Docker technology, and a proxy is available to regulate user workspaces and access permissions. PAVICS includes various components from birdhouse, a collection of WPS services initially developed by the German Climate Research Center (DKRZ) and Institut Pierre Simon Laplace (IPSL), and is designed to be highly interoperable with other WPS services as well as many Open Geospatial Consortium (OGC) standards. Further connectivity is made with the Earth System Grid Federation (ESGF) nodes and local results are made searchable using the same API terminology. Other projects conducted by CRIM that integrate with PAVICS include the OGC Testbed 13 Innovation Program (IP) initiative that will enhance advanced cloud capabilities and application packaging deployment processes, as well as enable Earth Observation (EO) processes relevant to climate. As part of its experimental agenda, working implementations of scalable machine learning on big climate data with Spark and SciSpark were delivered.
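A hedged sketch of how a client might invoke one such atomic WPS operation using OWSLib; the endpoint URL, process identifier, and input names below are hypothetical placeholders, not the platform's actual API.

```python
from owslib.wps import WebProcessingService

# Hypothetical endpoint and process name -- placeholders, not the real service.
wps = WebProcessingService("https://example.org/wps")

# Discover the processes advertised by the service.
for process in wps.processes:
    print(process.identifier, "-", process.title)

# Execute one process with keyword-style inputs (names and values are illustrative).
execution = wps.execute(
    "subset_bbox",
    inputs=[("resource", "https://example.org/data/tasmax.nc"),
            ("lat0", "45.0"), ("lat1", "50.0"),
            ("lon0", "-75.0"), ("lon1", "-70.0")],
)
print(execution.status)
```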
Kuich, P. Henning J. L.; Hoffmann, Nils; Kempa, Stefan
2015-01-01
A current bottleneck in GC–MS metabolomics is the processing of raw machine data into a final datamatrix that contains the quantities of identified metabolites in each sample. While there are many bioinformatics tools available to aid the initial steps of the process, their use requires both significant technical expertise and a subsequent manual validation of identifications and alignments if high data quality is desired. The manual validation is tedious and time consuming, becoming prohibitively so as sample numbers increase. We have, therefore, developed Maui-VIA, a solution based on a visual interface that allows experts and non-experts to simultaneously and quickly process, inspect, and correct large numbers of GC–MS samples. It allows for the visual inspection of identifications and alignments, facilitating a unique and, due to its visualization and keyboard shortcuts, very fast interaction with the data. Therefore, Maui-VIA fills an important niche by (1) providing functionality that optimizes the component of data processing that is currently most labor intensive, in order to save time, and (2) lowering the threshold of expertise required to process GC–MS data. Maui-VIA projects are initiated with baseline-corrected raw data, peaklists, and a database of metabolite spectra and retention indices used for identification. It provides functionality for retention index calculation, a targeted library search, the visual annotation, alignment, and correction interface, and metabolite quantification, as well as the export of the final datamatrix. The high quality of data produced by Maui-VIA is illustrated by its comparison to data obtained manually by an expert using vendor software on a previously published dataset concerning the response of Chlamydomonas reinhardtii to salt stress. In conclusion, Maui-VIA provides the opportunity for fast, confident, and high-quality processing and validation of large numbers of GC–MS samples by non-experts. PMID:25654076
ERIC Educational Resources Information Center
Vahter, Edna
2015-01-01
In 2010, the renewed national curriculum was legislated in Estonia. Major changes include a new list of cross-curricular topics, increased importance of integration and specification of the components of the art learning process. In this situation, the question arises--how to fully implement the challenges of the renewed curriculum in primary…
Artful Language: Academic Writing for the Art Student
ERIC Educational Resources Information Center
Apps, Linda; Mamchur, Carolyn
2009-01-01
The task of writing about the process of making and contextualising art can be overwhelming for some graduate students. While the challenge may be due in part to limited time and attention to the practice of writing, in a practice-based arts thesis there is a deeper issue: how the visual and written components are attended to in a manner that…
Tuning the mind: Exploring the connections between musical ability and executive functions.
Slevc, L Robert; Davey, Nicholas S; Buschkuehl, Martin; Jaeggi, Susanne M
2016-07-01
A growing body of research suggests that musical experience and ability are related to a variety of cognitive abilities, including executive functioning (EF). However, it is not yet clear if these relationships are limited to specific components of EF, limited to auditory tasks, or reflect very general cognitive advantages. This study investigated the existence and generality of the relationship between musical ability and EFs by evaluating the musical experience and ability of a large group of participants and investigating whether this predicts individual differences on three different components of EF - inhibition, updating, and switching - in both auditory and visual modalities. Musical ability predicted better performance on both auditory and visual updating tasks, even when controlling for a variety of potential confounds (age, handedness, bilingualism, and socio-economic status). However, musical ability was not clearly related to inhibitory control and was unrelated to switching performance. These data thus show that cognitive advantages associated with musical ability are not limited to auditory processes, but are limited to specific aspects of EF. This supports a process-specific (but modality-general) relationship between musical ability and non-musical aspects of cognition. Copyright © 2016 Elsevier B.V. All rights reserved.
Liu, B; Wang, Z; Wu, G; Meng, X
2011-04-28
In this paper, we aim to study the cognitive integration of asynchronous natural or non-natural auditory and visual information in videos of real-world events. Videos with asynchronous semantically consistent or inconsistent natural sound or speech were used as stimuli in order to compare the difference and similarity between multisensory integrations of videos with asynchronous natural sound and speech. The event-related potential (ERP) results showed that N1 and P250 components were elicited irrespective of whether natural sounds were consistent or inconsistent with critical actions in videos. Videos with inconsistent natural sound could elicit N400-P600 effects compared to videos with consistent natural sound, which was similar to the results from unisensory visual studies. Videos with semantically consistent or inconsistent speech could both elicit N1 components. Meanwhile, videos with inconsistent speech would elicit N400-LPN effects in comparison with videos with consistent speech, which showed that this semantic processing was probably related to recognition memory. Moreover, the N400 effect elicited by videos with semantically inconsistent speech was larger and later than that elicited by videos with semantically inconsistent natural sound. Overall, multisensory integration of videos with natural sound or speech could be roughly divided into two stages. For the videos with natural sound, the first stage might reflect the connection between the received information and the stored information in memory; and the second one might stand for the evaluation process of inconsistent semantic information. For the videos with speech, the first stage was similar to the first stage of videos with natural sound; while the second one might be related to recognition memory process. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
A novel BCI based on ERP components sensitive to configural processing of human faces
NASA Astrophysics Data System (ADS)
Zhang, Yu; Zhao, Qibin; Jing, Jin; Wang, Xingyu; Cichocki, Andrzej
2012-04-01
This study introduces a novel brain-computer interface (BCI) based on an oddball paradigm using stimuli of facial images with loss of configural face information (e.g., inversion of face). To the best of our knowledge, till now the configural processing of human faces has not been applied to BCI but widely studied in cognitive neuroscience research. Our experiments confirm that the face-sensitive event-related potential (ERP) components N170 and vertex positive potential (VPP) have reflected early structural encoding of faces and can be modulated by the configural processing of faces. With the proposed novel paradigm, we investigate the effects of ERP components N170, VPP and P300 on target detection for BCI. An eight-class BCI platform is developed to analyze ERPs and evaluate the target detection performance using linear discriminant analysis without complicated feature extraction processing. The online classification accuracy of 88.7% and information transfer rate of 38.7 bits min-1 using stimuli of inverted faces with only single trial suggest that the proposed paradigm based on the configural processing of faces is very promising for visual stimuli-driven BCI applications.
A novel BCI based on ERP components sensitive to configural processing of human faces.
Zhang, Yu; Zhao, Qibin; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej
2012-04-01
This study introduces a novel brain-computer interface (BCI) based on an oddball paradigm using stimuli of facial images with loss of configural face information (e.g., inversion of face). To the best of our knowledge, till now the configural processing of human faces has not been applied to BCI but widely studied in cognitive neuroscience research. Our experiments confirm that the face-sensitive event-related potential (ERP) components N170 and vertex positive potential (VPP) have reflected early structural encoding of faces and can be modulated by the configural processing of faces. With the proposed novel paradigm, we investigate the effects of ERP components N170, VPP and P300 on target detection for BCI. An eight-class BCI platform is developed to analyze ERPs and evaluate the target detection performance using linear discriminant analysis without complicated feature extraction processing. The online classification accuracy of 88.7% and information transfer rate of 38.7 bits min(-1) using stimuli of inverted faces with only single trial suggest that the proposed paradigm based on the configural processing of faces is very promising for visual stimuli-driven BCI applications.
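For context, the information transfer rate of an N-class BCI is typically computed with the Wolpaw formula. The sketch below evaluates bits per selection for the reported eight classes and 88.7% accuracy, then converts to bits per minute for an assumed selection time; the selection time is an illustrative assumption, not the paper's reported trial timing.

```python
import math

def wolpaw_itr_bits_per_selection(n_classes, accuracy):
    """Wolpaw information transfer rate in bits per selection."""
    p = accuracy
    bits = math.log2(n_classes)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_classes - 1))
    return bits

bits_per_sel = wolpaw_itr_bits_per_selection(n_classes=8, accuracy=0.887)

# Converting to bits/min requires the selection rate; 3.4 s per selection is an
# assumed value for illustration only.
selection_time_s = 3.4
print(f"{bits_per_sel:.2f} bits/selection, "
      f"{bits_per_sel * 60 / selection_time_s:.1f} bits/min at {selection_time_s} s per selection")
```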
A neural correlate of working memory in the monkey primary visual cortex.
Supèr, H; Spekreijse, H; Lamme, V A
2001-07-06
The brain frequently needs to store information for short periods. In vision, this means that the perceptual correlate of a stimulus has to be maintained temporally once the stimulus has been removed from the visual scene. However, it is not known how the visual system transfers sensory information into a memory component. Here, we identify a neural correlate of working memory in the monkey primary visual cortex (V1). We propose that this component may link sensory activity with memory activity.
Laumen, Geneviève; Tollin, Daniel J.; Beutelmann, Rainer; Klump, Georg M.
2016-01-01
The effect of interaural time difference (ITD) and interaural level difference (ILD) on wave 4 of the binaural and summed monaural auditory brainstem responses (ABRs) as well as on the DN1 component of the binaural interaction component (BIC) of the ABR in young and old Mongolian gerbils (Meriones unguiculatus) was investigated. Measurements were made at a fixed sound pressure level (SPL) and a fixed level above visually detected ABR threshold to compensate for individual hearing threshold differences. In both stimulation modes (fixed SPL and fixed level above visually detected ABR threshold) an effect of ITD on the latency and the amplitude of wave 4 as well as of the BIC was observed. With increasing absolute ITD values BIC latencies were increased and amplitudes were decreased. ILD had a much smaller effect on these measures. Old animals showed a reduced amplitude of the DN1 component. This difference was due to a smaller wave 4 in the summed monaural ABRs of old animals compared to young animals whereas wave 4 in the binaural-evoked ABR showed no age-related difference. In old animals the small amplitude of the DN1 component was correlated with small binaural-evoked wave 1 and wave 3 amplitudes. This suggests that the reduced peripheral input affects central binaural processing which is reflected in the BIC. PMID:27173973
Effect of attentional load on audiovisual speech perception: evidence from ERPs.
Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa
2014-01-01
Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.
Two forms of persistence in visual information processing.
Di Lollo, Vincent; Dixon, Peter
1988-11-01
Iconic memory, which was initially regarded as a unitary phenomenon, has since been subdivided into several components. In the present work we examined the joint effects of two such components (visible persistence and the visual analog representation) on performance in a partial report task. The display consisted of 15 alphabetic characters arranged around the perimeter of an imaginary circle on the face of an oscilloscope. The observer named the character singled out by a bar-probe. Two factors were varied: exposure duration of the array (10, 50, 100, 150, 200, 300, 400 or 500 ms) and duration of blank period (interstimulus interval, ISI) between the termination of the array and the onset of the probe (0, 50, 100, 150, or 200 ms). Performance was progressively impaired as both exposure duration and ISI were increased. The results were explained in terms of a probabilistic combinatorial model in which the timecourses of visible persistence and of the visual analog representation are regarded as time-locked to the onset and to the end of stimulation, respectively. The impairing effect of exposure duration was attributed to the relatively high spatial demands of the task that could be met optimally by information in visible persistence (which declines as a function of exposure duration), but less adequately by information in the visual analog representation. A second experiment, employing a task with lesser spatial demands, confirmed this interpretation.
Colour image segmentation using unsupervised clustering technique for acute leukemia images
NASA Astrophysics Data System (ADS)
Halim, N. H. Abd; Mashor, M. Y.; Nasir, A. S. Abdul; Mustafa, N.; Hassan, R.
2015-05-01
Colour image segmentation has become more popular in computer vision because it is an important process in most medical analysis tasks. This paper proposes a comparison between different colour components of the RGB (red, green, blue) and HSI (hue, saturation, intensity) colour models that will be used in order to segment acute leukemia images. First, partial contrast stretching is applied to the leukemia images to increase the visual aspect of the blast cells. Then, an unsupervised moving k-means clustering algorithm is applied to the various colour components of the RGB and HSI colour models for the purpose of segmenting blast cells from the red blood cells and background regions in the leukemia image. Different colour components of the RGB and HSI colour models have been analyzed in order to identify the colour component that gives the best segmentation performance. The segmented images are then processed using a median filter and a region growing technique to reduce noise and smooth the images. The results show that segmentation using the saturation component of the HSI colour model has proven to be the best at segmenting the nucleus of the blast cells in acute leukemia images as compared to the other colour components of the RGB and HSI colour models.
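A rough sketch of the saturation-channel clustering step (the contrast stretching, median filtering, and region growing stages are omitted). Plain k-means from scikit-learn stands in for the moving k-means variant, and the cluster count and synthetic input are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def hsi_saturation(rgb):
    """Saturation channel of the HSI model: S = 1 - 3*min(R,G,B)/(R+G+B)."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=-1) + 1e-8
    return 1.0 - 3.0 * rgb.min(axis=-1) / total

def segment_saturation(rgb_image, n_clusters=3):
    """Cluster pixels of the saturation channel; plain k-means is a stand-in
    for the moving k-means algorithm described in the paper."""
    sat = hsi_saturation(rgb_image).reshape(-1, 1)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(sat)
    return labels.reshape(rgb_image.shape[:2])

# Synthetic stand-in for a stained blood smear image (random colours).
rng = np.random.default_rng(4)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
mask = segment_saturation(image)
print(np.unique(mask))
```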
Behavioural benefits of multisensory processing in ferrets.
Hammond-Kenny, Amy; Bajo, Victoria M; King, Andrew J; Nodal, Fernando R
2017-01-01
Enhanced detection and discrimination, along with faster reaction times, are the most typical behavioural manifestations of the brain's capacity to integrate multisensory signals arising from the same object. In this study, we examined whether multisensory behavioural gains are observable across different components of the localization response that are potentially under the command of distinct brain regions. We measured the ability of ferrets to localize unisensory (auditory or visual) and spatiotemporally coincident auditory-visual stimuli of different durations that were presented from one of seven locations spanning the frontal hemifield. During the localization task, we recorded the head movements made following stimulus presentation, as a metric for assessing the initial orienting response of the ferrets, as well as the subsequent choice of which target location to approach to receive a reward. Head-orienting responses to auditory-visual stimuli were more accurate and faster than those made to visual but not auditory targets, suggesting that these movements were guided principally by sound alone. In contrast, approach-to-target localization responses were more accurate and faster to spatially congruent auditory-visual stimuli throughout the frontal hemifield than to either visual or auditory stimuli alone. Race model inequality analysis of head-orienting reaction times and approach-to-target response times indicates that different processes, probability summation and neural integration, respectively, are likely to be responsible for the effects of multisensory stimulation on these two measures of localization behaviour. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
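The race model inequality analysis mentioned above can be sketched as follows. This is a minimal illustration of Miller's probability-summation bound applied to hypothetical reaction-time data, not the authors' analysis code; positive return values indicate violations of the bound consistent with neural integration rather than probability summation.

import numpy as np

def race_model_violation(rt_audio, rt_visual, rt_av, quantiles=np.linspace(0.05, 0.95, 19)):
    """Check the race model inequality: P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t)."""
    rt_audio, rt_visual, rt_av = map(np.asarray, (rt_audio, rt_visual, rt_av))
    t = np.quantile(rt_av, quantiles)  # evaluation times from the multisensory condition
    cdf = lambda rt: np.searchsorted(np.sort(rt), t, side="right") / rt.size
    bound = np.minimum(cdf(rt_audio) + cdf(rt_visual), 1.0)
    return cdf(rt_av) - bound  # > 0 at any quantile means the race model is violated

# Hypothetical reaction times in milliseconds:
rng = np.random.default_rng(0)
a, v = rng.normal(320, 40, 200), rng.normal(340, 45, 200)
av = rng.normal(280, 35, 200)
print(race_model_violation(a, v, av))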
Vigilance and iconic memory in children at high risk for alcoholism.
Steinhauer, S R; Locke, J; Hill, S Y
1997-07-01
Previous studies report reduced visual event-related potential (ERP) amplitudes in young males at high risk for alcoholism. These findings could involve difficulties at several stages of visual processing. This study was aimed at examining vigilance performance and iconic memory functions in children at high risk or low risk for alcoholism. Sustained vigilance and retrieval from iconic memory were evaluated in 54 (29 male) white children at high risk and 47 (25 male) white children at low risk for developing alcoholism. Children were also grouped according to gender and age (younger: 8-12 years; older: 13-18 years). No differences in visual sensitivity, response criterion, or reaction time were associated with risk status on the degraded visual stimulus version of the Continuous Performance Test. For the Span of Apprehension, no differences were found due to risk status when only 1 or 5 distractors were presented, although with 9 distractors a significant effect of risk status was found when it was tested as an interaction with gender and age (decreased accuracy for older high-risk boys compared to older low-risk boys). These findings suggest that the ERP deviations are not attributable to deficits at early stages of visual processing, but instead reflect difficulty with more complex utilization of information. Implications of these results are that the differences between high- and low-risk children that have been reported previously for visual ERP components (e.g., P300) are not attributable to deficits of attentional or iconic memory mechanisms.
Roldan, Stephanie M
2017-01-01
One of the fundamental goals of object recognition research is to understand how a cognitive representation produced from the output of filtered and transformed sensory information facilitates efficient viewer behavior. Given that mental imagery strongly resembles perceptual processes in both cortical regions and subjective visual qualities, it is reasonable to question whether mental imagery facilitates cognition in a manner similar to that of perceptual viewing: via the detection and recognition of distinguishing features. Categorizing the feature content of mental imagery holds potential as a reverse pathway by which to identify the components of a visual stimulus which are most critical for the creation and retrieval of a visual representation. This review will examine the likelihood that the information represented in visual mental imagery reflects distinctive object features thought to facilitate efficient object categorization and recognition during perceptual viewing. If it is the case that these representational features resemble their sensory counterparts in both spatial and semantic qualities, they may well be accessible through mental imagery as evaluated through current investigative techniques. In this review, methods applied to mental imagery research and their findings are reviewed and evaluated for their efficiency in accessing internal representations, and implications for identifying diagnostic features are discussed. An argument is made for the benefits of combining mental imagery assessment methods with diagnostic feature research to advance the understanding of visual perceptive processes, with suggestions for avenues of future investigation.
Visual training improves perceptual grouping based on basic stimulus features.
Kurylo, Daniel D; Waxman, Richard; Kidron, Rachel; Silverstein, Steven M
2017-10-01
Training on visual tasks improves performance on basic and higher order visual capacities. Such improvement has been linked to changes in connectivity among mediating neurons. We investigated whether training effects occur for perceptual grouping. It was hypothesized that repeated engagement of integration mechanisms would enhance grouping processes. Thirty-six participants underwent 15 sessions of training on a visual discrimination task that required perceptual grouping. Participants viewed 20 × 20 arrays of dots or Gabor patches and indicated whether the array appeared grouped as vertical or horizontal lines. Across trials stimuli became progressively disorganized, contingent upon successful discrimination. Four visual dimensions were examined, in which grouping was based on similarity in luminance, color, orientation, and motion. Psychophysical thresholds of grouping were assessed before and after training. Results indicate that performance in all four dimensions improved with training. Training on a control condition, which paralleled the discrimination task but without a grouping component, produced no improvement. In addition, training on only the luminance and orientation dimensions improved performance for those conditions as well as for grouping by color, on which training had not occurred. However, improvement from partial training did not generalize to motion. Results demonstrate that a training protocol emphasizing stimulus integration enhanced perceptual grouping. Results suggest that neural mechanisms mediating grouping by common luminance and/or orientation contribute to those mediating grouping by color but do not share resources for grouping by common motion. Results are consistent with theories of perceptual learning emphasizing plasticity in early visual processing regions.
Overview of EVE - the event visualization environment of ROOT
NASA Astrophysics Data System (ADS)
Tadel, Matevž
2010-04-01
EVE is a high-level visualization library using ROOT's data-processing, GUI and OpenGL interfaces. It is designed as a framework for object management offering hierarchical data organization, object interaction and visualization via GUI and OpenGL representations. Automatic creation of 2D projected views is also supported. On the other hand, it can serve as an event visualization toolkit satisfying most HEP requirements: visualization of geometry, simulated and reconstructed data such as hits, clusters, tracks and calorimeter information. Special classes are available for visualization of raw data. The object-interaction layer allows for easy selection and highlighting of objects and their derived representations (projections) across several views (3D, Rho-Z, R-Phi). Object-specific tooltips are provided in both GUI and GL views. The visual-configuration layer of EVE is built around a database of template objects that can be applied to specific instances of visualization objects to ensure consistent object presentation. The database can be retrieved from a file, edited during framework operation and stored back to file. The EVE prototype was developed within the ALICE collaboration and was included in ROOT in December 2007. Since then all EVE components have reached maturity. EVE is used as the base of the AliEve visualization framework in ALICE, the Fireworks physics-oriented event display in CMS, and as the visualization engine of FairRoot in FAIR.
On the Encoding of Panoramic Visual Scenes in Navigating Wood Ants.
Buehlmann, Cornelia; Woodgate, Joseph L; Collett, Thomas S
2016-08-08
A natural visual panorama is a complex stimulus formed of many component shapes. It gives an animal a sense of place and supplies guiding signals for controlling the animal's direction of travel [1]. Insects with their economical neural processing [2] are good subjects for analyzing the encoding and memory of such scenes [3-5]. Honeybees [6] and ants [7, 8] foraging from their nest can follow habitual routes guided only by visual cues within a natural panorama. Here, we analyze the headings that ants adopt when a familiar panorama composed of two or three shapes is manipulated by removing a shape or by replacing training shapes with unfamiliar ones. We show that (1) ants recognize a component shape not only through its particular visual features, but also by its spatial relation to other shapes in the scene, and that (2) each segmented shape [9] contributes its own directional signal to generating the ant's chosen heading. We found earlier that ants trained to a feeder placed to one side of a single shape [10] and tested with shapes of different widths learn the retinal position of the training shape's center of mass (CoM) [11, 12] when heading toward the feeder. They then guide themselves by placing the shape's CoM in the remembered retinal position [10]. This use of CoM in a one-shape panorama combined with the results here suggests that the ants' memory of a multi-shape panorama comprises the retinal positions of the horizontal CoMs of each major component shape within the scene, bolstered by local descriptors of that shape. Copyright © 2016 Elsevier Ltd. All rights reserved.
Motion processing with two eyes in three dimensions.
Rokers, Bas; Czuba, Thaddeus B; Cormack, Lawrence K; Huk, Alexander C
2011-02-11
The movement of an object toward or away from the head is perhaps the most critical piece of information an organism can extract from its environment. Such 3D motion produces horizontally opposite motions on the two retinae. Little is known about how or where the visual system combines these two retinal motion signals, relative to the wealth of knowledge about the neural hierarchies involved in 2D motion processing and binocular vision. Canonical conceptions of primate visual processing assert that neurons early in the visual system combine monocular inputs into a single cyclopean stream (lacking eye-of-origin information) and extract 1D ("component") motions; later stages then extract 2D pattern motion from the cyclopean output of the earlier stage. Here, however, we show that 3D motion perception is in fact affected by the comparison of opposite 2D pattern motions between the two eyes. Three-dimensional motion sensitivity depends systematically on pattern motion direction when dichoptically viewing gratings and plaids, and a novel "dichoptic pseudoplaid" stimulus provides strong support for use of interocular pattern motion differences by precluding potential contributions from conventional disparity-based mechanisms. These results imply the existence of eye-of-origin information in later stages of motion processing and therefore motivate the incorporation of such eye-specific pattern-motion signals in models of motion processing and binocular integration.
Dissociation of neural mechanisms underlying orientation processing in humans
Ling, Sam; Pearson, Joel; Blake, Randolph
2009-01-01
Orientation selectivity is a fundamental, emergent property of neurons in early visual cortex, and discovery of that property [1, 2] dramatically shaped how we conceptualize visual processing [3–6]. However, much remains unknown about the neural substrates of these basic building blocks of perception, and what is known primarily stems from animal physiology studies. To probe the neural concomitants of orientation processing in humans, we employed repetitive transcranial magnetic stimulation (rTMS) to attenuate neural responses evoked by stimuli presented within a local region of the visual field. Previous physiological studies have shown that rTMS can significantly suppress the neuronal spiking activity, hemodynamic responses, and local field potentials within a focused cortical region [7, 8]. By suppressing neural activity with rTMS, we were able to dissociate components of the neural circuitry underlying two distinct aspects of orientation processing: selectivity and contextual effects. Orientation selectivity gauged by masking was unchanged by rTMS, whereas an otherwise robust orientation repulsion illusion was weakened following rTMS. This dissociation implies that orientation processing relies on distinct mechanisms, only one of which was impacted by rTMS. These results are consistent with models positing that orientation selectivity is largely governed by the patterns of convergence of thalamic afferents onto cortical neurons, with intracortical activity then shaping population responses contained within those orientation-selective cortical neurons. PMID:19682905
Revealing 3D Ultrastructure and Morphology of Stem Cell Spheroids by Electron Microscopy.
Jaros, Josef; Petrov, Michal; Tesarova, Marketa; Hampl, Ales
2017-01-01
Cell culture methods have been developed in efforts to produce biologically relevant systems for developmental and disease modeling, and appropriate analytical tools are essential. Knowledge of ultrastructural characteristics provides the basis for revealing, in situ, the cellular morphology, cell-cell interactions, organelle distribution, the niches in which cells reside, and much more. The traditional method for 3D visualization of ultrastructural components, serial sectioning using transmission electron microscopy (TEM), is very labor-intensive owing to the demanding preparation of serial TEM sections and the subsequent image processing of the whole collection. In this chapter, we present serial block-face scanning electron microscopy, together with complex methodology for spheroid formation, contrasting of cellular compartments, image processing, and 3D visualization. The described technique is effective for detailed morphological analysis of stem cell spheroids, organoids, as well as organotypic cell cultures.
Percolation under noise: Detecting explosive percolation using the second-largest component
NASA Astrophysics Data System (ADS)
Viles, Wes; Ginestet, Cedric E.; Tang, Ariana; Kramer, Mark A.; Kolaczyk, Eric D.
2016-05-01
We consider the problem of distinguishing between different rates of percolation under noise. A statistical model of percolation is constructed allowing for the birth and death of edges as well as the presence of noise in the observations. This graph-valued stochastic process is composed of a latent and an observed nonstationary process, where the observed graph process is corrupted by type-I and type-II errors. This produces a hidden Markov graph model. We show that for certain choices of parameters controlling the noise, the classical (Erdős-Rényi) percolation is visually indistinguishable from a more rapid form of percolation. In this setting, we compare two different criteria for discriminating between these two percolation models, based on the interquartile range (IQR) of the first component's size, and on the maximal size of the second-largest component. We show through data simulations that this second criterion outperforms the IQR of the first component's size, in terms of discriminatory power. The maximal size of the second component therefore provides a useful statistic for distinguishing between different rates of percolation, under physically motivated conditions for the birth and death of edges, and under noise. The potential application of the proposed criteria for the detection of clinically relevant percolation in the context of applied neuroscience is also discussed.
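The second criterion can be illustrated with a short Python sketch that tracks the size of the second-largest connected component while edges are added uniformly at random. The noise (type-I/type-II error) process and the authors' exact birth-death dynamics are omitted, and the parameters are illustrative; the discriminating statistic is the maximum of the recorded trace over the process.

import random
import networkx as nx

def second_largest_trace(n=500, seed=0):
    """Add edges uniformly at random and record the second-largest component size."""
    rng = random.Random(seed)
    g = nx.empty_graph(n)
    trace = []
    for _ in range(n):  # add n random edges, one per step
        u, v = rng.sample(range(n), 2)
        g.add_edge(u, v)
        sizes = sorted((len(c) for c in nx.connected_components(g)), reverse=True)
        trace.append(sizes[1] if len(sizes) > 1 else 0)
    return trace

trace = second_largest_trace()
print(max(trace))  # maximal size of the second-largest component over the process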
Attention Modulates TMS-Locked Alpha Oscillations in the Visual Cortex.
Herring, Jim D; Thut, Gregor; Jensen, Ole; Bergmann, Til O
2015-10-28
Cortical oscillations, such as 8-12 Hz alpha-band activity, are thought to subserve gating of information processing in the human brain. While most of the supporting evidence is correlational, causal evidence comes from attempts to externally drive ("entrain") these oscillations by transcranial magnetic stimulation (TMS). Indeed, the frequency profile of TMS-evoked potentials (TEPs) closely resembles that of oscillations spontaneously emerging in the same brain region. However, it is unclear whether TMS-locked and spontaneous oscillations are produced by the same neuronal mechanisms. If so, they should react in a similar manner to top-down modulation by endogenous attention. To test this prediction, we assessed the alpha-like EEG response to TMS of the visual cortex during periods of high and low visual attention while participants attended to either the visual or auditory modality in a cross-modal attention task. We observed a TMS-locked local oscillatory alpha response lasting several cycles after TMS (but not after sham stimulation). Importantly, TMS-locked alpha power was suppressed during deployment of visual relative to auditory attention, mirroring spontaneous alpha amplitudes. In addition, the early N40 TEP component, located at the stimulation site, was amplified by visual attention. The extent of attentional modulation for both TMS-locked alpha power and N40 amplitude did depend, with opposite sign, on the individual ability to modulate spontaneous alpha power at the stimulation site. We therefore argue that TMS-locked and spontaneous oscillations are of common neurophysiological origin, whereas the N40 TEP component may serve as an index of current cortical excitability at the time of stimulation. Copyright © 2015 Herring et al.
Jerath, Ravinder; Cearley, Shannon M; Barnes, Vernon A; Jensen, Mike
2018-01-01
A fundamental function of the visual system is detecting motion, yet visual perception is poorly understood. Current research has determined that the retina and ganglion cells elicit responses for motion detection; however, the underlying mechanism for this is incompletely understood. Previously we proposed that retinogeniculo-cortical oscillations and photoreceptors work in parallel to process vision. Here we propose that motion could also be processed within the retina, and not in the brain as current theory suggests. In this paper, we discuss: 1) internal neural space formation; 2) primary, secondary, and tertiary roles of vision; 3) gamma as the secondary role; and 4) synchronization and coherence. Movement within the external field is instantly detected by primary processing within the space formed by the retina, providing a unified view of the world from an internal point of view. Our new theory begins to answer questions about: 1) perception of space, erect images, and motion, 2) the purpose of lateral inhibition, 3) the speed of visual perception, and 4) how peripheral color vision occurs without a large population of cones located peripherally in the retina. We explain that strong oscillatory activity influences brain activity and is necessary for: 1) visual processing, and 2) formation of the internal visuospatial area required for visual consciousness, which could allow rods to receive precise visual and visuospatial information, while retinal waves could link the lateral geniculate body with the cortex to form a neural space based on membrane-potential oscillations and photoreceptors. We propose that vision is tripartite, with three components that allow a person to make sense of the world, terming them "primary, secondary, and tertiary roles" of vision. Finally, we propose that gamma waves of greater strength and volume allow communication among the retina, thalamus, and various areas of the cortex; synchronization brings cortical faculties to the retina, while the thalamus is the link that couples the retina to the rest of the brain through gamma-oscillatory activity. This novel theory lays the groundwork for further research by providing a theoretical understanding that expands upon the functions of the retina, photoreceptors, and retinal plexus to include the parallel processing needed to form the internal visual space that we perceive as the external world. Copyright © 2017 Elsevier Ltd. All rights reserved.
Wang, Jie-sheng; Han, Shuang; Shen, Na-na
2014-01-01
For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, a fusion soft-sensor model based on an echo state network (ESN) and optimized by an improved glowworm swarm optimization (GSO) algorithm is proposed. First, color features (saturation and brightness) and texture features (angular second moment, sum entropy, inertia moment, etc.) based on the grey-level co-occurrence matrix (GLCM) are adopted to describe the visual characteristics of the flotation froth image. The kernel principal component analysis (KPCA) method is then used to reduce the dimensionality of the high-dimensional input vector composed of the froth-image characteristics and process data, extracting the nonlinear principal components in order to reduce the ESN dimension and network complexity. The ESN soft-sensor model of the flotation process is optimized by a GSO algorithm with a congestion factor. Simulation results show that the model has good generalization and prediction accuracy, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:24982935
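A rough Python sketch of the two computational stages, kernel PCA for nonlinear dimensionality reduction followed by an echo state network readout, is given below. It uses randomly generated stand-in data, a plain ridge-regression readout, and fixed reservoir hyperparameters instead of the GSO-optimized ones, so it only indicates the structure of the model, not the authors' implementation; the printed score is meaningless for random data and is shown only to complete the pipeline.

import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Stand-ins for froth-image features plus process measurements (rows = time steps).
X_raw = rng.normal(size=(600, 12))
y = rng.normal(size=(600, 2))  # e.g. concentrate grade, tailings recovery rate

# Kernel PCA reduces the high-dimensional input to a few nonlinear principal components.
X = KernelPCA(n_components=4, kernel="rbf", gamma=0.1).fit_transform(X_raw)

# Minimal echo state network: fixed random reservoir, trained linear readout.
n_res, leak, rho = 100, 0.3, 0.9
W_in = rng.uniform(-0.5, 0.5, size=(n_res, X.shape[1]))
W = rng.normal(size=(n_res, n_res))
W *= rho / max(abs(np.linalg.eigvals(W)))  # rescale to the target spectral radius

states = np.zeros((len(X), n_res))
x = np.zeros(n_res)
for t, u in enumerate(X):
    x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
    states[t] = x

readout = Ridge(alpha=1e-3).fit(states[:500], y[:500])
print(readout.score(states[500:], y[500:]))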
Time-Domain Receiver Function Deconvolution using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Moreira, L. P.
2017-12-01
Receiver Functions (RF) are a well-known method for crustal modelling using passive seismological signals. Many different techniques have been developed to calculate the RF traces, applying a deconvolution to the radial and vertical seismogram components. A popular method uses a spectral division of the two components, which requires human intervention to apply a water-level procedure to avoid instabilities from division by small numbers. One of the most widely used methods is an iterative procedure that estimates the RF peaks and convolves them with the vertical-component seismogram, comparing the result with the radial component. This method is suitable for automatic processing; however, several RF traces are invalid due to peak-estimation failure. In this work, a deconvolution algorithm that uses a Genetic Algorithm (GA) to estimate the RF peaks is proposed. The method operates entirely in the time domain, avoiding time-to-frequency calculations (and vice versa), and is fully suitable for automatic processing. The estimated peaks can be used to generate RF traces in seismogram format for visualization. The RF trace quality is similar for high-magnitude events, although there are fewer failures in RF calculation for smaller events, increasing the overall performance when many events per station are processed.
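A toy Python sketch of this kind of approach, assuming only NumPy, is given below: each individual encodes the spike amplitudes of a candidate RF, and fitness is the squared misfit between the radial trace and the vertical trace convolved with the candidate. The selection, crossover, and mutation operators shown are generic choices, and the synthetic data are illustrative; none of this is necessarily the scheme used in this work.

import numpy as np

rng = np.random.default_rng(0)

def misfit(rf, vertical, radial):
    """L2 misfit between the observed radial trace and vertical convolved with the candidate RF."""
    pred = np.convolve(vertical, rf)[:len(radial)]
    return np.sum((radial - pred) ** 2)

def ga_deconvolve(vertical, radial, n_rf=80, pop=60, gens=300, sigma=0.05):
    """Evolve spike-train RF candidates; return the best individual in the final population."""
    population = rng.normal(0.0, 0.1, size=(pop, n_rf))
    for _ in range(gens):
        fitness = np.array([misfit(ind, vertical, radial) for ind in population])
        parents = population[np.argsort(fitness)[:pop // 2]]   # truncation selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_rf)                         # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child += rng.normal(0.0, sigma, size=n_rf)          # Gaussian mutation
            children.append(child)
        population = np.vstack([parents, children])
    return min(population, key=lambda ind: misfit(ind, vertical, radial))

# Synthetic example: a known spike-train RF convolved with a random "vertical" wavelet.
vertical = rng.normal(size=200)
true_rf = np.zeros(80)
true_rf[[0, 25, 60]] = [1.0, 0.4, -0.2]
radial = np.convolve(vertical, true_rf)[:200]
rf_est = ga_deconvolve(vertical, radial)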
Spatial and temporal coherence in perceptual binding
Blake, Randolph; Yang, Yuede
1997-01-01
Component visual features of objects are registered by distributed patterns of activity among neurons comprising multiple pathways and visual areas. How these distributed patterns of activity give rise to unified representations of objects remains unresolved, although one recent, controversial view posits temporal coherence of neural activity as a binding agent. Motivated by the possible role of temporal coherence in feature binding, we devised a novel psychophysical task that requires the detection of temporal coherence among features comprising complex visual images. Results show that human observers can more easily detect synchronized patterns of temporal contrast modulation within hybrid visual images composed of two components when those components are drawn from the same original picture. Evidently, time-varying changes within spatially coherent features produce more salient neural signals. PMID:9192701
An evaluation of unisensory and multisensory adaptive flight-path navigation displays
NASA Astrophysics Data System (ADS)
Moroney, Brian W.
1999-11-01
The present study assessed the use of unimodal (auditory or visual) and multimodal (audio-visual) adaptive interfaces to aid military pilots in the performance of a precision-navigation flight task when they were confronted with additional information-processing loads. A standard navigation interface was supplemented by adaptive interfaces consisting of either a head-up display based flight director, a 3D virtual audio interface, or a combination of the two. The adaptive interfaces provided information about how to return to the pathway when off course. Using an advanced flight simulator, pilots attempted two navigation scenarios: (A) maintain proper course under normal flight conditions and (B) return to course after their aircraft's position had been perturbed. Pilots flew in the presence or absence of an additional information-processing task presented in either the visual or auditory modality. The additional information-processing tasks were equated in terms of perceived mental workload as indexed by the NASA-TLX. Twelve experienced military pilots (11 men and 1 woman), naive to the purpose of the experiment, participated in the study. They were recruited from Wright-Patterson Air Force Base and had a mean of 2812 hrs. of flight experience. Four navigational interface configurations (the standard visual navigation interface alone (SV), SV plus adaptive visual, SV plus adaptive auditory, and SV plus adaptive visual-auditory composite) were combined factorially with three concurrent task (CT) conditions (no CT, visual CT, and auditory CT) in a completely repeated-measures design. The adaptive navigation displays were activated whenever the aircraft was more than 450 ft off course. In the normal flight scenario, the adaptive interfaces did not bolster navigation performance in comparison to the standard interface. It is conceivable that the pilots performed quite adequately using the familiar generic interface under normal flight conditions and hence showed no added benefit of the adaptive interfaces. In the return-to-course scenario, the relative advantages of the three adaptive interfaces were dependent upon the nature of the CT in a complex way. In the absence of a CT, recovery heading performance was superior with the adaptive visual and adaptive composite interfaces compared to the adaptive auditory interface. In the context of a visual CT, recovery when using the adaptive composite interface was superior to that when using the adaptive visual interface. Post-experimental inquiry indicated that when faced with a visual CT, the pilots used the auditory component of the multimodal guidance display to detect gross heading errors and the visual component to make more fine-grained heading adjustments. In the context of the auditory CT, navigation performance using the adaptive visual interface tended to be superior to that when using the adaptive auditory interface. Neither CT performance nor NASA-TLX workload level was influenced differentially by the interface configurations. Thus, the potential benefits associated with the proposed interfaces appear to be unaccompanied by negative side effects involving CT interference and workload. The adaptive interface configurations were altered without any direct input from the pilot; thus, it was feared that pilots might reject the activation of interfaces independent of their control. However, pilots' debriefing comments about the efficacy of the adaptive interface approach were very positive. (Abstract shortened by UMI.)
Tracking without perceiving: a dissociation between eye movements and motion perception.
Spering, Miriam; Pomplun, Marc; Carrasco, Marisa
2011-02-01
Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept.