Sample records for complex perceptual input

  1. Information-Processing Modules and Their Relative Modality Specificity

    ERIC Educational Resources Information Center

    Anderson, John R.; Qin, Yulin; Jung, Kwan-Jin; Carter, Cameron S.

    2007-01-01

    This research uses fMRI to understand the role of eight cortical regions in a relatively complex information-processing task. Modality of input (visual versus auditory) and modality of output (manual versus vocal) are manipulated. Two perceptual regions (auditory cortex and fusiform gyrus) only reflected perceptual encoding. Two motor regions were…

  2. Pupil dilation reflects perceptual selection and predicts subsequent stability in perceptual rivalry

    PubMed Central

    Einhäuser, Wolfgang; Stout, James; Koch, Christof; Carter, Olivia

    2008-01-01

    During sustained viewing of an ambiguous stimulus, an individual's perceptual experience will generally switch between the different possible alternatives rather than stay fixed on one interpretation (perceptual rivalry). Here, we measured pupil diameter while subjects viewed different ambiguous visual and auditory stimuli. For all stimuli tested, pupil diameter increased just before the reported perceptual switch, and the relative amount of dilation before this switch was a significant predictor of the subsequent duration of perceptual stability. These results could not be explained by blink or eye-movement effects, the motor response, or stimulus-driven changes in retinal input. Because pupil dilation reflects levels of norepinephrine (NE) released from the locus coeruleus (LC), we interpret these results as suggesting that the LC–NE complex may play the same role in perceptual selection as in behavioral decision making. PMID:18250340

  3. Information Processing by Schizophrenics When Task Complexity Increases

    ERIC Educational Resources Information Center

    Hirt, Michael; And Others

    1977-01-01

    The performance of hospitalized paranoid schizophrenics, nonparanoids, and hospitalized controls was compared on motor, perceptual, and cognitive tasks of increasing complexity. The data were examined within the context of comparing differential predictions made by input and central processing theories of information-processing deficit. (Editor)

  4. Perceptual conflict during sensorimotor integration processes - a neurophysiological study in response inhibition.

    PubMed

    Chmielewski, Witold X; Beste, Christian

    2016-05-25

    A multitude of sensory inputs needs to be processed during sensorimotor integration. A crucial factor for detecting relevant information is its complexity, since information content can be conflicting at a perceptual level. This may be central to executive control processes, such as response inhibition. This EEG study aims to investigate the systems-level neurophysiological mechanisms behind effects of perceptual conflict on response inhibition. We systematically modulated perceptual conflict by integrating a Global-local task with a Go/Nogo paradigm. The results show that conflicting perceptual information, in comparison to non-conflicting perceptual information, impairs response inhibition performance. This effect was evident regardless of whether the relevant information for response inhibition was displayed on the global or local perceptual level. The neurophysiological data suggest that early perceptual/attentional processing stages do not underlie these modulations. Rather, processes at the response selection level (P3) play a role in changed response inhibition performance. This conflict-related impairment of inhibitory processes is associated with activation differences in (inferior) parietal areas (BA7 and BA40) and not, as commonly found, in medial prefrontal areas. This suggests that various functional neuroanatomical structures may mediate response inhibition and that the functional neuroanatomical structures involved depend on the complexity of sensory integration processes.

  5. Central mechanisms of odour object perception

    PubMed Central

    Gottfried, Jay A.

    2013-01-01

    The stimulus complexity of naturally occurring odours presents unique challenges for central nervous systems that are aiming to internalize the external olfactory landscape. One mechanism by which the brain encodes perceptual representations of behaviourally relevant smells is through the synthesis of different olfactory inputs into a unified perceptual experience — an odour object. Recent evidence indicates that the identification, categorization and discrimination of olfactory stimuli rely on the formation and modulation of odour objects in the piriform cortex. Convergent findings from human and rodent models suggest that distributed piriform ensemble patterns of olfactory qualities and categories are crucial for maintaining the perceptual constancy of ecologically inconstant stimuli. PMID:20700142

  6. Perception and Cognition in the Ageing Brain: A Brief Review of the Short- and Long-Term Links between Perceptual and Cognitive Decline

    PubMed Central

    Roberts, Katherine L.; Allen, Harriet A.

    2016-01-01

    Ageing is associated with declines in both perception and cognition. We review evidence for an interaction between perceptual and cognitive decline in old age. Impoverished perceptual input can increase the cognitive difficulty of tasks, while changes to cognitive strategies can compensate, to some extent, for impaired perception. While there is strong evidence from cross-sectional studies for a link between sensory acuity and cognitive performance in old age, there is not yet compelling evidence from longitudinal studies to suggest that poor perception causes cognitive decline, nor to demonstrate that correcting sensory impairment can improve cognition in the longer term. Most studies have focused on relatively simple measures of sensory (visual and auditory) acuity, but more complex measures of suprathreshold perceptual processes, such as temporal processing, can show a stronger link with cognition. The reviewed evidence underlines the importance of fully accounting for perceptual deficits when investigating cognitive decline in old age. PMID:26973514

  7. Recurrence Quantification Analysis of Processes and Products of Discourse: A Tutorial in R

    ERIC Educational Resources Information Center

    Wallot, Sebastian

    2017-01-01

    Processes of naturalistic reading and writing are based on complex linguistic input, stretch out over time, and rely on an integrated performance of multiple perceptual, cognitive, and motor processes. Hence, naturalistic reading and writing performance is nonstationary and exhibits fluctuations and transitions. However, instead of being just…

  8. Searching for the Elements of Thought: Reply to Franklin, Mrazek, Broadway, and Schooler (2013)

    ERIC Educational Resources Information Center

    Smallwood, Jonathan

    2013-01-01

    Understanding thoughts with no perceptual basis is a complex problem, and the commentary by Franklin, Mrazek, Broadway, and Schooler (2013) highlighted some of the difficulties that can occur when theorizing about this topic. They argued that the suppression of external input during internal thought arises from the selection of internal…

  9. Perceptual Training Strongly Improves Visual Motion Perception in Schizophrenia

    ERIC Educational Resources Information Center

    Norton, Daniel J.; McBain, Ryan K.; Ongur, Dost; Chen, Yue

    2011-01-01

    Schizophrenia patients exhibit perceptual and cognitive deficits, including in visual motion processing. Given that cognitive systems depend upon perceptual inputs, improving patients' perceptual abilities may be an effective means of cognitive intervention. In healthy people, motion perception can be enhanced through perceptual learning, but it…

  10. Perceptual Learning of Noise Vocoded Words: Effects of Feedback and Lexicality

    ERIC Educational Resources Information Center

    Hervais-Adelman, Alexis; Davis, Matthew H.; Johnsrude, Ingrid S.; Carlyon, Robert P.

    2008-01-01

    Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. This adjustment is reflected in improved comprehension of distorted speech with experience. For noise vocoding, a manipulation that removes spectral detail from speech, listeners' word…

  11. Testing sensory evidence against mnemonic templates

    PubMed Central

    Myers, Nicholas E; Rohenkohl, Gustavo; Wyart, Valentin; Woolrich, Mark W; Nobre, Anna C; Stokes, Mark G

    2015-01-01

    Most perceptual decisions require comparisons between current input and an internal template. Classic studies propose that templates are encoded in sustained activity of sensory neurons. However, stimulus encoding is itself dynamic, tracing a complex trajectory through activity space. Which part of this trajectory is pre-activated to reflect the template? Here we recorded magneto- and electroencephalography during a visual target-detection task, and used pattern analyses to decode template, stimulus, and decision-variable representation. Our findings ran counter to the dominant model of sustained pre-activation. Instead, template information emerged transiently around stimulus onset and quickly subsided. Cross-generalization between stimulus and template coding, indicating a shared neural representation, occurred only briefly. Our results are compatible with the proposal that template representation relies on a matched filter, transforming input into task-appropriate output. This proposal was consistent with a signed difference response at the perceptual decision stage, which can be explained by a simple neural model. DOI: http://dx.doi.org/10.7554/eLife.09000.001 PMID:26653854
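
    The matched-filter proposal in this abstract can be illustrated with a toy computation (synthetic patterns, not the authors' data or analysis): a matched filter's output is simply the inner product of the current input with the stored template, so inputs containing the template produce larger responses, on average, than unrelated noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stored template: a fixed pattern normalized to unit length.
template = rng.standard_normal(64)
template /= np.linalg.norm(template)

def matched_filter_response(stimulus, template):
    """Matched-filter output: the inner product of input and template."""
    return float(np.dot(stimulus, template))

# Inputs containing the template score higher on average than pure noise.
targets = [matched_filter_response(template + 0.3 * rng.standard_normal(64),
                                   template) for _ in range(200)]
noise = [matched_filter_response(rng.standard_normal(64), template)
         for _ in range(200)]
print(np.mean(targets) > np.mean(noise))
```

    The signed response (positive for a match, negative for an anti-correlated input) is consistent with the signed difference response the abstract describes at the decision stage.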

  12. Visual Perceptual Echo Reflects Learning of Regularities in Rapid Luminance Sequences.

    PubMed

    Chang, Acer Y-C; Schwartzman, David J; VanRullen, Rufin; Kanai, Ryota; Seth, Anil K

    2017-08-30

    A novel neural signature of active visual processing has recently been described in the form of the "perceptual echo", in which the cross-correlation between a sequence of randomly fluctuating luminance values and occipital electrophysiological signals exhibits a long-lasting periodic (∼100 ms cycle) reverberation of the input stimulus (VanRullen and Macdonald, 2012). As yet, however, the mechanisms underlying the perceptual echo and its function remain unknown. Reasoning that natural visual signals often contain temporally predictable, though nonperiodic, features, we hypothesized that the perceptual echo may reflect a periodic process associated with regularity learning. To test this hypothesis, we presented subjects with successive repetitions of a rapid nonperiodic luminance sequence and examined the effects on the perceptual echo, finding that echo amplitude increased linearly with the number of presentations of a given luminance sequence. These data suggest that the perceptual echo reflects a neural signature of regularity learning. Furthermore, when a set of repeated sequences was followed by a sequence with inverted luminance polarities, the echo amplitude decreased to the same level evoked by a novel stimulus sequence. Crucially, when the original stimulus sequence was re-presented, the echo amplitude returned to a level consistent with the number of presentations of this sequence, indicating that the visual system retained sequence-specific information for many seconds, even in the presence of intervening visual input. Altogether, our results reveal a previously undiscovered regularity learning mechanism within the human visual system, reflected by the perceptual echo. SIGNIFICANCE STATEMENT How the brain encodes and learns fast-changing but nonperiodic visual input remains unknown, even though such visual input characterizes natural scenes. We investigated whether the phenomenon of "perceptual echo" might index such learning. The perceptual echo is a long-lasting reverberation between a rapidly changing visual input and evoked neural activity, apparent in cross-correlations between occipital EEG and stimulus sequences, peaking in the alpha (∼10 Hz) range. We indeed found that the perceptual echo is enhanced by repeatedly presenting the same visual sequence, indicating that the human visual system can rapidly and automatically learn regularities embedded within fast-changing dynamic sequences. These results point to a previously undiscovered regularity learning mechanism, operating at a rate defined by the alpha frequency.
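
    As a rough sketch of the echo measure described above (all signals synthetic and hypothetical, not the authors' data), one can cross-correlate a random luminance sequence with a simulated response built by convolving that sequence with a decaying ~10 Hz kernel; the cross-correlation then shows the characteristic ~100 ms periodicity.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 160                 # sampling rate in Hz (hypothetical)
n = fs * 30              # 30 s of random luminance values
stimulus = rng.standard_normal(n)

# Simulated "EEG": the stimulus convolved with a decaying 10 Hz oscillation,
# mimicking the reverberation that the echo method is designed to reveal.
t = np.arange(fs) / fs                          # 1 s impulse response
kernel = np.exp(-t / 0.3) * np.cos(2 * np.pi * 10 * t)
eeg = np.convolve(stimulus, kernel)[:n] + 0.5 * rng.standard_normal(n)

# Cross-correlate stimulus and signal at positive lags (stimulus leading).
lags = np.arange(fs)                            # lags from 0 to ~1 s
xcorr = np.array([np.dot(stimulus[:n - lag], eeg[lag:]) / (n - lag)
                  for lag in lags])

# The echo: the cross-correlation oscillates at ~10 Hz, so it is positive at
# lag 0, negative about 50 ms later, and positive again near 100 ms.
print(xcorr[0] > 0, xcorr[fs // 20] < 0, xcorr[fs // 10] > 0)
```

    In the study itself the second signal is recorded occipital EEG rather than a simulated filter output, and the echo's growth with sequence repetition is what indexes regularity learning.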

  13. On the cyclic nature of perception in vision versus audition

    PubMed Central

    VanRullen, Rufin; Zoefel, Benedikt; Ilhan, Barkin

    2014-01-01

    Does our perceptual awareness consist of a continuous stream, or a discrete sequence of perceptual cycles, possibly associated with the rhythmic structure of brain activity? This has been a long-standing question in neuroscience. We review recent psychophysical and electrophysiological studies indicating that part of our visual awareness proceeds in approximately 7–13 Hz cycles rather than continuously. On the other hand, experimental attempts at applying similar tools to demonstrate the discreteness of auditory awareness have been largely unsuccessful. We argue and demonstrate experimentally that visual and auditory perception are not equally affected by temporal subsampling of their respective input streams: video sequences remain intelligible at sampling rates of two to three frames per second, whereas audio inputs lose their fine temporal structure, and thus all significance, below 20–30 samples per second. This does not mean, however, that our auditory perception must proceed continuously. Instead, we propose that audition could still involve perceptual cycles, but the periodic sampling should happen only after the stage of auditory feature extraction. In addition, although visual perceptual cycles can follow one another at a spontaneous pace largely independent of the visual input, auditory cycles may need to sample the input stream more flexibly, by adapting to the temporal structure of the auditory inputs. PMID:24639585

  14. Learning-Based Just-Noticeable-Quantization-Distortion Modeling for Perceptual Video Coding.

    PubMed

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

    Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computational complexity. As an alternative, perceptual video coding (PVC) attempts to achieve high coding efficiency by eliminating perceptual redundancy, guided by just-noticeable-distortion (JND) models. Previous JND models were built by adding white Gaussian noise or specific signal patterns to the original images, which is not appropriate for finding JND thresholds of distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that can be applied as preprocessing for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. The first, called LR-JNQD, is based on linear regression and determines the JNQD model parameters from extracted handcrafted features. The second, called CNN-JNQD, is based on a convolutional neural network (CNN). To the best of our knowledge, ours is the first approach to automatically adjust JND levels according to quantization step sizes when preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation compared with input without preprocessing.
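
    The LR-JNQD component described above amounts to predicting a JND level from handcrafted features plus the quantization step size via linear regression. A minimal generic sketch (synthetic features and weights; the paper's actual features and coefficients are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic training set: rows are [feature_1, feature_2, q_step], hypothetical
# stand-ins for handcrafted features plus the quantization step size.
X = rng.uniform(0.0, 1.0, size=(500, 3))
true_w = np.array([0.8, -0.3, 1.5])        # made-up generating weights
y = X @ true_w + 0.2 + 0.01 * rng.standard_normal(500)

# Ordinary least squares with a bias column: plain linear regression.
A = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_jnd(features_and_qstep):
    """Predicted JND level for one [feature_1, feature_2, q_step] vector."""
    return float(np.append(features_and_qstep, 1.0) @ w)

# By construction, a larger quantization step predicts a larger JND level.
print(predict_jnd([0.5, 0.5, 0.9]) > predict_jnd([0.5, 0.5, 0.1]))  # True
```

    The CNN-JNQD variant replaces this linear map with a learned convolutional regressor, but the input/output contract (features and step size in, JND level out) is the same.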

  15. Synchronous and asynchronous perceptual bindings of colour and motion following identical stimulations.

    PubMed

    McIntyre, Morgan E; Arnold, Derek H

    2018-05-01

    When a moving surface alternates in colour and direction, perceptual couplings of colour and motion can differ from their physical correspondence. Periods of motion tend to be perceptually bound with physically delayed colours - a colour/motion perceptual asynchrony. This can be eliminated by motion transparency. Here we show that the colour/motion perceptual asynchrony is not invariably eliminated by motion transparency. Nor is it an inevitable consequence given a particular physical input. Instead, it can emerge when moving surfaces are perceived as alternating in direction, even if those surfaces seem transparent, and it is eliminated when surfaces are perceived as moving invariably. For a given observer either situation can result from exposure to a common input. Our findings suggest that neural events that promote the perception of motion reversals cause the colour/motion perceptual asynchrony. Moreover, they suggest that motion transparency and coherence can be signalled simultaneously by subpopulations of direction-selective neurons, with this conflict instantaneously resolved by a competitive winner-takes-all interaction, which can instantiate or eliminate the colour/motion perceptual asynchrony.

  16. Spatiotemporal dynamics of random stimuli account for trial-to-trial variability in perceptual decision making

    PubMed Central

    Park, Hame; Lueckmann, Jan-Matthis; von Kriegstein, Katharina; Bitzer, Sebastian; Kiebel, Stefan J.

    2016-01-01

    Decisions in everyday life are prone to error. Standard models typically assume that errors during perceptual decisions are due to noise. However, it is unclear how noise in the sensory input affects the decision. Here we show that there are experimental tasks for which one can analyse the exact spatio-temporal details of a dynamic sensory noise and better understand variability in human perceptual decisions. Using a new experimental visual tracking task and a novel Bayesian decision making model, we found that the spatio-temporal noise fluctuations in the input of single trials explain a significant part of the observed responses. Our results show that modelling the precise internal representations of human participants helps predict when perceptual decisions go wrong. Furthermore, by modelling precisely the stimuli at the single-trial level, we were able to identify the underlying mechanism of perceptual decision making in more detail than standard models. PMID:26752272

  17. Perceptual integration without conscious access

    PubMed Central

    van Leeuwen, Jonathan; Olivers, Christian N. L.

    2017-01-01

    The visual system has the remarkable ability to integrate fragmentary visual input into a perceptually organized collection of surfaces and objects, a process we refer to as perceptual integration. Despite a long tradition of perception research, it is not known whether access to consciousness is required to complete perceptual integration. To investigate this question, we manipulated access to consciousness using the attentional blink. We show that, behaviorally, the attentional blink impairs conscious decisions about the presence of integrated surface structure from fragmented input. However, despite conscious access being impaired, the ability to decode the presence of integrated percepts remains intact, as shown through multivariate classification analyses of electroencephalogram (EEG) data. In contrast, when disrupting perception through masking, decisions about integrated percepts and decoding of integrated percepts are impaired in tandem, while leaving feedforward representations intact. Together, these data show that access consciousness and perceptual integration can be dissociated. PMID:28325878

  18. Variability of perceptual multistability: from brain state to individual trait

    PubMed Central

    Kleinschmidt, Andreas; Sterzer, Philipp; Rees, Geraint

    2012-01-01

    Few phenomena are as suitable as perceptual multistability to demonstrate that the brain constructively interprets sensory input. Several studies have outlined the neural circuitry involved in generating perceptual inference but only more recently has the individual variability of this inferential process been appreciated. Studies of the interaction of evoked and ongoing neural activity show that inference itself is not merely a stimulus-triggered process but is related to the context of the current brain state into which the processing of external stimulation is embedded. As brain states fluctuate, so does perception of a given sensory input. In multistability, perceptual fluctuation rates are consistent for a given individual but vary considerably between individuals. There has been some evidence for a genetic basis for these individual differences and recent morphometric studies of parietal lobe regions have identified neuroanatomical substrates for individual variability in spontaneous switching behaviour. Moreover, disrupting the function of these latter regions by transcranial magnetic stimulation yields systematic interference effects on switching behaviour, further arguing for a causal role of these regions in perceptual inference. Together, these studies have advanced our understanding of the biological mechanisms by which the brain constructs the contents of consciousness from sensory input. PMID:22371620

  19. The strength of attentional biases reduces as visual short-term memory load increases

    PubMed Central

    Shimi, A.

    2013-01-01

    Despite our visual system receiving irrelevant input that competes with task-relevant signals, we are able to pursue our perceptual goals. Attention enhances our visual processing by biasing the processing of the input that is relevant to the task at hand. The top-down signals enabling these biases are therefore important for regulating lower level sensory mechanisms. In three experiments, we examined whether we apply similar biases to successfully maintain information in visual short-term memory (VSTM). We presented participants with targets alongside distracters and we graded their perceptual similarity to vary the extent to which they competed. Experiments 1 and 2 showed that the more items held in VSTM before the onset of the distracters, the more perceptually distinct the distracters needed to be for participants to retain the target accurately. Experiment 3 extended these behavioral findings by demonstrating that the perceptual similarity between target and distracters exerted a significantly greater effect on occipital alpha amplitudes, depending on the number of items already held in VSTM. The trade-off between VSTM load and target-distracter competition suggests that VSTM and perceptual competition share a partially overlapping mechanism, namely top-down inputs into sensory areas. PMID:23576694

  20. Chromatic Perceptual Learning but No Category Effects without Linguistic Input.

    PubMed

    Grandison, Alexandra; Sowden, Paul T; Drivonikou, Vicky G; Notman, Leslie A; Alexander, Iona; Davies, Ian R L

    2016-01-01

    Perceptual learning involves an improvement in perceptual judgment with practice, which is often specific to stimulus or task factors. Perceptual learning has been shown on a range of visual tasks but very little research has explored chromatic perceptual learning. Here, we use two low level perceptual threshold tasks and a supra-threshold target detection task to assess chromatic perceptual learning and category effects. Experiment 1 investigates whether chromatic thresholds reduce as a result of training and at what level of analysis learning effects occur. Experiment 2 explores the effect of category training on chromatic thresholds, whether training of this nature is category specific and whether it can induce categorical responding. Experiment 3 investigates the effect of category training on a higher level, lateralized target detection task, previously found to be sensitive to category effects. The findings indicate that performance on a perceptual threshold task improves following training but improvements do not transfer across retinal location or hue. Therefore, chromatic perceptual learning is category specific and can occur at relatively early stages of visual analysis. Additionally, category training does not induce category effects on a low level perceptual threshold task, as indicated by comparable discrimination thresholds at the newly learned hue boundary and adjacent test points. However, category training does induce emerging category effects on a supra-threshold target detection task. Whilst chromatic perceptual learning is possible, learnt category effects appear to be a product of left hemisphere processing, and may require the input of higher level linguistic coding processes in order to manifest.

  1. Structured perceptual input imposes an egocentric frame of reference: pointing, imagery, and spatial self-consciousness.

    PubMed

    Marcel, Anthony; Dobel, Christian

    2005-01-01

    Perceptual input imposes and maintains an egocentric frame of reference, which enables orientation. When blindfolded, people tended to mistake the assumed intrinsic axes of symmetry of their immediate environment (a room) for their own egocentric relation to features of the room. When asked to point to the door and window, known to be at mid-points of facing (or adjacent) walls, they pointed with their arms at 180 degrees (or 90 degrees) angles, irrespective of where they thought they were in the room. People did the same when requested to imagine the situation. They justified their responses (inappropriately) by logical necessity or a structural description of the room rather than (appropriately) by relative location of themselves and the reference points. In eight experiments, we explored the effect on this in perception and imagery of: perceptual input (without perceptibility of the target reference points); imaging oneself versus another person; aids to explicit spatial self-consciousness; order of questions about self-location; and the relation of targets to the axes of symmetry of the room. The results indicate that, if one is deprived of structured perceptual input, as well as losing one's bearings, (a) one is likely to lose one's egocentric frame of reference itself, and (b) instead of pointing to reference points, one demonstrates their structural relation by adopting the intrinsic axes of the environment as one's own. This is prevented by providing noninformative perceptual input or by inducing subjects to imagine themselves from the outside, which makes explicit the fact of their being located relative to the world. The role of perceptual contact with a structured world is discussed in relation to sensory deprivation and imagery, appeal is made to Gibson's theory of joint egoreception and exteroception, and the data are related to recent theories of spatial memory and navigation.

  2. Evidence for Working Memory Storage Operations in Perceptual Cortex

    PubMed Central

    Sreenivasan, Kartik K.; Gratton, Caterina; Vytlacil, Jason; D’Esposito, Mark

    2014-01-01

    Isolating the short-term storage component of working memory (WM) from the myriad of associated executive processes has been an enduring challenge. Recent efforts have identified patterns of activity in visual regions that contain information about items being held in WM. However, it remains unclear (i) whether these representations withstand intervening sensory input and (ii) how communication between multimodal association cortex and unimodal perceptual regions supporting WM representations is involved in WM storage. We present evidence that the features of a face held in WM are stored within face processing regions, that these representations persist across subsequent sensory input, and that information about the match between sensory input and memory representation is relayed forward from perceptual to prefrontal regions. Participants were presented with a series of probe faces and indicated whether each probe matched a Target face held in WM. We parametrically varied the feature similarity between probe and Target faces. Activity within face processing regions scaled linearly with the degree of feature similarity between the probe face and the features of the Target face, suggesting that the features of the Target face were stored in these regions. Furthermore, directed connectivity measures revealed that the direction of information flow that was optimal for performance was from sensory regions that stored the features of the Target face to dorsal prefrontal regions, supporting the notion that sensory input is compared to representations stored within perceptual regions and relayed forward. Together, these findings indicate that WM storage operations are carried out within perceptual cortex. PMID:24436009

  3. Perceptual Mapping Software as a Tool for Facilitating School-Based Consultation

    ERIC Educational Resources Information Center

    Rush, S. Craig; Kalish, Ashley; Wheeler, Joanna

    2013-01-01

    Perceptual mapping is a systematic method for collecting, analyzing, and presenting group perceptions that is potentially useful in consultation. With input and feedback from a consultee group, perceptual mapping allows the consultant to capture the group's collective perceptions and display them as an organized image that may foster…

  4. Where do we store the memory representations that guide attention?

    PubMed Central

    Woodman, Geoffrey F.; Carlisle, Nancy B.; Reinhart, Robert M. G.

    2013-01-01

    During the last decade one of the most contentious and heavily studied topics in the attention literature has been the role that working memory representations play in controlling perceptual selection. The hypothesis has been advanced that to have attention select a certain perceptual input from the environment, we only need to represent that item in working memory. Here we summarize the work indicating that the relationship between what representations are maintained in working memory and what perceptual inputs are selected is not so simple. First, it appears that attentional selection is also determined by high-level task goals that mediate the relationship between working memory storage and attentional selection. Second, much of the recent work from our laboratory has focused on the role of long-term memory in controlling attentional selection. We review recent evidence supporting the proposal that working memory representations are critical during the initial configuration of attentional control settings, but that after those settings are established long-term memory representations play an important role in controlling which perceptual inputs are selected by mechanisms of attention. PMID:23444390

  5. Chromatic Perceptual Learning but No Category Effects without Linguistic Input

    PubMed Central

    Grandison, Alexandra; Sowden, Paul T.; Drivonikou, Vicky G.; Notman, Leslie A.; Alexander, Iona; Davies, Ian R. L.

    2016-01-01

    Perceptual learning involves an improvement in perceptual judgment with practice, which is often specific to stimulus or task factors. Perceptual learning has been shown on a range of visual tasks but very little research has explored chromatic perceptual learning. Here, we use two low level perceptual threshold tasks and a supra-threshold target detection task to assess chromatic perceptual learning and category effects. Experiment 1 investigates whether chromatic thresholds reduce as a result of training and at what level of analysis learning effects occur. Experiment 2 explores the effect of category training on chromatic thresholds, whether training of this nature is category specific and whether it can induce categorical responding. Experiment 3 investigates the effect of category training on a higher level, lateralized target detection task, previously found to be sensitive to category effects. The findings indicate that performance on a perceptual threshold task improves following training but improvements do not transfer across retinal location or hue. Therefore, chromatic perceptual learning is category specific and can occur at relatively early stages of visual analysis. Additionally, category training does not induce category effects on a low level perceptual threshold task, as indicated by comparable discrimination thresholds at the newly learned hue boundary and adjacent test points. However, category training does induce emerging category effects on a supra-threshold target detection task. Whilst chromatic perceptual learning is possible, learnt category effects appear to be a product of left hemisphere processing, and may require the input of higher level linguistic coding processes in order to manifest. PMID:27252669

  6. Distant from input: Evidence of regions within the default mode network supporting perceptually-decoupled and conceptually-guided cognition.

    PubMed

    Murphy, Charlotte; Jefferies, Elizabeth; Rueschemeyer, Shirley-Ann; Sormaz, Mladen; Wang, Hao-Ting; Margulies, Daniel S; Smallwood, Jonathan

    2018-05-01

    The default mode network supports a variety of mental operations such as semantic processing, episodic memory retrieval, mental time travel and mind-wandering, yet the commonalities between these functions remain unclear. One possibility is that this system supports cognition that is independent of the immediate environment; alternatively or additionally, it might support higher-order conceptual representations that draw together multiple features. We tested these accounts using a novel paradigm that separately manipulated the availability of perceptual information to guide decision-making and the representational complexity of this information. Using task-based imaging we established regions that respond when cognition combines stimulus independence with multi-modal information. These included left and right angular gyri and the left middle temporal gyrus. Although these sites were within the default mode network, they showed a stronger response to demanding memory judgements than to an easier perceptual task, contrary to the view that they support automatic aspects of cognition. In a subsequent analysis, we showed that these regions were located at the extreme end of a macroscale gradient, which describes gradual transitions from sensorimotor to transmodal cortex. This shift in the focus of neural activity towards transmodal, default mode, regions might reflect a process whereby functional distance from specific sensory input enables conceptually rich and detailed cognitive states to be generated in the absence of input. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  7. The cognitive demands of second order manual control: Applications of the event related brain potential

    NASA Technical Reports Server (NTRS)

    Wickens, C.; Gill, R.; Kramer, A.; Ross, W.; Donchin, E.

    1981-01-01

    Three experiments are described in which tracking difficulty is varied in the presence of a covert tone discrimination task. Event related brain potentials (ERPs) elicited by the tones are employed as an index of the resource demands of tracking. The ERP measure reflected the control order variation, and this variable was thereby assumed to compete for perceptual/central processing resources. A fine-grained analysis of the results suggested that the primary demands of second order tracking involve the central processing operations of maintaining a more complex internal model of the dynamic system, rather than the perceptual demands of higher derivative perception. Experiment 3 varied tracking bandwidth in random input tracking, and the ERP was unaffected. Bandwidth was then inferred to compete for response-related processing resources that are independent of the ERP.

  8. Concept cells through associative learning of high-level representations.

    PubMed

    Reddy, Leila; Thorpe, Simon J

    2014-10-22

    In this issue of Neuron, Quian Quiroga et al. (2014) show that neurons in the human medial temporal lobe (MTL) follow subjects' perceptual states rather than the features of the visual input. Patients with MTL damage, however, have intact perceptual abilities but suffer instead from extreme forgetfulness. Thus, the reported MTL neurons could create new memories of the current perceptual state.

  9. A new analytical method for characterizing nonlinear visual processes with stimuli of arbitrary distribution: Theory and applications.

    PubMed

    Hayashi, Ryusuke; Watanabe, Osamu; Yokoyama, Hiroki; Nishida, Shin'ya

    2017-06-01

    Characterization of the functional relationship between sensory inputs and neuronal or observers' perceptual responses is one of the fundamental goals of systems neuroscience and psychophysics. Conventional methods, such as reverse correlation and spike-triggered data analyses, are limited in their ability to resolve complex and inherently nonlinear neuronal/perceptual processes because these methods require input stimuli to be Gaussian with a zero mean. Recent studies have shown that analyses based on a generalized linear model (GLM) do not require such specific input characteristics and have advantages over conventional methods. GLM, however, relies on iterative optimization algorithms, and its computational cost becomes very high when estimating the nonlinear parameters of a large-scale system using large volumes of data. In this paper, we introduce a new analytical method for identifying a nonlinear system that neither relies on iterative calculations nor requires any specific stimulus distribution. We demonstrate the results of numerical simulations, showing that our noniterative method is as accurate as GLM in estimating nonlinear parameters in many cases and outperforms conventional, spike-triggered data analyses. As an example of the application of our method to actual psychophysical data, we investigated how different spatiotemporal frequency channels interact in assessments of motion direction. The nonlinear interaction estimated by our method was consistent with findings from previous vision studies and supports the validity of our method for nonlinear system identification.
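    The limitation the authors target is concrete: a spike-triggered average (STA) gives an unbiased filter estimate only when the stimulus is zero-mean Gaussian. Below is a minimal sketch of that conventional estimator; the toy firing rule and all names are illustrative assumptions, not the paper's method.

```python
import random

def spike_triggered_average(stimulus, spikes, window):
    """Average the `window` stimulus samples preceding each spike.

    Unbiased only for zero-mean Gaussian stimuli, which is exactly the
    restriction the noniterative method described above is meant to remove.
    """
    triggered = [stimulus[t - window:t]
                 for t, s in enumerate(spikes) if s and t >= window]
    n = len(triggered)
    return [sum(col) / n for col in zip(*triggered)]

# Toy model neuron: fires whenever the immediately preceding sample
# exceeds 1.0, so only the last STA tap should deviate from zero.
random.seed(0)
stim = [random.gauss(0.0, 1.0) for _ in range(20000)]
spk = [0] + [1 if stim[t - 1] > 1.0 else 0 for t in range(1, len(stim))]
sta = spike_triggered_average(stim, spk, window=3)
```

For a stimulus that is not zero-mean Gaussian (e.g., natural images), this average conflates the neuron's filter with the stimulus correlations, which is why GLM-style or noniterative estimators are needed.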

  10. Changing What You See by Changing What You Know: The Role of Attention

    PubMed Central

    Lupyan, Gary

    2017-01-01

    Attending is a cognitive process that incorporates a person’s knowledge, goals, and expectations. What we perceive when we attend to one thing is different from what we perceive when we attend to something else. Yet, it is often argued that attentional effects do not count as evidence that perception is influenced by cognition. I investigate two arguments often given to justify excluding attention. The first argues that attention is a post-perceptual process reflecting selection between fully constructed perceptual representations. The second argues that attention is a pre-perceptual process that simply changes the input to encapsulated perceptual systems. Both of these arguments are highly problematic. Although some attentional effects can indeed be construed as post-perceptual, others operate by changing perceptual content across the entire visual hierarchy. Although there is a natural analogy between spatial attention and a change of input, the analogy falls apart when we consider other forms of attention. After dispelling these arguments, I make a case for thinking of attention not as a confound, but as one of the mechanisms by which cognitive states affect perception, going through cases in which the same or similar visual inputs are perceived differently depending on the observer’s cognitive state, and instances where cuing an observer using language affects what one sees. Lastly, I provide two compelling counter-examples to the critique that although cognitive influences on perception can be demonstrated in the laboratory, it is impossible to really experience them for oneself in a phenomenologically compelling way. Taken together, the current evidence strongly supports the thesis that what we know routinely influences what we see, that the same sensory input can be perceived differently depending on the current cognitive state of the viewer, and that phenomenologically salient demonstrations are possible if certain conditions are met. PMID:28507524

  11. Infant perceptual development for faces and spoken words: An integrated approach

    PubMed Central

    Watson, Tamara L; Robbins, Rachel A; Best, Catherine T

    2014-01-01

    There are obvious differences between recognizing faces and recognizing spoken words or phonemes that might suggest development of each capability requires different skills. Recognizing faces and perceiving spoken language, however, are in key senses extremely similar endeavors. Both perceptual processes are based on richly variable, yet highly structured input from which the perceiver needs to extract categorically meaningful information. This similarity could be reflected in the perceptual narrowing that occurs within the first year of life in both domains. We take the position that the perceptual and neurocognitive processes by which face and speech recognition develop are based on a set of common principles. One common principle is the importance of systematic variability in the input as a source of information rather than noise. Experience of this variability leads to perceptual tuning to the critical properties that define individual faces or spoken words versus their membership in larger groupings of people and their language communities. We argue that parallels can be drawn directly between the principles responsible for the development of face and spoken language perception. PMID:25132626

  12. Do humans make good decisions?

    PubMed Central

    Summerfield, Christopher; Tsetsos, Konstantinos

    2014-01-01

    Human performance on perceptual classification tasks approaches that of an ideal observer, but economic decisions are often inconsistent and intransitive, with preferences reversing according to the local context. We discuss the view that suboptimal choices may result from the efficient coding of decision-relevant information, a strategy that allows expected inputs to be processed with higher gain than unexpected inputs. Efficient coding leads to ‘robust’ decisions that depart from optimality but maximise the information transmitted by a limited-capacity system in a rapidly-changing world. We review recent work showing that when perceptual environments are variable or volatile, perceptual decisions exhibit the same suboptimal context-dependence as economic choices, and propose a general computational framework that accounts for findings across the two domains. PMID:25488076

  13. Visual Perceptual Learning and Models.

    PubMed

    Dosher, Barbara; Lu, Zhong-Lin

    2017-09-15

    Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and for understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.
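    The reweighting account mentioned above can be caricatured in a few lines: practice changes only the read-out weights from fixed, noisy sensory channels, here with a plain delta rule. This is a hedged toy sketch; the channel count, learning rate, and single informative channel are illustrative assumptions, not the authors' full model.

```python
import random

def train_readout(n_channels=8, signal_channel=0, trials=4000, lr=0.02, seed=1):
    """Delta-rule reweighting: the sensory channels stay fixed (one
    informative, the rest pure noise); only the decision read-out
    weights are learned from trial-by-trial feedback."""
    rng = random.Random(seed)
    w = [0.0] * n_channels
    for _ in range(trials):
        label = rng.choice([-1.0, 1.0])
        x = [rng.gauss(0.0, 1.0) for _ in range(n_channels)]
        x[signal_channel] += label  # only this channel carries the signal
        decision = sum(wi * xi for wi, xi in zip(w, x))
        err = label - decision
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

w = train_readout()  # practice concentrates weight on the informative channel
```

Because the representations themselves never change, this kind of model predicts the stimulus specificity and transfer patterns discussed in the review without invoking plasticity in early visual cortex.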

  14. Interobject grouping facilitates visual awareness.

    PubMed

    Stein, Timo; Kaiser, Daniel; Peelen, Marius V

    2015-01-01

    In organizing perception, the human visual system takes advantage of regularities in the visual input to perceptually group related image elements. Simple stimuli that can be perceptually grouped based on physical regularities, for example by forming an illusory contour, have a competitive advantage in entering visual awareness. Here, we show that regularities that arise from the relative positioning of complex, meaningful objects in the visual environment also modulate visual awareness. Using continuous flash suppression, we found that pairs of objects that were positioned according to real-world spatial regularities (e.g., a lamp above a table) accessed awareness more quickly than the same object pairs shown in irregular configurations (e.g., a table above a lamp). This advantage was specific to upright stimuli and abolished by stimulus inversion, meaning that it did not reflect physical stimulus confounds or the grouping of simple image elements. Thus, knowledge of the spatial configuration of objects in the environment shapes the contents of conscious perception.

  15. Network model of top-down influences on local gain and contextual interactions in visual cortex.

    PubMed

    Piëch, Valentin; Li, Wu; Reeke, George N; Gilbert, Charles D

    2013-10-22

    The visual system uses continuity as a cue for grouping oriented line segments that define object boundaries in complex visual scenes. Many studies support the idea that long-range intrinsic horizontal connections in early visual cortex contribute to this grouping. Top-down influences in primary visual cortex (V1) play an important role in the processes of contour integration and perceptual saliency, with contour-related responses being task dependent. This suggests an interaction between recurrent inputs to V1 and intrinsic connections within V1 that enables V1 neurons to respond differently under different conditions. We created a network model that simulates parametrically the control of local gain by hypothetical top-down modification of local recurrence. These local gain changes, as a consequence of network dynamics in our model, enable modulation of contextual interactions in a task-dependent manner. Our model displays contour-related facilitation of neuronal responses and differential foreground vs. background responses over the neuronal ensemble, accounting for the perceptual pop-out of salient contours. It quantitatively reproduces the results of single-unit recording experiments in V1, highlighting salient contours and replicating the time course of contextual influences. We show by means of phase-plane analysis that the model operates stably even in the presence of large inputs. Our model shows how a simple form of top-down modulation of the effective connectivity of intrinsic cortical connections among biophysically realistic neurons can account for some of the response changes seen in perceptual learning and task switching.

  16. Cortical Surround Interactions and Perceptual Salience via Natural Scene Statistics

    PubMed Central

    Coen-Cagli, Ruben; Dayan, Peter; Schwartz, Odelia

    2012-01-01

    Spatial context in images induces perceptual phenomena associated with salience and modulates the responses of neurons in primary visual cortex (V1). However, the computational and ecological principles underlying contextual effects are incompletely understood. We introduce a model of natural images that includes grouping and segmentation of neighboring features based on their joint statistics, and we interpret the firing rates of V1 neurons as performing optimal inference in this model. We show that this leads to a substantial generalization of divisive normalization, a computation that is ubiquitous in many neural areas and systems. A main novelty in our model is that the influence of the context on a target stimulus is determined by their degree of statistical dependence. We optimized the parameters of the model on natural image patches, and then simulated neural and perceptual responses on stimuli used in classical experiments. The model reproduces some rich and complex response patterns observed in V1, such as the contrast dependence, orientation tuning and spatial asymmetry of surround suppression, while also allowing for surround facilitation under conditions of weak stimulation. It also mimics the perceptual salience produced by simple displays, and leads to readily testable predictions. Our results provide a principled account of orientation-based contextual modulation in early vision and its sensitivity to the homogeneity and spatial arrangement of inputs, and lend statistical support to the theory that V1 computes visual salience. PMID:22396635
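    Divisive normalization, the computation this model generalizes, divides each unit's squared drive by a constant plus the pooled squared drive of its neighbors. A minimal sketch of the standard textbook form follows; the paper's key extension, weighting the pool by the statistical dependence of center and surround, is deliberately not captured here.

```python
def divisive_normalization(drives, sigma=1.0, gain=1.0):
    """Standard divisive normalization:
    r_i = gain * d_i**2 / (sigma**2 + sum_j d_j**2)."""
    pool = sigma ** 2 + sum(d * d for d in drives)
    return [gain * d * d / pool for d in drives]

# Surround suppression in miniature: the same center drive (2.0) yields a
# smaller response when the contextual drive in the pool is stronger.
weak_context = divisive_normalization([2.0, 0.5])
strong_context = divisive_normalization([2.0, 3.0])
```

In the flexible variant described in the abstract, a statistically independent surround would be excluded from the pool, allowing facilitation instead of suppression under weak stimulation.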

  17. A neural network model of causative actions.

    PubMed

    Lee-Hand, Jeremy; Knott, Alistair

    2015-01-01

    A common idea in models of action representation is that actions are represented in terms of their perceptual effects (see e.g., Prinz, 1997; Hommel et al., 2001; Sahin et al., 2007; Umiltà et al., 2008; Hommel, 2013). In this paper we extend existing models of effect-based action representations to account for a novel distinction. Some actions bring about effects that are independent events in their own right: for instance, if John smashes a cup, he brings about the event of the cup smashing. Other actions do not bring about such effects. For instance, if John grabs a cup, this action does not cause the cup to "do" anything: a grab action has well-defined perceptual effects, but these are not registered by the perceptual system that detects independent events involving external objects in the world. In our model, effect-based actions are implemented in several distinct neural circuits, which are organized into a hierarchy based on the complexity of their associated perceptual effects. The circuit at the top of this hierarchy is responsible for actions that bring about independently perceivable events. This circuit receives input from the perceptual module that recognizes arbitrary events taking place in the world, and learns movements that reliably cause such events. We assess our model against existing experimental observations about effect-based motor representations, and make some novel experimental predictions. We also consider the possibility that the "causative actions" circuit in our model can be identified with a motor pathway reported in other work, specializing in "functional" actions on manipulable tools (Bub et al., 2008; Binkofski and Buxbaum, 2013).

  18. Short-term plasticity as a neural mechanism supporting memory and attentional functions.

    PubMed

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Andermann, Mark L; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2011-11-08

    Based on behavioral studies, several relatively distinct perceptual and cognitive functions have been defined in cognitive psychology such as sensory memory, short-term memory, and selective attention. Here, we review evidence suggesting that some of these functions may be supported by shared underlying neuronal mechanisms. Specifically, we present, based on an integrative review of the literature, a hypothetical model wherein short-term plasticity, in the form of transient center-excitatory and surround-inhibitory modulations, constitutes a generic processing principle that supports sensory memory, short-term memory, involuntary attention, selective attention, and perceptual learning. In our model, the size and complexity of receptive fields/level of abstraction of neural representations, as well as the length of temporal receptive windows, increases as one steps up the cortical hierarchy. Consequently, the type of input (bottom-up vs. top down) and the level of cortical hierarchy that the inputs target, determine whether short-term plasticity supports purely sensory vs. semantic short-term memory or attentional functions. Furthermore, we suggest that rather than discrete memory systems, there are continuums of memory representations from short-lived sensory ones to more abstract longer-duration representations, such as those tapped by behavioral studies of short-term memory. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. Beta oscillations define discrete perceptual cycles in the somatosensory domain.

    PubMed

    Baumgarten, Thomas J; Schnitzler, Alfons; Lange, Joachim

    2015-09-29

    Whether seeing a movie, listening to a song, or feeling a breeze on the skin, we coherently experience these stimuli as continuous, seamless percepts. However, there are rare perceptual phenomena that argue against continuous perception but, instead, suggest discrete processing of sensory input. Empirical evidence supporting such a discrete mechanism, however, remains scarce and comes entirely from the visual domain. Here, we demonstrate compelling evidence for discrete perceptual sampling in the somatosensory domain. Using magnetoencephalography (MEG) and a tactile temporal discrimination task in humans, we find that oscillatory alpha- and low beta-band (8-20 Hz) cycles in primary somatosensory cortex represent neurophysiological correlates of discrete perceptual cycles. Our results agree with several theoretical concepts of discrete perceptual sampling and empirical evidence of perceptual cycles in the visual domain. Critically, these results show that discrete perceptual cycles are not domain-specific, and thus restricted to the visual domain, but extend to the somatosensory domain.

  20. Perceptual organization in computer vision - A review and a proposal for a classificatory structure

    NASA Technical Reports Server (NTRS)

    Sarkar, Sudeep; Boyer, Kim L.

    1993-01-01

    The evolution of perceptual organization in biological vision, and its necessity in advanced computer vision systems, arises from the characteristic that perception, the extraction of meaning from sensory input, is an intelligent process. This is particularly so for high order organisms and, analogically, for more sophisticated computational models. The role of perceptual organization in computer vision systems is explored. This is done from four vantage points. First, a brief history of perceptual organization research in both humans and computer vision is offered. Next, a classificatory structure in which to cast perceptual organization research to clarify both the nomenclature and the relationships among the many contributions is proposed. Thirdly, the perceptual organization work in computer vision in the context of this classificatory structure is reviewed. Finally, the array of computational techniques applied to perceptual organization problems in computer vision is surveyed.

  1. Natural images dominate in binocular rivalry

    PubMed Central

    Baker, Daniel H.; Graf, Erich W.

    2009-01-01

    Ecological approaches to perception have demonstrated that information encoding by the visual system is informed by the natural environment, both in terms of simple image attributes like luminance and contrast, and more complex relationships corresponding to Gestalt principles of perceptual organization. Here, we ask if this optimization biases perception of visual inputs that are perceptually bistable. Using the binocular rivalry paradigm, we designed stimuli that varied in either their spatiotemporal amplitude spectra or their phase spectra. We found that noise stimuli with “natural” amplitude spectra (i.e., amplitude content proportional to 1/f, where f is spatial or temporal frequency) dominate over those with any other systematic spectral slope, along both spatial and temporal dimensions. This could not be explained by perceived contrast measurements, and occurred even though all stimuli had equal energy. Calculating the effective contrast following attenuation by a model contrast sensitivity function suggested that the strong contrast dependency of rivalry provides the mechanism by which binocular vision is optimized for viewing natural images. We also compared rivalry between natural and phase-scrambled images and found a strong preference for natural phase spectra that could not be accounted for by observer biases in a control task. We propose that this phase specificity relates to contour information, and arises either from the activity of V1 complex cells, or from later visual areas, consistent with recent neuroimaging and single-cell work. Our findings demonstrate that human vision integrates information across space, time, and phase to select the input most likely to hold behavioral relevance. PMID:19289828
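    A "natural" amplitude spectrum here means amplitude proportional to 1/f. One simple way to synthesize such a noise, sketched below, is to sum sinusoids whose amplitudes fall as f**-slope with random phases; this is an illustrative construction, not necessarily the authors' stimulus-generation code.

```python
import cmath
import math
import random

def one_over_f_noise(n, slope=1.0, seed=0):
    """Noise whose amplitude spectrum falls as f**-slope: a sum of
    sinusoids with amplitude f**-slope and random phases (slope=1 is
    the 'natural' spectrum; slope=0 gives white noise)."""
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n // 2)]
    return [sum((f ** -slope) * math.cos(2.0 * math.pi * f * t / n + phases[f - 1])
                for f in range(1, n // 2 + 1))
            for t in range(n)]

def amplitude_at(signal, f):
    """Magnitude of the discrete Fourier coefficient at integer frequency f."""
    n = len(signal)
    return abs(sum(x * cmath.exp(-2j * cmath.pi * f * t / n)
                   for t, x in enumerate(signal)))

natural = one_over_f_noise(256, slope=1.0)
# Amplitude falls as 1/f: frequency 1 carries 8x the amplitude of frequency 8.
ratio = amplitude_at(natural, 1) / amplitude_at(natural, 8)
```

Varying `slope` away from 1 produces the "unnatural" spectral slopes that, per the study, lose to the 1/f stimulus in binocular rivalry.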

  2. A different view on the Necker cube—Differences in multistable perception dynamics between Asperger and non-Asperger observers

    PubMed Central

    Wörner, Rike

    2017-01-01

    Background: During observation of the Necker cube, perception becomes unstable and alternates repeatedly between a from-above-perspective (“fap”) and a from-below-perspective (“fbp”) interpretation. Both interpretations are physically equally plausible; however, observers usually show an a priori top-down bias in favor of the fap interpretation. Patients with Autism spectrum disorder are known to show an altered pattern of perception with a focus on sensory details. In the present study we tested whether this altered perceptual processing affects their reversal dynamics and reduces the perceptual bias during Necker cube observation. Methods: 19 participants with Asperger syndrome and 16 healthy controls observed a Necker cube stimulus continuously for 5 minutes and indicated perceptual reversals by key press. We compared reversal rates (number of reversals per minute) and the distributions of dwell times for the two interpretations between observer groups. Results: Asperger participants showed fewer perceptual reversals than controls. Six Asperger participants did not perceive any reversal at all, whereas all observers from the control group perceived at least five reversals within the five-minute observation time. Further, control participants showed the typical perceptual bias, with significantly longer median dwell times for the fap than for the fbp interpretation. No such perceptual bias was found in the Asperger group. Discussion: The perceptual system weights the incomplete and ambiguous sensory input with memorized concepts in order to construct stable and reliable percepts. In the case of the Necker cube stimulus, two perceptual interpretations are equally compatible with the sensory information, and internal fluctuations may cause perceptual alternations between them, with a slightly larger probability for the fap interpretation (perceptual bias). Smaller reversal rates in Asperger observers may result from the dominance of bottom-up sensory input over endogenous top-down factors. The latter may also explain the absence of a fap bias. PMID:29244813

  3. A different view on the Necker cube-Differences in multistable perception dynamics between Asperger and non-Asperger observers.

    PubMed

    Kornmeier, Jürgen; Wörner, Rike; Riedel, Andreas; Tebartz van Elst, Ludger

    2017-01-01

    During observation of the Necker cube, perception becomes unstable and alternates repeatedly between a from-above-perspective ("fap") and a from-below-perspective ("fbp") interpretation. Both interpretations are physically equally plausible; however, observers usually show an a priori top-down bias in favor of the fap interpretation. Patients with Autism spectrum disorder are known to show an altered pattern of perception with a focus on sensory details. In the present study we tested whether this altered perceptual processing affects their reversal dynamics and reduces the perceptual bias during Necker cube observation. 19 participants with Asperger syndrome and 16 healthy controls observed a Necker cube stimulus continuously for 5 minutes and indicated perceptual reversals by key press. We compared reversal rates (number of reversals per minute) and the distributions of dwell times for the two interpretations between observer groups. Asperger participants showed fewer perceptual reversals than controls. Six Asperger participants did not perceive any reversal at all, whereas all observers from the control group perceived at least five reversals within the five-minute observation time. Further, control participants showed the typical perceptual bias, with significantly longer median dwell times for the fap than for the fbp interpretation. No such perceptual bias was found in the Asperger group. The perceptual system weights the incomplete and ambiguous sensory input with memorized concepts in order to construct stable and reliable percepts. In the case of the Necker cube stimulus, two perceptual interpretations are equally compatible with the sensory information, and internal fluctuations may cause perceptual alternations between them, with a slightly larger probability for the fap interpretation (perceptual bias). Smaller reversal rates in Asperger observers may result from the dominance of bottom-up sensory input over endogenous top-down factors. The latter may also explain the absence of a fap bias.
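    The two dependent measures in these reports, reversal rate and median dwell time per interpretation, are simple to recover from key-press logs. A hedged sketch with hypothetical timestamps follows; the function and toy data are illustrative, not the authors' analysis code.

```python
from statistics import median

def reversal_stats(presses, duration_min):
    """presses: chronological (time_sec, percept) key events with
    percept in {'fap', 'fbp'}. Returns reversals per minute and the
    median dwell time (sec) for each interpretation."""
    reversals = sum(1 for a, b in zip(presses, presses[1:]) if a[1] != b[1])
    dwells = {'fap': [], 'fbp': []}
    for (t0, percept), (t1, _) in zip(presses, presses[1:]):
        dwells[percept].append(t1 - t0)
    rate = reversals / duration_min
    return rate, {p: median(d) if d else None for p, d in dwells.items()}

# Hypothetical control observer: dwells longer on the 'fap' interpretation.
events = [(0, 'fap'), (6, 'fbp'), (9, 'fap'), (16, 'fbp'), (20, 'fap'), (28, 'fbp')]
rate, dwell = reversal_stats(events, duration_min=5.0)
```

A longer median dwell for 'fap' than 'fbp' is the perceptual bias the study found in controls but not in the Asperger group.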

  4. Perceptual Space of Superimposed Dual-Frequency Vibrations in the Hands.

    PubMed

    Hwang, Inwook; Seo, Jeongil; Choi, Seungmoon

    2017-01-01

    The use of distinguishable complex vibrations that have multiple spectral components can improve the transfer of information by vibrotactile interfaces. We investigated the qualitative characteristics of dual-frequency vibrations, the simplest complex vibrations, in comparison to single-frequency vibrations. Two psychophysical experiments were conducted to elucidate the perceptual characteristics of these vibrations by measuring the perceptual distances among single-frequency and dual-frequency vibrations. Experiment I measured the perceptual distances between dual-frequency vibrations as the relative intensity ratio of their two frequency components varied. The estimated perceptual spaces for three frequency conditions showed non-linear perceptual differences between the dual-frequency and single-frequency vibrations. In Experiment II, a perceptual space was estimated from the measured perceptual distances among ten dual-frequency compositions and five single-frequency vibrations, revealing the effects of component frequency and frequency ratio. In the percept of a dual-frequency vibration, the lower frequency component had a dominant effect. Additionally, the perceptual differences between single-frequency and dual-frequency vibrations increased when the relative difference between the two frequencies of a dual-frequency vibration was small. These results are expected to provide a fundamental understanding of the perception of complex vibrations and thereby enrich the transfer of information using vibrotactile stimuli.

  5. Acoustic and perceptual effects of magnifying interaural difference cues in a simulated "binaural" hearing aid.

    PubMed

    de Taillez, Tobias; Grimm, Giso; Kollmeier, Birger; Neher, Tobias

    2018-06-01

    To investigate the influence of an algorithm designed to enhance or magnify interaural difference cues on speech signals in noisy, spatially complex conditions using both technical and perceptual measurements. To also investigate the combination of interaural magnification (IM), monaural microphone directionality (DIR), and binaural coherence-based noise reduction (BC). Speech-in-noise stimuli were generated using virtual acoustics. A computational model of binaural hearing was used to analyse the spatial effects of IM. Predicted speech quality changes and signal-to-noise-ratio (SNR) improvements were also considered. Additionally, a listening test was carried out to assess speech intelligibility and quality. Listeners aged 65-79 years with and without sensorineural hearing loss (N = 10 each). IM increased the horizontal separation of concurrent directional sound sources without introducing any major artefacts. In situations with diffuse noise, however, the interaural difference cues were distorted. Preprocessing the binaural input signals with DIR reduced distortion. IM influenced neither speech intelligibility nor speech quality. The IM algorithm tested here failed to improve speech perception in noise, probably because of the dispersion and inconsistent magnification of interaural difference cues in complex environments.

  6. Temporal Sequences Quantify the Contributions of Individual Fixations in Complex Perceptual Matching Tasks

    ERIC Educational Resources Information Center

    Busey, Thomas; Yu, Chen; Wyatte, Dean; Vanderkolk, John

    2013-01-01

    Perceptual tasks such as object matching, mammogram interpretation, mental rotation, and satellite imagery change detection often require the assignment of correspondences to fuse information across views. We apply techniques developed for machine translation to the gaze data recorded from a complex perceptual matching task modeled after…

  7. Closed head injury and perceptual processing in dual-task situations.

    PubMed

    Hein, G; Schubert, T; von Cramon, D Y

    2005-01-01

    Using a classical psychological refractory period (PRP) paradigm, we investigated whether increased interference between dual-task input processes is one possible source of dual-task deficits in patients with closed-head injury (CHI). Patients and age-matched controls were asked to give speeded motor reactions to an auditory and a visual stimulus. The perceptual difficulty of the visual stimulus was manipulated by varying its intensity. The results of Experiment 1 showed that CHI patients suffer from increased interference between dual-task input processes, which is related to the salience of the visual stimulus. A second experiment indicated that this input interference may be specific to brain damage following CHI; it is not evident in other neurological groups, such as patients with Parkinson's disease. We conclude that non-interfering processing of input stages in dual tasks requires cognitive control, and that a decline in the control of input processes should be considered one source of dual-task deficits in CHI patients.

  8. Perceptual Learning via Modification of Cortical Top-Down Signals

    PubMed Central

    Schäfer, Roland; Vasilaki, Eleni; Senn, Walter

    2007-01-01

    The primary visual cortex (V1) is pre-wired to facilitate the extraction of behaviorally important visual features. Collinear edge detectors in V1, for instance, mutually enhance each other to improve the perception of lines against a noisy background. The same pre-wiring that facilitates line extraction, however, is detrimental when subjects have to discriminate the brightness of different line segments. How is it possible to improve in one task by unsupervised practicing, without getting worse in the other task? The classical view of perceptual learning is that practicing modulates the feedforward input stream through synaptic modifications onto or within V1. However, any rewiring of V1 would deteriorate other perceptual abilities different from the trained one. We propose a general neuronal model showing that perceptual learning can modulate top-down input to V1 in a task-specific way while feedforward and lateral pathways remain intact. Consistent with biological data, the model explains how context-dependent brightness discrimination is improved by a top-down recruitment of recurrent inhibition and a top-down induced increase of the neuronal gain within V1. Both the top-down modulation of inhibition and of neuronal gain are suggested to be universal features of cortical microcircuits which enable perceptual learning. PMID:17715996

  9. Issues in Perceptual Speech Analysis in Cleft Palate and Related Disorders: A Review

    ERIC Educational Resources Information Center

    Sell, Debbie

    2005-01-01

    Perceptual speech assessment is central to the evaluation of speech outcomes associated with cleft palate and velopharyngeal dysfunction. However, the complexity of this process is perhaps sometimes underestimated. To draw together the many different strands in the complex process of perceptual speech assessment and analysis, and make…

  10. Brief Report: Simulations Suggest Heterogeneous Category Learning and Generalization in Children with Autism is a Result of Idiosyncratic Perceptual Transformations.

    PubMed

    Mercado, Eduardo; Church, Barbara A

    2016-08-01

    Children with autism spectrum disorder (ASD) sometimes have difficulties learning categories. Past computational work suggests that such deficits may result from atypical representations in cortical maps. Here we use neural networks to show that idiosyncratic transformations of inputs can result in the formation of feature maps that impair category learning for some inputs, but not for other closely related inputs. These simulations suggest that large inter- and intra-individual variations in learning capacities shown by children with ASD across similar categorization tasks may similarly result from idiosyncratic perceptual encoding that is resistant to experience-dependent changes. If so, then both feedback- and exposure-based category learning should lead to heterogeneous, stimulus-dependent deficits in children with ASD.

  11. Program Predicts Time Courses of Human/Computer Interactions

    NASA Technical Reports Server (NTRS)

    Vera, Alonso; Howes, Andrew

    2005-01-01

    CPM X is a computer program that predicts sequences of, and amounts of time taken by, routine actions performed by a skilled person performing a task. Unlike programs that simulate the interaction of the person with the task environment, CPM X predicts the time course of events as consequences of encoded constraints on human behavior. The constraints determine which cognitive and environmental processes can occur simultaneously and which have sequential dependencies. The input to CPM X comprises (1) a description of a task and strategy in a hierarchical description language and (2) a description of architectural constraints in the form of rules governing interactions of fundamental cognitive, perceptual, and motor operations. The output of CPM X is a Program Evaluation Review Technique (PERT) chart that presents a schedule of predicted cognitive, motor, and perceptual operators interacting with a task environment. The CPM X program allows direct, a priori prediction of skilled user performance on complex human-machine systems, providing a way to assess critical interfaces before they are deployed in mission contexts.
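
    The constraint-based scheduling behind a PERT chart like CPM X's output can be illustrated with a critical-path forward pass: each operator has a duration and precedence constraints, and the earliest finish of the last operator is the predicted task time. A minimal sketch, with hypothetical operator names and durations (not CPM X's actual hierarchical description language):

```python
# Hypothetical operators and durations; CPM X's real input is a
# hierarchical task/strategy description, which this sketch does not model.
def schedule(ops, deps):
    """ops: {name: duration_ms}; deps: {name: [prerequisite names]}.
    Forward pass: an operator starts when all its prerequisites finish."""
    start = {}
    def earliest(op):
        if op not in start:
            start[op] = max((earliest(p) + ops[p] for p in deps.get(op, [])),
                            default=0)
        return start[op]
    for op in ops:
        earliest(op)
    total = max(start[o] + ops[o] for o in ops)
    return start, total

ops = {"perceive": 100, "decide": 50, "move-hand": 200, "verify": 70}
deps = {"decide": ["perceive"], "move-hand": ["decide"], "verify": ["move-hand"]}
starts, total = schedule(ops, deps)   # serial chain: total = 420 ms
```

    Operators without a sequential dependency would simply share start times and overlap, which is how such models predict parallelism between cognitive, perceptual, and motor streams.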

  12. Object Correspondence across Brief Occlusion Is Established on the Basis of both Spatiotemporal and Surface Feature Cues

    ERIC Educational Resources Information Center

    Hollingworth, Andrew; Franconeri, Steven L.

    2009-01-01

    The "correspondence problem" is a classic issue in vision and cognition. Frequent perceptual disruptions, such as saccades and brief occlusion, create gaps in perceptual input. How does the visual system establish correspondence between objects visible before and after the disruption? Current theories hold that object correspondence is established…

  13. The Comparison of Visual Working Memory Representations with Perceptual Inputs

    ERIC Educational Resources Information Center

    Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew; Luck, Steven J.

    2009-01-01

    The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. In this study, the authors tested the hypothesis that differences between the memory of a stimulus array and the perception of a…

  14. Augmented Hebbian reweighting accounts for accuracy and induced bias in perceptual learning with reverse feedback

    PubMed Central

    Liu, Jiajuan; Dosher, Barbara Anne; Lu, Zhong-Lin

    2015-01-01

    Using an asymmetrical set of vernier stimuli (−15″, −10″, −5″, +10″, +15″) together with reverse feedback on the small subthreshold offset stimulus (−5″) induces response bias in performance (Aberg & Herzog, 2012; Herzog, Eward, Hermens, & Fahle, 2006; Herzog & Fahle, 1999). These conditions are of interest for testing models of perceptual learning because the world does not always present balanced stimulus frequencies or accurate feedback. Here we provide a comprehensive model for the complex set of asymmetric training results using the augmented Hebbian reweighting model (Liu, Dosher, & Lu, 2014; Petrov, Dosher, & Lu, 2005, 2006) and the multilocation integrated reweighting theory (Dosher, Jeter, Liu, & Lu, 2013). The augmented Hebbian learning algorithm incorporates trial-by-trial feedback, when present, as another input to the decision unit and uses the observer's internal response to update the weights otherwise; block feedback alters the weights on bias correction (Liu et al., 2014). Asymmetric training with reversed feedback incorporates biases into the weights between representation and decision. The model correctly predicts the basic induction effect, its dependence on trial-by-trial feedback, and the specificity of bias to stimulus orientation and spatial location, extending the range of augmented Hebbian reweighting accounts of perceptual learning. PMID:26418382
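
    The feedback rule described above can be sketched in a few lines. This is a simplified delta-style illustration of the reweighting principle, assuming a hypothetical 8-unit representation and a tanh decision unit; it is not the exact AHRM of Petrov, Dosher, and Lu:

```python
import numpy as np

# Simplified sketch of the reweighting principle described above:
# representation activations feed a decision unit through learned weights;
# when trial-by-trial feedback is present it supplies the post-synaptic
# teaching signal, otherwise the unit's own internal response is used
# (so, in this sketch, weights do not change without feedback).
# The 8-unit representation and tanh decision unit are hypothetical.

rng = np.random.default_rng(0)
template = np.ones(8) / np.sqrt(8)   # hypothetical "signal" direction
w = rng.normal(0.0, 0.1, 8)          # representation-to-decision weights
lr = 0.05

def trial(label, feedback=True):
    """One trial: label is +1/-1; returns the internal decision response."""
    global w
    a = label * template + 0.5 * rng.normal(size=8)  # noisy activation
    o = np.tanh(w @ a)                               # internal response
    post = float(label) if feedback else o           # feedback as teaching signal
    w = w + lr * a * (post - o)                      # Hebbian-style reweighting
    return o

for _ in range(500):                                 # training with feedback
    trial(rng.choice([-1, 1]))

tests = [1, -1] * 100                                # evaluation, no feedback
accuracy = np.mean([np.sign(trial(l, feedback=False)) == l for l in tests])
```

    Biased training (asymmetric stimulus frequencies or reversed feedback) would, under the same rule, pull the weights toward the over-trained response, which is the induction effect the model accounts for.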

  16. Attention model of binocular rivalry

    PubMed Central

    Rankin, James; Rinzel, John; Carrasco, Marisa; Heeger, David J.

    2017-01-01

    When the corresponding retinal locations in the two eyes are presented with incompatible images, a stable percept gives way to perceptual alternations in which the two images compete for perceptual dominance. As perceptual experience evolves dynamically under constant external inputs, binocular rivalry has been used for studying intrinsic cortical computations and for understanding how the brain regulates competing inputs. Converging behavioral and EEG results have shown that binocular rivalry and attention are intertwined: binocular rivalry ceases when attention is diverted away from the rivalry stimuli. In addition, the competing image in one eye suppresses the target in the other eye through a pattern of gain changes similar to those induced by attention. These results require a revision of the current computational theories of binocular rivalry, in which the role of attention is ignored. Here, we provide a computational model of binocular rivalry. In the model, competition between two images in rivalry is driven by both attentional modulation and mutual inhibition, which have distinct selectivity (feature vs. eye of origin) and dynamics (relatively slow vs. relatively fast). The proposed model explains a wide range of phenomena reported in rivalry, including the three hallmarks: (i) binocular rivalry requires attention; (ii) various perceptual states emerge when the two images are swapped between the eyes multiple times per second; (iii) the dominance duration as a function of input strength follows Levelt’s propositions. With a bifurcation analysis, we identified the parameter space in which the model’s behavior was consistent with experimental results. PMID:28696323

  17. Predictive Coding or Evidence Accumulation? False Inference and Neuronal Fluctuations

    PubMed Central

    Friston, Karl J.; Kleinschmidt, Andreas

    2010-01-01

    Perceptual decisions can be made when sensory input affords an inference about what generated that input. Here, we report findings from two independent perceptual experiments conducted during functional magnetic resonance imaging (fMRI) with a sparse event-related design. The first experiment, in the visual modality, involved forced-choice discrimination of coherence in random dot kinematograms that contained either subliminal or periliminal motion coherence. The second experiment, in the auditory domain, involved free response detection of (non-semantic) near-threshold acoustic stimuli. We analysed fluctuations in ongoing neural activity, as indexed by fMRI, and found that neuronal activity in sensory areas (extrastriate visual and early auditory cortex) biases perceptual decisions towards correct inference and not towards a specific percept. Hits (detection of near-threshold stimuli) were preceded by significantly higher activity than both misses of identical stimuli or false alarms, in which percepts arise in the absence of appropriate sensory input. In accord with predictive coding models and the free-energy principle, this observation suggests that cortical activity in sensory brain areas reflects the precision of prediction errors and not just the sensory evidence or prediction errors per se. PMID:20369004

  18. Perceptual load in sport and the heuristic value of the perceptual load paradigm in examining expertise-related perceptual-cognitive adaptations.

    PubMed

    Furley, Philip; Memmert, Daniel; Schmid, Simone

    2013-03-01

    In two experiments, we transferred perceptual load theory to the dynamic field of team sports and tested the predictions derived from the theory using a novel task and stimuli. We tested a group of college students (N = 33) and a group of expert team sport players (N = 32) on a general perceptual load task and a complex, soccer-specific perceptual load task in order to extend the understanding of the applicability of perceptual load theory, and to investigate whether distractor interference differs between the groups, as the sport-specific processing task may not exhaust the processing capacity of the expert participants. In both the general and the specific task, the pattern of results supported perceptual load theory and demonstrated that the predictions of the theory also transfer to more complex, unstructured situations. Further, perceptual load was the only determinant of distractor processing: we found expertise effects neither in the general perceptual load task nor in the sport-specific task. We discuss the heuristic utility of using response-competition paradigms for studying both general and domain-specific perceptual-cognitive adaptations.

  19. Assessing the Neural Basis of Uncertainty in Perceptual Category Learning through Varying Levels of Distortion

    ERIC Educational Resources Information Center

    Daniel, Reka; Wagner, Gerd; Koch, Kathrin; Reichenbach, Jurgen R.; Sauer, Heinrich; Schlosser, Ralf G. M.

    2011-01-01

    The formation of new perceptual categories involves learning to extract that information from a wide range of often noisy sensory inputs, which is critical for selecting between a limited number of responses. To identify brain regions involved in visual classification learning under noisy conditions, we developed a task on the basis of the…

  20. Training-Induced Recovery of Low-Level Vision Followed by Mid-Level Perceptual Improvements in Developmental Object and Face Agnosia

    ERIC Educational Resources Information Center

    Lev, Maria; Gilaie-Dotan, Sharon; Gotthilf-Nezri, Dana; Yehezkel, Oren; Brooks, Joseph L.; Perry, Anat; Bentin, Shlomo; Bonneh, Yoram; Polat, Uri

    2015-01-01

    Long-term deprivation of normal visual inputs can cause perceptual impairments at various levels of visual function, from basic visual acuity deficits, through mid-level deficits such as contour integration and motion coherence, to high-level face and object agnosia. Yet it is unclear whether training during adulthood, at a post-developmental…

  1. Enhanced perceptual functioning in autism: an update, and eight principles of autistic perception.

    PubMed

    Mottron, Laurent; Dawson, Michelle; Soulières, Isabelle; Hubert, Benedicte; Burack, Jake

    2006-01-01

    We propose an "Enhanced Perceptual Functioning" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in "complex" visual tasks, enhanced perception of first order static stimuli, diminished perception of complex movement, autonomy of low-level information processing toward higher-order operations, and differential relation between perception and general intelligence. Increased perceptual expertise may be implicated in the choice of special ability in savant autistics, and in the variability of apparent presentations within PDD (autism with and without typical speech, Asperger syndrome) in non-savant autistics. The overfunctioning of brain regions typically involved in primary perceptual functions may explain the autistic perceptual endophenotype.

  2. Is Statistical Learning Constrained by Lower Level Perceptual Organization?

    PubMed Central

    Emberson, Lauren L.; Liu, Ran; Zevin, Jason D.

    2013-01-01

    In order for statistical information to aid in complex developmental processes such as language acquisition, learning from higher-order statistics (e.g. across successive syllables in a speech stream to support segmentation) must be possible while perceptual abilities (e.g. speech categorization) are still developing. The current study examines how perceptual organization interacts with statistical learning. Adult participants were presented with multiple exemplars from novel, complex sound categories designed to reflect some of the spectral complexity and variability of speech. These categories were organized into sequential pairs and presented such that higher-order statistics, defined based on sound categories, could support stream segmentation. Perceptual similarity judgments and multi-dimensional scaling revealed that participants only perceived three perceptual clusters of sounds and thus did not distinguish the four experimenter-defined categories, creating a tension between lower level perceptual organization and higher-order statistical information. We examined whether the resulting pattern of learning is more consistent with statistical learning being “bottom-up,” constrained by the lower levels of organization, or “top-down,” such that higher-order statistical information of the stimulus stream takes priority over the perceptual organization, and perhaps influences perceptual organization. We consistently find evidence that learning is constrained by perceptual organization. Moreover, participants generalize their learning to novel sounds that occupy a similar perceptual space, suggesting that statistical learning occurs based on regions of or clusters in perceptual space. Overall, these results reveal a constraint on learning of sound sequences, such that statistical information is determined based on lower level organization. These findings have important implications for the role of statistical learning in language acquisition. PMID:23618755
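
    The analysis step described above, recovering a perceptual space from pairwise similarity judgments, can be sketched with classical (Torgerson) multidimensional scaling. The 3-cluster dissimilarity matrix below is synthetic, standing in for averaged similarity ratings:

```python
import numpy as np

# Classical (Torgerson) MDS: double-center the squared dissimilarities,
# then embed using the top eigenpairs of the resulting Gram matrix.

def classical_mds(D, dims=2):
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dims]        # largest eigenvalues first
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# 12 hypothetical sounds in three perceptual clusters: within-cluster
# dissimilarity 1, between-cluster dissimilarity 10.
labels = np.repeat([0, 1, 2], 4)
D = np.where(labels[:, None] == labels[None, :], 1.0, 10.0)
np.fill_diagonal(D, 0.0)
pts = classical_mds(D)                          # (12, 2) perceptual coordinates
```

    If, as in the study, the embedding shows three clusters where four categories were defined, the statistics computed over experimenter-defined categories and those computed over perceptual clusters come apart, which is exactly the tension the experiment exploits.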

  3. Biased and unbiased perceptual decision-making on vocal emotions.

    PubMed

    Dricu, Mihai; Ceravolo, Leonardo; Grandjean, Didier; Frühholz, Sascha

    2017-11-24

    Perceptual decision-making on emotions involves gathering sensory information about the affective state of another person and forming a decision on the likelihood of a particular state. These perceptual decisions can be of varying complexity as determined by different contexts. We used functional magnetic resonance imaging and a region-of-interest approach to investigate the brain activation and functional connectivity behind two forms of perceptual decision-making. More complex unbiased decisions on affective voices recruited an extended bilateral network consisting of the posterior inferior frontal cortex, the orbitofrontal cortex, the amygdala, and voice-sensitive areas in the auditory cortex. Less complex biased decisions on affective voices distinctly recruited the right mid inferior frontal cortex, pointing to a functional distinction in this region following decisional requirements. Furthermore, task-induced neural connectivity revealed stronger connections between these frontal, auditory, and limbic regions during unbiased relative to biased decision-making on affective voices. Together, the data show that different types of perceptual decision-making on auditory emotions have distinct patterns of activation and functional coupling that follow the decisional strategies and cognitive mechanisms involved during these perceptual decisions.

  4. Human visual perceptual organization beats thinking on speed.

    PubMed

    van der Helm, Peter A

    2017-05-01

    What is the degree to which knowledge influences visual perceptual processes? This question, which is central to the seeing-versus-thinking debate in cognitive science, is often discussed using examples claimed to be proof of one stance or another. It has, however, also been muddled by the usage of different and unclear definitions of perception. Here, for the well-defined process of perceptual organization, I argue that including speed (or efficiency) into the equation opens a new perspective on the limits of top-down influences of thinking on seeing. While the input of the perceptual organization process may be modifiable and its output enrichable, the process itself seems so fast (or efficient) that thinking hardly has time to intrude and is effective mostly after the fact.

  5. The plastic ear and perceptual relearning in auditory spatial perception

    PubMed Central

    Carlile, Simon

    2014-01-01

    The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs in the perception of sound location and consider a range of recent experiments examining the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10–60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what teacher signal drives this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of motor state on auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5–10 days) significantly improves both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses. PMID:25147497

  6. Perceptual learning improves visual performance in juvenile amblyopia.

    PubMed

    Li, Roger W; Young, Karen G; Hoenig, Pia; Levi, Dennis M

    2005-09-01

    To determine whether practicing a position-discrimination task improves visual performance in children with amblyopia and to determine the mechanism(s) of improvement. Five children (age range, 7-10 years) with amblyopia practiced a positional acuity task in which they had to judge which of three pairs of lines was misaligned. Positional noise was produced by distributing the individual patches of each line segment according to a Gaussian probability function. Observers were trained at three noise levels (including 0), with each observer performing between 3000 and 4000 responses in 7 to 10 sessions. Trial-by-trial feedback was provided. Four of the five observers showed significant improvement in positional acuity. In those four observers, on average, positional acuity with no noise improved by approximately 32% and with high noise by approximately 26%. A position-averaging model was used to parse the improvement into an increase in efficiency or a decrease in equivalent input noise. Two observers showed increased efficiency (51% and 117% improvements) with no significant change in equivalent input noise across sessions. The other two observers showed both a decrease in equivalent input noise (18% and 29%) and an increase in efficiency (17% and 71%). All five observers showed substantial improvement in Snellen acuity (approximately 26%) after practice. Perceptual learning can improve visual performance in amblyopic children. The improvement can be parsed into two important factors: decreased equivalent input noise and increased efficiency. Perceptual learning techniques may add an effective new method to the armamentarium of amblyopia treatments.
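
    The decomposition described above can be sketched with the standard equivalent-noise relation, in which squared threshold is proportional to (external noise + equivalent internal noise) / efficiency; measuring thresholds at two external noise levels then yields both unknowns. The relation and the numbers below are illustrative of this common modeling approach, not the paper's position-averaging model or the observers' data:

```python
# Equivalent-noise decomposition (linear-amplifier-style relation):
#   T^2 = (N_ext + N_eq) / eta
# Thresholds at external noise 0 and n_high give two equations in
# the two unknowns N_eq (equivalent input noise) and eta (efficiency).

def decompose(t_zero, t_high, n_high):
    """t_zero, t_high: thresholds at external noise 0 and n_high."""
    eta = n_high / (t_high ** 2 - t_zero ** 2)   # from the difference of squares
    n_eq = eta * t_zero ** 2                     # from the zero-noise condition
    return n_eq, eta

# Hypothetical pre- vs post-training thresholds (arbitrary units):
pre = decompose(1.0, 2.0, 9.0)     # (N_eq, eta) before training
post = decompose(0.8, 1.7, 9.0)    # lower thresholds after training
```

    With these made-up numbers the fit attributes the improvement to both factors at once: efficiency rises and equivalent input noise falls, mirroring the mixed pattern reported for the observers.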

  7. Data-Driven Modeling and Rendering of Force Responses from Elastic Tool Deformation

    PubMed Central

    Rakhmatov, Ruslan; Ogay, Tatyana; Jeon, Seokhee

    2018-01-01

    This article presents a new data-driven model design for rendering force responses from elastic tool deformation. The new design incorporates a six-dimensional input describing the initial position of the contact, as well as the state of the tool deformation. The input-output relationship of the model was represented by a radial basis functions network, which was optimized based on training data collected from real tool-surface contact. Since the input space of the model is represented in the local coordinate system of a tool, the model is independent of recording and rendering devices and can be easily deployed to an existing simulator. The model also supports complex interactions, such as self and multi-contact collisions. In order to assess the proposed data-driven model, we built a custom data acquisition setup and developed a proof-of-concept rendering simulator. The simulator was evaluated through numerical and psychophysical experiments with four different real tools. The numerical evaluation demonstrated the perceptual soundness of the proposed model, meanwhile the user study revealed the force feedback of the proposed simulator to be realistic. PMID:29342964
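
    The modeling idea can be sketched as a Gaussian radial basis function network mapping a 6-D input (contact position plus tool deformation state) to a force response, with output weights fit by least squares. The centers, width, and synthetic training data below are illustrative stand-ins for the recorded tool-surface contact data and for whatever optimization the authors used:

```python
import numpy as np

# Gaussian RBF regression sketch: one basis function per center,
# output weights solved in closed form by linear least squares.

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 6))          # 6-D inputs (hypothetical units)
y = np.exp(-np.sum(X ** 2, axis=1))       # synthetic "force magnitude" target

centers = X[rng.choice(len(X), 60, replace=False)]
sigma = 1.0                                # shared Gaussian width

def design(X):
    """Gaussian RBF design matrix: one column per center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

w, *_ = np.linalg.lstsq(design(X), y, rcond=None)  # output weights
err = float(np.abs(design(X) @ w - y).mean())      # mean training error
```

    Because both inputs and centers live in the tool's local coordinate frame, the fitted model carries over to any rendering device that can express contact state in that frame, which is the device independence the article emphasizes.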

  8. The power of liking: Highly sensitive aesthetic processing for guiding us through the world

    PubMed Central

    Faerber, Stella J.; Carbon, Claus-Christian

    2012-01-01

    Assessing liking is one of the most intriguing and influencing types of processing we experience day by day. We can decide almost instantaneously what we like and are highly consistent in our assessments, even across cultures. Still, the underlying mechanism is not well understood and often neglected by vision scientists. Several potential predictors for liking are discussed in the literature, among them very prominently typicality. Here, we analysed the impact of subtle changes of two perceptual dimensions (shape and colour saturation) of three-dimensional models of chairs on typicality and liking. To increase the validity of testing, we utilized a test-adaptation–retest design for extracting sensitivity data of both variables from a static (test only) as well as from a dynamic perspective (test–retest). We showed that typicality was only influenced by shape properties, whereas liking combined processing of shape plus saturation properties, indicating more complex and integrative processing. Processing the aesthetic value of objects, persons, or scenes is an essential and sophisticated mechanism, which seems to be highly sensitive to the slightest variations of perceptual input. PMID:23145310

  9. Native sound category formation in simultaneous bilingual acquisition

    NASA Astrophysics Data System (ADS)

    Bosch, Laura

    2004-05-01

    The consequences of early bilingual exposure on the perceptual reorganization processes that occur by the end of the first year of life were analyzed in a series of experiments on the capacity to discriminate vowel and consonant contrasts, comparing monolingual and bilingual (Catalan/Spanish) infants at different age levels. For bilingual infants, the discrimination of target vowel contrasts, which reflect different amounts of overlap and acoustic distance between the two languages of exposure, suggested a U-shaped developmental pattern. A similar trend was observed in the bilingual infants' discrimination of a fricative voicing contrast present in only one of the languages in their environment. The temporary decline in sensitivity found at 8 months for vowel targets and at 12 months for the voicing contrast reveals the specific perceptual processes that bilingual infants develop in order to deal with their complex linguistic input. Data from adult bilingual subjects on a lexical decision task involving these contrasts add to this developmental picture and suggest the existence of a dominant language even in simultaneous bilingual acquisition. [Work supported by JSMF 10001079BMB.]

  10. Perceptual grouping enhances visual plasticity.

    PubMed

    Mastropasqua, Tommaso; Turatto, Massimo

    2013-01-01

    Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (oriented grating), while two similar task-irrelevant stimuli were presented in the adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity.

  11. Development of Attentional Control of Verbal Auditory Perception from Middle to Late Childhood: Comparisons to Healthy Aging

    ERIC Educational Resources Information Center

    Passow, Susanne; Müller, Maike; Westerhausen, René; Hugdahl, Kenneth; Wartenburger, Isabell; Heekeren, Hauke R.; Lindenberger, Ulman; Li, Shu-Chen

    2013-01-01

    Multitalker situations confront listeners with a plethora of competing auditory inputs, and hence require selective attention to relevant information, especially when the perceptual saliency of distracting inputs is high. This study augmented the classical forced-attention dichotic listening paradigm by adding an interaural intensity manipulation…

  12. A crossmodal crossover: opposite effects of visual and auditory perceptual load on steady-state evoked potentials to irrelevant visual stimuli.

    PubMed

    Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B

    2012-07-16

    Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another is less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.

  13. Perceptual Visual Grouping under Inattention: Electrophysiological Functional Imaging

    ERIC Educational Resources Information Center

    Razpurker-Apfeld, Irene; Pratt, Hillel

    2008-01-01

    Two types of perceptual visual grouping, differing in complexity of shape formation, were examined under inattention. Fourteen participants performed a similarity judgment task concerning two successive briefly presented central targets surrounded by task-irrelevant simple and complex grouping patterns. Event-related potentials (ERPs) were…

  14. Linking Cognitive and Visual Perceptual Decline in Healthy Aging: The Information Degradation Hypothesis

    PubMed Central

    Monge, Zachary A.; Madden, David J.

    2016-01-01

    Several hypotheses attempt to explain the relation between cognitive and perceptual decline in aging (e.g., common-cause, sensory deprivation, cognitive load on perception, information degradation). Unfortunately, the majority of past studies examining this association have used correlational analyses, not allowing for these hypotheses to be tested sufficiently. This correlational issue is especially relevant for the information degradation hypothesis, which states that degraded perceptual signal inputs, resulting from either age-related neurobiological processes (e.g., retinal degeneration) or experimental manipulations (e.g., reduced visual contrast), lead to errors in perceptual processing, which in turn may affect non-perceptual, higher-order cognitive processes. Even though the majority of studies examining the relation between age-related cognitive and perceptual decline have been correlational, we reviewed several studies demonstrating that visual manipulations affect both younger and older adults’ cognitive performance, supporting the information degradation hypothesis and contradicting implications of other hypotheses (e.g., common-cause, sensory deprivation, cognitive load on perception). The reviewed evidence indicates the necessity to further examine the information degradation hypothesis in order to identify mechanisms underlying age-related cognitive decline. PMID:27484869

  15. Perceptual Contrast Enhancement with Dynamic Range Adjustment

    PubMed Central

    Zhang, Hong; Li, Yuecheng; Chen, Hao; Yuan, Ding; Sun, Mingui

    2013-01-01

    In recent years, although great efforts have been made to improve the performance of histogram equalization (HE), few HE methods take human visual perception (HVP) into account explicitly. The human visual system (HVS) is more sensitive to edges than to brightness. This paper exploits this property and develops a perceptual contrast enhancement approach with dynamic range adjustment through histogram modification. The use of perceptual contrast connects the image enhancement problem with the HVS. To pre-condition the input image before the HE procedure is applied, a perceptual contrast map (PCM) is constructed based on a modified Difference of Gaussian (DOG) algorithm. As a result, the contrast of the image is sharpened and high-frequency noise is suppressed. A modified Clipped Histogram Equalization (CHE) is also developed, which improves visual quality by automatically detecting the dynamic range of the image with improved perceptual contrast. Experimental results show that the new HE algorithm outperforms several state-of-the-art algorithms in improving perceptual contrast and enhancing details. In addition, the new algorithm is simple to implement, making it suitable for real-time applications. PMID:24339452

  16. Perceptual load-dependent neural correlates of distractor interference inhibition.

    PubMed

    Xu, Jiansong; Monterosso, John; Kober, Hedy; Balodis, Iris M; Potenza, Marc N

    2011-01-18

    The load theory of selective attention hypothesizes that distractor interference is suppressed after perceptual processing (i.e., in the later stage of central processing) at low perceptual load of the central task, but in the early stage of perceptual processing at high perceptual load. Consistently, studies on the neural correlates of attention have found a smaller distractor-related activation in the sensory cortex at high relative to low perceptual load. However, it is not clear whether the distractor-related activation in brain regions linked to later stages of central processing (e.g., in the frontostriatal circuits) is also smaller at high rather than low perceptual load, as might be predicted based on the load theory. We studied 24 healthy participants using functional magnetic resonance imaging (fMRI) during a visual target identification task with two perceptual loads (low vs. high). Participants showed distractor-related increases in activation in the midbrain, striatum, occipital and medial and lateral prefrontal cortices at low load, but distractor-related decreases in activation in the midbrain ventral tegmental area and substantia nigra (VTA/SN), striatum, thalamus, and extensive sensory cortices at high load. Multiple levels of central processing involving midbrain and frontostriatal circuits participate in suppressing distractor interference at either low or high perceptual load. For suppressing distractor interference, the processing of sensory inputs in both early and late stages of central processing is enhanced at low load but inhibited at high load.

  17. Perceptual and processing differences between physical and dichorhinic odor mixtures.

    PubMed

    Schütze, M; Negoias, S; Olsson, M J; Hummel, T

    2014-01-31

    Perceptual integration of sensory input from our two nostrils has received little attention in comparison to lateralized inputs for vision and hearing. Here, we investigated whether a binary odor mixture of eugenol and l-carvone (smells of cloves and caraway) would be perceived differently if presented as a mixture in one nostril (physical mixture), vs. the same two odorants in separate nostrils (dichorhinic mixture). In parallel, we investigated whether the different types of presentation resulted in differences in olfactory event-related potentials (OERP). Psychophysical ratings showed that the dichorhinic mixtures were perceived as more intense than the physical mixtures. A tendency for a shift in perceived quality was also observed. In line with these perceptual changes, the OERP showed a shift in latencies and amplitudes for the early (more "sensory") peaks P1 and N1, whereas no significant differences were observed for the later (more "cognitive") peak P2. Altogether, the results suggest that the peripheral level is a site of interaction between odorants. Both psychophysical ratings and, for the first time, electrophysiological measurements converge on this conclusion. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  18. Competing streams at the cocktail party: Exploring the mechanisms of attention and temporal integration

    PubMed Central

    Xiang, Juanjuan; Simon, Jonathan; Elhilali, Mounya

    2010-01-01

    Processing of complex acoustic scenes depends critically on the temporal integration of sensory information as sounds evolve naturally over time. It has been previously speculated that this process is guided by both innate mechanisms of temporal processing in the auditory system, as well as top-down mechanisms of attention, and possibly other schema-based processes. In an effort to unravel the neural underpinnings of these processes and their role in scene analysis, we combine Magnetoencephalography (MEG) with behavioral measures in humans in the context of polyrhythmic tone sequences. While maintaining unchanged sensory input, we manipulate subjects’ attention to one of two competing rhythmic streams in the same sequence. The results reveal that the neural representation of the attended rhythm is significantly enhanced both in its steady-state power and spatial phase coherence relative to its unattended state, closely correlating with its perceptual detectability for each listener. Interestingly, the data reveal a differential efficiency of rhythmic rates of the order of a few hertz during the streaming process, closely following known neural and behavioral measures of temporal modulation sensitivity in the auditory system. These findings establish a direct link between known temporal modulation tuning in the auditory system (particularly at the level of auditory cortex) and the temporal integration of perceptual features in a complex acoustic scene, while mediated by processes of attention. PMID:20826671
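The two neural measures named above, steady-state power and phase coherence at the rhythm's frequency, can be illustrated with a frequency-tagging toy. The sketch below computes inter-trial phase coherence as a generic stand-in (the study reports spatial phase coherence across sensors); the function name and parameters are assumptions, not the authors' pipeline.

```python
import numpy as np

def steady_state_measures(trials, fs, freq):
    """Power and inter-trial phase coherence at one target frequency.

    trials: array of shape (n_trials, n_samples), sampled at fs Hz.
    """
    n = trials.shape[1]
    spectra = np.fft.rfft(trials, axis=1)
    k = int(round(freq * n / fs))                     # FFT bin of the rhythm
    comp = spectra[:, k]
    power = np.mean(np.abs(comp) ** 2)
    coherence = np.abs(np.mean(comp / np.abs(comp)))  # unit phasors, 0..1
    return power, coherence
```

Responses phase-locked to an attended rhythm yield coherence near 1; with random phase across trials, coherence falls toward 1/sqrt(n_trials).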

  19. Perceptual Learning, Cognition, and Expertise

    ERIC Educational Resources Information Center

    Kellman, Philip J.; Massey, Christine M.

    2013-01-01

    Recent research indicates that perceptual learning (PL)--experience-induced changes in the way perceivers extract information--plays a larger role in complex cognitive tasks, including abstract and symbolic domains, than has been understood in theory or implemented in instruction. Here, we describe the involvement of PL in complex cognitive tasks…

  20. On the independence of visual awareness and metacognition: a signal detection theoretic analysis.

    PubMed

    Jachs, Barbara; Blanco, Manuel J; Grantham-Hill, Sarah; Soto, David

    2015-04-01

    Classically, visual awareness and metacognition are thought to be intimately linked, with our knowledge of the correctness of perceptual choices (henceforth metacognition) being dependent on the level of stimulus awareness. Here we used a signal detection theoretic approach involving a Gabor orientation discrimination task in conjunction with trial-by-trial ratings of perceptual awareness and response confidence in order to gauge estimates of type-1 (perceptual) orientation sensitivity and type-2 (metacognitive) sensitivity at different levels of stimulus awareness. Data from three experiments indicate that while the level of stimulus awareness had a profound impact on type-1 perceptual sensitivity, the awareness effect on type-2 metacognitive sensitivity was far lower by comparison. The present data pose a challenge for signal detection theoretic models in which both type-1 (perceptual) and type-2 (metacognitive) processes are assumed to operate on the same input. More broadly, the findings challenge the commonly held view that metacognition is tightly coupled to conscious states. (c) 2015 APA, all rights reserved.
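The type-1/type-2 distinction above can be made concrete with the standard equal-variance Gaussian signal detection computation. This is a textbook sketch using only the Python standard library; the study's actual metacognitive analysis is more involved, and the function names are mine.

```python
from statistics import NormalDist

_z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def dprime(hit_rate, fa_rate):
    """Type-1 sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    return _z(hit_rate) - _z(fa_rate)

def type2_dprime(p_high_conf_correct, p_high_conf_error):
    """Type-2 (metacognitive) sensitivity: treat high confidence after a
    correct choice as a 'hit' and after an error as a 'false alarm'."""
    return dprime(p_high_conf_correct, p_high_conf_error)
```

With this framing, the paper's finding amounts to type-1 sensitivity varying strongly with awareness level while the type-2 analogue varies much less.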

  1. Performance of Cerebral Palsied Children under Conditions of Reduced Auditory Input on Selected Intellectual, Cognitive and Perceptual Tasks.

    ERIC Educational Resources Information Center

    Fassler, Joan

    The study investigated the task performance of cerebral palsied children under conditions of reduced auditory input and under normal auditory conditions. A non-cerebral palsied group was studied in a similar manner. Results indicated that cerebral palsied children showed some positive change in performance, under conditions of reduced auditory…

  2. Perceptual Grouping Enhances Visual Plasticity

    PubMed Central

    Mastropasqua, Tommaso; Turatto, Massimo

    2013-01-01

    Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus feature that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (oriented grating), while two similar task-irrelevant stimuli were presented in the adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity. PMID:23301100

  3. Integrating mechanisms of visual guidance in naturalistic language production.

    PubMed

    Coco, Moreno I; Keller, Frank

    2015-05-01

    Situated language production requires the integration of visual attention and linguistic processing. Previous work has not conclusively disentangled the role of perceptual scene information and structural sentence information in guiding visual attention. In this paper, we present an eye-tracking study that demonstrates that three types of guidance, perceptual, conceptual, and structural, interact to control visual attention. In a cued language production experiment, we manipulate perceptual (scene clutter) and conceptual guidance (cue animacy) and measure structural guidance (syntactic complexity of the utterance). Analysis of the time course of language production, before and during speech, reveals that all three forms of guidance affect the complexity of visual responses, quantified in terms of the entropy of attentional landscapes and the turbulence of scan patterns, especially during speech. We find that perceptual and conceptual guidance mediate the distribution of attention in the scene, whereas structural guidance closely relates to scan pattern complexity. Furthermore, the eye-voice span of the cued object and its perceptual competitor are similar; its latency mediated by both perceptual and structural guidance. These results rule out a strict interpretation of structural guidance as the single dominant form of visual guidance in situated language production. Rather, the phase of the task and the associated demands of cross-modal cognitive processing determine the mechanisms that guide attention.
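The abstract quantifies the complexity of visual responses via the entropy of attentional landscapes. A minimal Shannon-entropy sketch over fixation counts makes the measure concrete; binning fixations into discrete scene regions is an assumption here, not the paper's exact landscape computation.

```python
import math

def attention_entropy(fixation_counts):
    """Shannon entropy (bits) of a fixation distribution over scene
    regions: higher values mean attention is spread more evenly."""
    total = sum(fixation_counts)
    probs = [c / total for c in fixation_counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)
```

Fixations spread evenly over four regions give 2 bits; all fixations on a single region give 0 bits, so cluttered scenes that disperse attention show up as higher entropy.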

  4. Neuropsychiatry of complex visual hallucinations.

    PubMed

    Mocellin, Ramon; Walterfang, Mark; Velakoulis, Dennis

    2006-09-01

    To describe the phenomenology and pathophysiology of complex visual hallucinations (CVH) in various organic states, in particular Charles Bonnet syndrome and peduncular hallucinosis. Three cases of CVH in the setting of pontine infarction, thalamic infarction and temporoparietal epileptiform activity are presented and the available psychiatric, neurological and biological literature on the structures of the central nervous system involved in producing hallucinatory states is reviewed. Complex visual hallucinations can arise from a variety of processes involving the retinogeniculocalcarine tract, or ascending brainstem modulatory structures. The cortical activity responsible for hallucinations results from altered or reduced input into these regions, or a loss of ascending inhibition of their afferent pathways. A significant degree of overlap exists between the concepts of Charles Bonnet syndrome and peduncular hallucinosis. The fluidity of these eponymous syndromes reduces their validity and meaning, and may result in an inappropriate attribution of the underlying pathology. An understanding of how differing pathologies may produce CVH allows for the appropriate tailoring of treatment, depending on the site and nature of the lesion and content of perceptual disturbance.

  5. Feed Forward Programming of Car Drivers’ Eye Movement Behavior. A System Theoretical Approach. Volume 1

    DTIC Science & Technology

    1980-02-01

    …with lateral information input, as they fixate on the road's lines frequently. Beginners, as driving teachers report, have great difficulties in…perceptual system. Boston: Houghton Mifflin Co., 1966. GIBSON, J.J.: Principles of perceptual learning and development. New Jersey: Prentice Hall Inc.

  6. Dynamics of infant cortical auditory evoked potentials (CAEPs) for tone and speech tokens.

    PubMed

    Cone, Barbara; Whitaker, Richard

    2013-07-01

    Cortical auditory evoked potentials (CAEPs) to tones and speech sounds were obtained in infants to: (1) further knowledge of auditory development above the level of the brainstem during the first year of life; (2) establish CAEP input-output functions for tonal and speech stimuli as a function of stimulus level, and (3) expand the database of CAEPs in infants tested while awake using clinically relevant stimuli, thus providing methodology that would have translation to pediatric audiological assessment. Hypotheses concerning CAEP development were that the latency and amplitude input-output functions would reflect immaturity in encoding stimulus level. In a second experiment, infants were tested with the same stimuli used to evoke the CAEPs. Thresholds for these stimuli were determined using observer-based psychophysical techniques. The hypothesis was that the behavioral thresholds would be correlated with CAEP input-output functions because of shared cortical response areas known to be active in sound detection. 36 infants, between the ages of 4 and 12 months (mean=8 months, s.d.=1.8 months) and 9 young adults (mean age 21 years) with normal hearing were tested. First, CAEP amplitude and latency input-output functions were obtained for 4 tone bursts and 7 speech tokens. The tone burst stimuli were 50 ms tokens of pure tones at 0.5, 1.0, 2.0 and 4.0 kHz. The speech sound tokens, /a/, /i/, /o/, /u/, /m/, /s/, and /∫/, were created from natural speech samples and were also 50 ms in duration. CAEPs were obtained for tone burst and speech token stimuli at 10 dB level decrements in descending order from 70 dB SPL. All CAEP tests were completed while the infants were awake and engaged in quiet play. For the second experiment, observer-based psychophysical methods were used to establish perceptual threshold for the same speech sound and tone tokens. Infant CAEP component latencies were prolonged by 100-150 ms in comparison to adults.
CAEP latency-intensity input-output functions were steeper in infants compared to adults. CAEP amplitude growth functions with respect to stimulus SPL are adult-like at this age, particularly for the earliest component, P1-N1. Infant perceptual thresholds were elevated with respect to those found in adults. Furthermore, perceptual thresholds were higher, on average, than levels at which CAEPs could be obtained. When CAEP amplitudes were plotted with respect to perceptual threshold (dB SL), the infant CAEP amplitude growth slopes were steeper than in adults. Although CAEP latencies indicate immaturity in neural transmission at the level of the cortex, amplitude growth with respect to stimulus SPL is adult-like at this age, particularly for the earliest component, P1-N1. The latency and amplitude input-output functions may provide additional information as to how infants perceive stimulus level. The discrepancy between electrophysiologic and perceptual thresholds may be due to immaturity in perceptual temporal resolution abilities and the broad-band listening strategy employed by infants. The findings from the current study can be translated to the clinical setting. It is possible to use tonal or speech sound tokens to evoke CAEPs in an awake, passively alert infant, and thus determine whether these sounds activate the auditory cortex. This could be beneficial in the verification of hearing aid or cochlear implant benefit. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  7. Self-motion Perception Training: Thresholds Improve in the Light but not in the Dark

    PubMed Central

    Hartmann, Matthias; Furrer, Sarah; Herzog, Michael H.; Merfeld, Daniel M.; Mast, Fred W.

    2014-01-01

    We investigated perceptual learning in self-motion perception. Blindfolded participants were displaced leftward or rightward by means of a motion platform, and asked to indicate the direction of motion. A total of eleven participants underwent 3360 practice trials, distributed over twelve (Experiment 1) or six days (Experiment 2). We found no improvement in motion discrimination in either experiment. These results are surprising since perceptual learning has been demonstrated for visual, auditory, and somatosensory discrimination. Improvements in the same task were found when visual input was provided (Experiment 3). The multisensory nature of vestibular information is discussed as a possible explanation of the absence of perceptual learning in darkness. PMID:23392475
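Direction-discrimination thresholds like those trained here are commonly estimated with an adaptive staircase. Below is a generic 2-down/1-up procedure, which tracks roughly the 70.7%-correct point; it is not the authors' exact method, and the starting level, step size, and trial count are assumptions.

```python
import random

def staircase_threshold(p_correct_at, start=8.0, step=1.0, n_trials=200):
    """2-down/1-up staircase: two consecutive correct responses make the
    task harder (smaller level); one error makes it easier."""
    level, streak, levels = start, 0, []
    for _ in range(n_trials):
        correct = random.random() < p_correct_at(level)
        if correct:
            streak += 1
            if streak == 2:
                level = max(level - step, 0.1)
                streak = 0
        else:
            level += step
            streak = 0
        levels.append(level)
    return sum(levels[-50:]) / 50.0  # late trials hover near threshold
```

Feeding it a simulated observer whose accuracy depends on displacement magnitude yields a threshold estimate from the levels visited late in the run.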

  8. Learning viewpoint invariant perceptual representations from cluttered images.

    PubMed

    Spratling, Michael W

    2005-05-01

    In order to perform object recognition, it is necessary to form perceptual representations that are sufficiently specific to distinguish between objects, but that are also sufficiently flexible to generalize across changes in location, rotation, and scale. A standard method for learning perceptual representations that are invariant to viewpoint is to form temporal associations across image sequences showing object transformations. However, this method requires that individual stimuli be presented in isolation and is therefore unlikely to succeed in real-world applications where multiple objects can co-occur in the visual input. This paper proposes a simple modification to the learning method that can overcome this limitation and results in more robust learning of invariant representations.

  9. Visual input enhances selective speech envelope tracking in auditory cortex at a "cocktail party".

    PubMed

    Zion Golumbic, Elana; Cogan, Gregory B; Schroeder, Charles E; Poeppel, David

    2013-01-23

    Our ability to selectively attend to one auditory signal amid competing input streams, epitomized by the "Cocktail Party" problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared with responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker's face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a Cocktail Party setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive.
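The quantity being tracked above, the temporal speech envelope, can be extracted with the magnitude of the analytic signal (an FFT-based Hilbert transform); a Pearson correlation then serves as a crude tracking index. This is a generic illustration, not the paper's two MEG quantification methods.

```python
import numpy as np

def temporal_envelope(x):
    """Envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0        # double the positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(spectrum * h))

def tracking_score(neural_response, envelope):
    """Pearson correlation as a crude envelope-tracking index."""
    return np.corrcoef(neural_response, envelope)[0, 1]
```

Applied to an amplitude-modulated carrier, `temporal_envelope` recovers the slow modulation that auditory cortex is reported to track.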

  10. Enhanced Perceptual Functioning in Autism: An Update, and Eight Principles of Autistic Perception

    ERIC Educational Resources Information Center

    Mottron, Laurent; Dawson, Michelle; Soulieres, Isabelle; Hubert, Benedicte; Burack, Jake

    2006-01-01

    We propose an "Enhanced Perceptual Functioning" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in "complex" visual tasks, enhanced perception…

  11. Altered functional connectivity of the amygdaloid input nuclei in adolescents and young adults with autism spectrum disorder: a resting state fMRI study.

    PubMed

    Rausch, Annika; Zhang, Wei; Haak, Koen V; Mennes, Maarten; Hermans, Erno J; van Oort, Erik; van Wingen, Guido; Beckmann, Christian F; Buitelaar, Jan K; Groen, Wouter B

    2016-01-01

    Amygdala dysfunction is hypothesized to underlie the social deficits observed in autism spectrum disorders (ASD). However, the neurobiological basis of this hypothesis is underspecified because it is unknown whether ASD relates to abnormalities of the amygdaloid input or output nuclei. Here, we investigated the functional connectivity of the amygdaloid social-perceptual input nuclei and emotion-regulation output nuclei in ASD versus controls. We collected resting state functional magnetic resonance imaging (fMRI) data, tailored to provide optimal sensitivity in the amygdala as well as the neocortex, in 20 adolescents and young adults with ASD and 25 matched controls. We performed a regular correlation analysis between the entire amygdala (EA) and the whole brain and used a partial correlation analysis to investigate whole-brain functional connectivity uniquely related to each of the amygdaloid subregions. Between-group comparison of regular EA correlations showed significantly reduced connectivity in visuospatial and superior parietal areas in ASD compared to controls. Partial correlation analysis revealed that this effect was driven by the left superficial and right laterobasal input subregions, but not the centromedial output nuclei. These results indicate reduced connectivity of specifically the amygdaloid sensory input channels in ASD, suggesting that abnormal amygdalo-cortical connectivity can be traced down to the socio-perceptual pathways.

  12. A connectionist model of category learning by individuals with high-functioning autism spectrum disorder.

    PubMed

    Dovgopoly, Alexander; Mercado, Eduardo

    2013-06-01

    Individuals with autism spectrum disorder (ASD) show atypical patterns of learning and generalization. We explored the possible impacts of autism-related neural abnormalities on perceptual category learning using a neural network model of visual cortical processing. When applied to experiments in which children or adults were trained to classify complex two-dimensional images, the model can account for atypical patterns of perceptual generalization. This is only possible, however, when individual differences in learning are taken into account. In particular, analyses performed with a self-organizing map suggested that individuals with high-functioning ASD show two distinct generalization patterns: one that is comparable to typical patterns, and a second in which there is almost no generalization. The model leads to novel predictions about how individuals will generalize when trained with simplified input sets and can explain why some researchers have failed to detect learning or generalization deficits in prior studies of category learning by individuals with autism. On the basis of these simulations, we propose that deficits in basic neural plasticity mechanisms may be sufficient to account for the atypical patterns of perceptual category learning and generalization associated with autism, but they do not account for why only a subset of individuals with autism would show such deficits. If variations in performance across subgroups reflect heterogeneous neural abnormalities, then future behavioral and neuroimaging studies of individuals with ASD will need to account for such disparities.
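The subgroup analysis above used a self-organizing map. A minimal 1-D SOM in NumPy shows the core update rule: each input pulls its best-matching unit, and a Gaussian neighborhood around it, toward itself. The grid size, learning rate, and decay schedule here are arbitrary assumptions, not the authors' model.

```python
import numpy as np

def train_som(data, grid=5, epochs=30, lr=0.5, sigma=1.5, seed=0):
    """Minimal 1-D self-organizing map over feature vectors in `data`."""
    rng = np.random.default_rng(seed)
    weights = rng.random((grid, data.shape[1]))
    for _ in range(epochs):
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            dist = np.arange(grid) - bmu            # map-space distance
            h = np.exp(-dist**2 / (2 * sigma**2))   # neighborhood weight
            weights += lr * h[:, None] * (x - weights)
        lr *= 0.95      # anneal the learning rate...
        sigma *= 0.95   # ...and the neighborhood width each epoch
    return weights
```

Trained on performance vectors from heterogeneous participants, distinct map regions come to capture distinct response patterns, which is the sense in which a SOM can reveal behavioral subgroups.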

  13. Perceptual Load-Dependent Neural Correlates of Distractor Interference Inhibition

    PubMed Central

    Xu, Jiansong; Monterosso, John; Kober, Hedy; Balodis, Iris M.; Potenza, Marc N.

    2011-01-01

    Background The load theory of selective attention hypothesizes that distractor interference is suppressed after perceptual processing (i.e., in the later stage of central processing) at low perceptual load of the central task, but in the early stage of perceptual processing at high perceptual load. Consistently, studies on the neural correlates of attention have found a smaller distractor-related activation in the sensory cortex at high relative to low perceptual load. However, it is not clear whether the distractor-related activation in brain regions linked to later stages of central processing (e.g., in the frontostriatal circuits) is also smaller at high rather than low perceptual load, as might be predicted based on the load theory. Methodology/Principal Findings We studied 24 healthy participants using functional magnetic resonance imaging (fMRI) during a visual target identification task with two perceptual loads (low vs. high). Participants showed distractor-related increases in activation in the midbrain, striatum, occipital and medial and lateral prefrontal cortices at low load, but distractor-related decreases in activation in the midbrain ventral tegmental area and substantia nigra (VTA/SN), striatum, thalamus, and extensive sensory cortices at high load. Conclusions Multiple levels of central processing involving midbrain and frontostriatal circuits participate in suppressing distractor interference at either low or high perceptual load. For suppressing distractor interference, the processing of sensory inputs in both early and late stages of central processing is enhanced at low load but inhibited at high load. PMID:21267080

  14. Evidence accumulation detected in BOLD signal using slow perceptual decision making.

    PubMed

    Krueger, Paul M; van Vugt, Marieke K; Simen, Patrick; Nystrom, Leigh; Holmes, Philip; Cohen, Jonathan D

    2017-04-01

    We assessed whether evidence accumulation could be observed in the BOLD signal during perceptual decision making. This presents a challenge because the hemodynamic response is slow, whereas perceptual decisions are typically fast. First, guided by theoretical predictions of the drift diffusion model, we slowed down decisions by penalizing participants for incorrect responses. Second, we distinguished BOLD activity related to stimulus detection (modeled using a boxcar) from activity related to integration (modeled using a ramp) by minimizing the collinearity of GLM regressors. This was achieved by dissecting a boxcar into its two most orthogonal components: an "up-ramp" and a "down-ramp." Third, we used a control condition in which stimuli and responses were similar to the experimental condition but did not engage evidence accumulation of the stimuli. The results revealed an absence of accumulation-related activity in parietal areas that have been proposed to drive perceptual decision making but have recently come into question, together with newly identified regions that are candidates for involvement in evidence accumulation. Previous fMRI studies have either used fast perceptual decision making, which precludes the measurement of evidence accumulation, or slowed down responses by gradually revealing stimuli. The latter approach confounds perceptual detection with evidence accumulation because accumulation is constrained by perceptual input. We slowed down the decision making process itself while leaving perceptual information intact. This provided a more sensitive and selective observation of brain regions associated with the evidence accumulation processes underlying perceptual decision making than previous methods. Copyright © 2017 Elsevier B.V. All rights reserved.
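    The regressor construction described above — splitting a boxcar into an "up-ramp" and a "down-ramp" to reduce collinearity — can be sketched numerically. This is a toy illustration of the idea, not the study's actual design matrix; epoch timing and lengths are invented:

```python
import numpy as np

def corr(a, b):
    """Pearson correlation between two regressor time courses."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# Toy time series: one decision epoch embedded in a baseline.
n, on, off = 200, 50, 150
boxcar = np.zeros(n)
boxcar[on:off] = 1.0                                # sustained stimulus detection

up_ramp = np.zeros(n)
up_ramp[on:off] = np.linspace(0.0, 1.0, off - on)   # rising integration signal
down_ramp = boxcar - up_ramp                        # falling complement

# The boxcar decomposes exactly into the two ramps...
assert np.allclose(up_ramp + down_ramp, boxcar)

# ...and the ramps are far less collinear with each other than
# either ramp is with the boxcar it came from.
print(abs(corr(up_ramp, down_ramp)) < abs(corr(up_ramp, boxcar)))  # True
```

    With these toy numbers, the two ramps correlate only weakly with each other while a lone ramp correlates strongly with the boxcar, which is the motivation for fitting the two ramps as separate GLM regressors.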

  15. Altering sensorimotor feedback disrupts visual discrimination of facial expressions.

    PubMed

    Wood, Adrienne; Lupyan, Gary; Sherrin, Steven; Niedenthal, Paula

    2016-08-01

    Looking at another person's facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals' facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory/motor processes involving the face contribute to the visual perceptual (and not just conceptual) processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities.

  16. Effects of emotional and perceptual-motor stress on a voice recognition system's accuracy: An applied investigation

    NASA Astrophysics Data System (ADS)

    Poock, G. K.; Martin, B. J.

    1984-02-01

    This was an applied investigation examining the ability of a speech recognition system to recognize speakers' inputs when the speakers were under different stress levels. Subjects were asked to speak to a voice recognition system under three conditions: (1) normal office environment, (2) emotional stress, and (3) perceptual-motor stress. Results indicate a definite relationship between voice recognition system performance and the type of low stress reference patterns used to achieve recognition.

  17. Perceptual advantage for category-relevant perceptual dimensions: the case of shape and motion.

    PubMed

    Folstein, Jonathan R; Palmeri, Thomas J; Gauthier, Isabel

    2014-01-01

    Category learning facilitates perception along relevant stimulus dimensions, even when tested in a discrimination task that does not require categorization. While this general phenomenon has been demonstrated previously, perceptual facilitation along dimensions has been documented by measuring different specific phenomena in different studies using different kinds of objects. Across several object domains, there is support for acquired distinctiveness, the stretching of a perceptual dimension relevant to learned categories. Studies using faces and studies using simple separable visual dimensions have also found evidence of acquired equivalence, the shrinking of a perceptual dimension irrelevant to learned categories, and categorical perception, the local stretching across the category boundary. These latter two effects are rarely observed with complex non-face objects. Failures to find these effects with complex non-face objects may have been because the dimensions tested previously were perceptually integrated. Here we tested effects of category learning with non-face objects categorized along dimensions that have been found to be processed by different areas of the brain, shape and motion. While we replicated acquired distinctiveness, we found no evidence for acquired equivalence or categorical perception.

  18. A global/local affinity graph for image segmentation.

    PubMed

    Xiaofang Wang; Yuxing Tang; Masnou, Simon; Liming Chen

    2015-04-01

    Construction of a reliable graph capturing perceptual grouping cues of an image is fundamental for graph-cut based image segmentation methods. In this paper, we propose a novel sparse global/local affinity graph over superpixels of an input image to capture both short- and long-range grouping cues, thereby enabling perceptual grouping laws, including proximity, similarity, and continuity, to come into play through a suitable graph-cut algorithm. Moreover, we also evaluate three major visual features, namely, color, texture, and shape, for their effectiveness in perceptual segmentation and propose a simple graph fusion scheme to implement some recent findings from psychophysics, which suggest combining these visual features with different emphases for perceptual grouping. In particular, an input image is first oversegmented into superpixels at different scales. We postulate a gravitation law based on empirical observations and divide superpixels adaptively into small-, medium-, and large-sized sets. Global grouping is achieved using medium-sized superpixels through a sparse representation of superpixels' features by solving a ℓ0-minimization problem, thereby enabling continuity or propagation of local smoothness over long-range connections. Small- and large-sized superpixels are then used to achieve local smoothness through an adjacent graph in a given feature space, thus implementing perceptual laws, for example, similarity and proximity. Finally, a bipartite graph is also introduced to enable propagation of grouping cues between superpixels of different scales. Extensive experiments are carried out on the Berkeley segmentation database in comparison with several state-of-the-art graph constructions. The results show the effectiveness of the proposed approach, which outperforms state-of-the-art graphs using four different objective criteria, namely, the probabilistic rand index, the variation of information, the global consistency error, and the boundary displacement error.
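    As a loose sketch of one ingredient above — local smoothness through an adjacency graph combining proximity and similarity cues — an affinity matrix over toy "superpixel" features might be built as follows. The features, distance threshold, and kernel width here are invented for illustration and are not the paper's construction:

```python
import numpy as np

def local_affinity(features, positions, radius=1.5, sigma=0.5):
    """Sparse local affinity graph: Gaussian similarity weights are
    assigned only between spatially proximate nodes, so both the
    proximity and similarity grouping cues enter the graph."""
    n = len(features)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) <= radius:
                d2 = np.sum((features[i] - features[j]) ** 2)
                W[i, j] = W[j, i] = np.exp(-d2 / (2 * sigma ** 2))
    return W

# Toy "superpixels": a 2 x 2 grid, left column dark, right column bright.
positions = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
features = np.array([[0.1], [0.9], [0.1], [0.9]])

W = local_affinity(features, positions)
print(W[0, 2] > W[0, 1])  # same-brightness neighbors bond more strongly: True
```

    A graph-cut algorithm run on such an affinity matrix would then prefer to cut the weak (dissimilar) edges, separating the dark column from the bright one.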

  19. A Neuronal Network Model for Pitch Selectivity and Representation

    PubMed Central

    Huang, Chengcheng; Rinzel, John

    2016-01-01

    Pitch is a perceptual correlate of periodicity. Sounds with distinct spectra can elicit the same pitch. Despite the importance of pitch perception, understanding the cellular mechanism of pitch perception is still a major challenge and a mechanistic model of pitch is lacking. A multi-stage neuronal network model is developed for pitch frequency estimation using biophysically-based, high-resolution coincidence detector neurons. The neuronal units respond only to highly coincident input among convergent auditory nerve fibers across frequency channels. Their selectivity for only very fast rising slopes of convergent input enables these slope-detectors to distinguish the most prominent coincidences in multi-peaked input time courses. Pitch can then be estimated from the first-order interspike intervals of the slope-detectors. The regular firing patterns of the slope-detector neurons are similar for sounds sharing the same pitch despite their distinct timbres. The decoded pitch strengths also correlate well with the salience of pitch perception as reported by human listeners. Therefore, our model can serve as a neural representation for pitch. Our model performs successfully in estimating the pitch of missing fundamental complexes and reproducing the pitch variation with respect to the frequency shift of inharmonic complexes. It also accounts for the phase sensitivity of pitch perception in the cases of Schroeder phase, alternating phase and random phase relationships. Moreover, our model can also be applied to stochastic sound stimuli, iterated-ripple-noise, and account for their multiple pitch perceptions. PMID:27378900

  20. A Neuronal Network Model for Pitch Selectivity and Representation.

    PubMed

    Huang, Chengcheng; Rinzel, John

    2016-01-01

    Pitch is a perceptual correlate of periodicity. Sounds with distinct spectra can elicit the same pitch. Despite the importance of pitch perception, understanding the cellular mechanism of pitch perception is still a major challenge and a mechanistic model of pitch is lacking. A multi-stage neuronal network model is developed for pitch frequency estimation using biophysically-based, high-resolution coincidence detector neurons. The neuronal units respond only to highly coincident input among convergent auditory nerve fibers across frequency channels. Their selectivity for only very fast rising slopes of convergent input enables these slope-detectors to distinguish the most prominent coincidences in multi-peaked input time courses. Pitch can then be estimated from the first-order interspike intervals of the slope-detectors. The regular firing patterns of the slope-detector neurons are similar for sounds sharing the same pitch despite their distinct timbres. The decoded pitch strengths also correlate well with the salience of pitch perception as reported by human listeners. Therefore, our model can serve as a neural representation for pitch. Our model performs successfully in estimating the pitch of missing fundamental complexes and reproducing the pitch variation with respect to the frequency shift of inharmonic complexes. It also accounts for the phase sensitivity of pitch perception in the cases of Schroeder phase, alternating phase and random phase relationships. Moreover, our model can also be applied to stochastic sound stimuli, iterated-ripple-noise, and account for their multiple pitch perceptions.
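    The readout stage described in this model — estimating pitch from the first-order interspike intervals of the slope-detector neurons — can be caricatured in a few lines. The spike train below is synthetic (not model output), locked to a 200 Hz periodicity with slight timing jitter:

```python
import numpy as np

def pitch_from_spikes(spike_times):
    """Estimate pitch (Hz) as the reciprocal of the median
    first-order interspike interval (spike times in seconds)."""
    isis = np.diff(np.sort(spike_times))
    return 1.0 / np.median(isis)

# Synthetic slope-detector spike train: one spike per 200 Hz cycle,
# jittered by ~0.1 ms to mimic noisy firing.
rng = np.random.default_rng(0)
period = 1.0 / 200.0
spikes = np.arange(50) * period + rng.normal(0.0, 1e-4, 50)

print(abs(pitch_from_spikes(spikes) - 200.0) < 5.0)  # close to 200 Hz: True
```

    Because the estimate depends only on interval statistics, two spike trains with the same periodicity but different fine structure (different timbres) would decode to the same pitch, which is the property the abstract emphasizes.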

  1. Semantic Features, Perceptual Expectations, and Frequency as Factors in the Learning of Polar Spatial Adjective Concepts.

    ERIC Educational Resources Information Center

    Dunckley, Candida J. Lutes; Radtke, Robert C.

    Two semantic theories of word learning, a perceptual complexity hypothesis (H. Clark, 1970) and a quantitative complexity hypothesis (E. Clark, 1972), were tested by teaching 24 preschoolers and 16 college students CVC labels for five polar spatial adjective concepts having single word representations in English, and for three having no direct…

  2. The development of sentence interpretation: effects of perceptual, attentional and semantic interference.

    PubMed

    Leech, Robert; Aydelott, Jennifer; Symons, Germaine; Carnevale, Julia; Dick, Frederic

    2007-11-01

    How does the development and consolidation of perceptual, attentional, and higher cognitive abilities interact with language acquisition and processing? We explored children's (ages 5-17) and adults' (ages 18-51) comprehension of morphosyntactically varied sentences under several competing speech conditions that varied in the degree of attentional demands, auditory masking, and semantic interference. We also evaluated the relationship between subjects' syntactic comprehension and their word reading efficiency and general 'speed of processing'. We found that the interactions between perceptual and attentional processes and complex sentence interpretation changed considerably over the course of development. Perceptual masking of the speech signal had an early and lasting impact on comprehension, particularly for more complex sentence structures. In contrast, increased attentional demand in the absence of energetic auditory masking primarily affected younger children's comprehension of difficult sentence types. Finally, the predictability of syntactic comprehension abilities by external measures of development and expertise is contingent upon the perceptual, attentional, and semantic milieu in which language processing takes place.

  3. Visual Input Enhances Selective Speech Envelope Tracking in Auditory Cortex at a ‘Cocktail Party’

    PubMed Central

    Golumbic, Elana Zion; Cogan, Gregory B.; Schroeder, Charles E.; Poeppel, David

    2013-01-01

    Our ability to selectively attend to one auditory signal amidst competing input streams, epitomized by the ‘Cocktail Party’ problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared to responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic (MEG) signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker’s face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a ‘Cocktail Party’ setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive. PMID:23345218

  4. Auditory Scene Analysis: The Sweet Music of Ambiguity

    PubMed Central

    Pressnitzer, Daniel; Suied, Clara; Shamma, Shihab A.

    2011-01-01

    In this review paper aimed at the non-specialist, we explore the use that neuroscientists and musicians have made of perceptual illusions based on ambiguity. The pivotal issue is auditory scene analysis (ASA), or what enables us to make sense of complex acoustic mixtures in order to follow, for instance, a single melody in the midst of an orchestra. In general, ASA uncovers the most likely physical causes that account for the waveform collected at the ears. However, the acoustical problem is ill-posed and it must be solved from noisy sensory input. Recently, the neural mechanisms implicated in the transformation of ambiguous sensory information into coherent auditory scenes have been investigated using so-called bistability illusions (where an unchanging ambiguous stimulus evokes a succession of distinct percepts in the mind of the listener). After reviewing some of those studies, we turn to music, which arguably provides some of the most complex acoustic scenes that a human listener will ever encounter. Interestingly, musicians will not always aim at making each physical source intelligible, but rather express one or more melodic lines with a small or large number of instruments. By means of a few musical illustrations and by using a computational model inspired by neuro-physiological principles, we suggest that this relies on a detailed (if perhaps implicit) knowledge of the rules of ASA and of its inherent ambiguity. We then put forward the opinion that some degree of perceptual ambiguity may participate in our appreciation of music. PMID:22174701

  5. Visual complexity: a review.

    PubMed

    Donderi, Don C

    2006-01-01

    The idea of visual complexity, the history of its measurement, and its implications for behavior are reviewed, starting with structuralism and Gestalt psychology at the beginning of the 20th century and ending with visual complexity theory, perceptual learning theory, and neural circuit theory at the beginning of the 21st. Evidence is drawn from research on single forms, form and texture arrays and visual displays. Form complexity and form probability are shown to be linked through their reciprocal relationship in complexity theory, which is in turn shown to be consistent with recent developments in perceptual learning and neural circuit theory. Directions for further research are suggested.

  6. Integration of Canal and Otolith Inputs by Central Vestibular Neurons Is Subadditive for Both Active and Passive Self-Motion: Implication for Perception

    PubMed Central

    Carriot, Jerome; Jamali, Mohsen; Brooks, Jessica X.

    2015-01-01

    Traditionally, the neural encoding of vestibular information is studied by applying either passive rotations or translations in isolation. However, natural vestibular stimuli are typically more complex. During everyday life, our self-motion is generally not restricted to one dimension, but rather comprises both rotational and translational motion that will simultaneously stimulate receptors in the semicircular canals and otoliths. In addition, natural self-motion is the result of self-generated and externally generated movements. However, to date, it remains unknown how information about rotational and translational components of self-motion is integrated by vestibular pathways during active and/or passive motion. Accordingly, here, we compared the responses of neurons at the first central stage of vestibular processing to rotation, translation, and combined motion. Recordings were made in alert macaques from neurons in the vestibular nuclei involved in postural control and self-motion perception. In response to passive stimulation, neurons did not combine canal and otolith afferent information linearly. Instead, inputs were subadditively integrated with a weighting that was frequency dependent. Although canal inputs were more heavily weighted at low frequencies, the weighting of otolith input increased with frequency. In response to active stimulation, neuronal modulation was significantly attenuated (∼70%) relative to passive stimulation for rotations and translations and even more profoundly attenuated for combined motion due to subadditive input integration. Together, these findings provide insights into neural computations underlying the integration of semicircular canal and otolith inputs required for accurate posture and motor control, as well as perceptual stability, during everyday life. PMID:25716854

  7. Selective interference with image retention and generation: evidence for the workspace model.

    PubMed

    van der Meulen, Marian; Logie, Robert H; Della Sala, Sergio

    2009-08-01

    We address three types of model of the relationship between working memory (WM) and long-term memory (LTM): (a) the gateway model, in which WM acts as a gateway between perceptual input and LTM; (b) the unitary model, in which WM is seen as the currently activated areas of LTM; and (c) the workspace model, in which perceptual input activates LTM, and WM acts as a separate workspace for processing and temporary retention of these activated traces. Predictions of these models were tested, focusing on visuospatial working memory and using dual-task methodology to combine two main tasks (visual short-term retention and image generation) with two interference tasks (irrelevant pictures and spatial tapping). The pictures selectively disrupted performance on the generation task, whereas the tapping selectively interfered with the retention task. Results are consistent with the predictions of the workspace model.

  8. Top-down cortical input during NREM sleep consolidates perceptual memory.

    PubMed

    Miyamoto, D; Hirai, D; Fung, C C A; Inutsuka, A; Odagawa, M; Suzuki, T; Boehringer, R; Adaikkan, C; Matsubara, C; Matsuki, N; Fukai, T; McHugh, T J; Yamanaka, A; Murayama, M

    2016-06-10

    During tactile perception, long-range intracortical top-down axonal projections are essential for processing sensory information. Whether these projections regulate sleep-dependent long-term memory consolidation is unknown. We altered top-down inputs from higher-order cortex to sensory cortex during sleep and examined the consolidation of memories acquired earlier during awake texture perception. Mice learned novel textures and consolidated them during sleep. Within the first hour of non-rapid eye movement (NREM) sleep, optogenetic inhibition of top-down projecting axons from secondary motor cortex (M2) to primary somatosensory cortex (S1) impaired sleep-dependent reactivation of S1 neurons and memory consolidation. In NREM sleep and sleep-deprivation states, closed-loop asynchronous or synchronous M2-S1 coactivation, respectively, reduced or prolonged memory retention. Top-down cortical information flow in NREM sleep is thus required for perceptual memory consolidation. Copyright © 2016, American Association for the Advancement of Science.

  9. The organization of perception and action in complex control skills

    NASA Technical Reports Server (NTRS)

    Miller, Richard A.; Jagacinski, Richard J.

    1989-01-01

    An attempt was made to describe the perceptual, cognitive, and action processes that account for highly skilled human performance in complex task environments. In order to study such performance in a controlled setting, a laboratory task was constructed and three experiments were performed using human subjects. A general framework was developed for describing the organization of perceptual, cognitive, and action processes.

  10. Continuously Adaptive vs. Discrete Changes of Task Difficulty in the Training of a Complex Perceptual-Motor Task.

    ERIC Educational Resources Information Center

    Wood, Milton E.

    The purpose of the effort was to determine the benefits to be derived from the adaptive training technique of automatically adjusting task difficulty as a function of student skill during early learning of a complex perceptual motor task. A digital computer provided the task dynamics, scoring, and adaptive control of a second-order, two-axis,…

  11. Pupil size tracks perceptual content and surprise.

    PubMed

    Kloosterman, Niels A; Meindertsma, Thomas; van Loon, Anouk M; Lamme, Victor A F; Bonneh, Yoram S; Donner, Tobias H

    2015-04-01

    Changes in pupil size at constant light levels reflect the activity of neuromodulatory brainstem centers that control global brain state. These endogenously driven pupil dynamics can be synchronized with cognitive acts. For example, the pupil dilates during the spontaneous switches of perception of a constant sensory input in bistable perceptual illusions. It is unknown whether this pupil dilation only indicates the occurrence of perceptual switches, or also their content. Here, we measured pupil diameter in human subjects reporting the subjective disappearance and re-appearance of a physically constant visual target surrounded by a moving pattern ('motion-induced blindness' illusion). We show that the pupil dilates during the perceptual switches in the illusion and a stimulus-evoked 'replay' of that illusion. Critically, the switch-related pupil dilation encodes perceptual content, with larger amplitude for disappearance than re-appearance. This difference in pupil response amplitude enables prediction of the type of report (disappearance vs. re-appearance) on individual switches (receiver-operating characteristic: 61%). The amplitude difference is independent of the relative durations of target-visible and target-invisible intervals and subjects' overt behavioral report of the perceptual switches. Further, we show that pupil dilation during the replay also scales with the level of surprise about the timing of switches, but there is no evidence for an interaction between the effects of surprise and perceptual content on the pupil response. Taken together, our results suggest that pupil-linked brain systems track both the content of, and surprise about, perceptual events. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
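    The reported receiver-operating characteristic of 61% is the probability that a randomly chosen disappearance trial shows a larger pupil response than a randomly chosen re-appearance trial. A minimal empirical ROC-area computation on made-up amplitudes (not the study's data; the distribution parameters are chosen so the expected area is near 0.61):

```python
import numpy as np

def roc_auc(pos, neg):
    """Empirical ROC area: P(random positive > random negative),
    with ties counted as one half."""
    pos = np.asarray(pos, dtype=float)
    neg = np.asarray(neg, dtype=float)
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

# Hypothetical switch-related dilation amplitudes (arbitrary units):
# disappearance reports dilate slightly more than re-appearance reports.
rng = np.random.default_rng(1)
disappear = rng.normal(1.3, 1.0, 500)
reappear = rng.normal(0.9, 1.0, 500)

auc = roc_auc(disappear, reappear)
print(0.5 < auc < 1.0)  # discriminates better than chance: True
```

    An area of 0.5 would mean the pupil amplitude carries no information about the upcoming report; 1.0 would mean perfect single-trial prediction.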

  12. A Mechanistic Link from GABA to Cortical Architecture and Perception.

    PubMed

    Kolasinski, James; Logan, John P; Hinson, Emily L; Manners, Daniel; Divanbeighi Zand, Amir P; Makin, Tamar R; Emir, Uzay E; Stagg, Charlotte J

    2017-06-05

    Understanding both the organization of the human cortex and its relation to the performance of distinct functions is fundamental in neuroscience. The primary sensory cortices display topographic organization, whereby receptive fields follow a characteristic pattern, from tonotopy to retinotopy to somatotopy [1]. GABAergic signaling is vital to the maintenance of cortical receptive fields [2]; however, it is unclear how this fine-grain inhibition relates to measurable patterns of perception [3, 4]. Based on perceptual changes following perturbation of the GABAergic system, it is conceivable that the resting level of cortical GABAergic tone directly relates to the spatial specificity of activation in response to a given input [5-7]. The specificity of cortical activation can be considered in terms of cortical tuning: greater cortical tuning yields more localized recruitment of cortical territory in response to a given input. We applied a combination of fMRI, MR spectroscopy, and psychophysics to substantiate the link between the cortical neurochemical milieu, the tuning of cortical activity, and variability in perceptual acuity, using human somatosensory cortex as a model. We provide data that explain human perceptual acuity in terms of both the underlying cellular and metabolic processes. Specifically, higher concentrations of sensorimotor GABA are associated with more selective cortical tuning, which in turn is associated with enhanced perception. These results show anatomical and neurochemical specificity and are replicated in an independent cohort. The mechanistic link from neurochemistry to perception provides a vital step in understanding population variability in sensory behavior, informing metabolic therapeutic interventions to restore perceptual abilities clinically. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  13. Perceptual Learning: Use-Dependent Cortical Plasticity.

    PubMed

    Li, Wu

    2016-10-14

    Our perceptual abilities significantly improve with practice. This phenomenon, known as perceptual learning, offers an ideal window for understanding use-dependent changes in the adult brain. Different experimental approaches have revealed a diversity of behavioral and cortical changes associated with perceptual learning, and different interpretations have been given with respect to the cortical loci and neural processes responsible for the learning. Accumulated evidence has begun to put together a coherent picture of the neural substrates underlying perceptual learning. The emerging view is that perceptual learning results from a complex interplay between bottom-up and top-down processes, causing a global reorganization across cortical areas specialized for sensory processing, engaged in top-down attentional control, and involved in perceptual decision making. Future studies should focus on the interactions among cortical areas for a better understanding of the general rules and mechanisms underlying various forms of skill learning.

  14. Evoked-potential changes following discrimination learning involving complex sounds

    PubMed Central

    Orduña, Itzel; Liu, Estella H.; Church, Barbara A.; Eddins, Ann C.; Mercado, Eduardo

    2011-01-01

    Objective Perceptual sensitivities are malleable via learning, even in adults. We trained adults to discriminate complex sounds (periodic, frequency-modulated sweep trains) using two different training procedures, and used psychoacoustic tests and evoked potential measures (the N1-P2 complex) to assess changes in both perceptual and neural sensitivities. Methods Training took place either on a single day, or daily across eight days, and involved discrimination of pairs of stimuli using a single-interval, forced-choice task. In some participants, training started with dissimilar pairs that became progressively more similar across sessions, whereas in others training was constant, involving only one, highly similar, stimulus pair. Results Participants were better able to discriminate the complex sounds after training, particularly after progressive training, and the evoked potentials elicited by some of the sounds increased in amplitude following training. Significant amplitude changes were restricted to the P2 peak. Conclusion Our findings indicate that changes in perceptual sensitivities parallel enhanced neural processing. Significance These results are consistent with the proposal that changes in perceptual abilities arise from the brain’s capacity to adaptively modify cortical representations of sensory stimuli, and that different training regimens can lead to differences in cortical sensitivities, even after relatively short periods of training. PMID:21958655

  15. Subliminal stimulation and somatosensory signal detection.

    PubMed

    Ferrè, Elisa Raffaella; Sahani, Maneesh; Haggard, Patrick

    2016-10-01

    Only a small fraction of sensory signals is consciously perceived. The brain's perceptual systems may include mechanisms of feedforward inhibition that protect the cortex from subliminal noise, thus reserving cortical capacity and conscious awareness for significant stimuli. Here we provide a new view of these mechanisms based on signal detection theory, and gain control. We demonstrated that subliminal somatosensory stimulation decreased sensitivity for the detection of a subsequent somatosensory input, largely due to increased false alarm rates. By delivering the subliminal somatosensory stimulus and the to-be-detected somatosensory stimulus to different digits of the same hand, we show that this effect spreads across the sensory surface. In addition, subliminal somatosensory stimulation tended to produce an increased probability of responding "yes", whether the somatosensory stimulus was present or not. Our results suggest that subliminal stimuli temporarily reduce input gain, avoiding excessive responses to further small inputs. This gain control may be automatic, and may precede discriminative classification of inputs into signals or noise. Crucially, we found that subliminal inputs influenced false alarm rates only on blocks where the to-be-detected stimuli were present, and not on pre-test control blocks where they were absent. Participants appeared to adjust their perceptual criterion according to a statistical distribution of stimuli in the current context, with the presence of supraliminal stimuli having an important role in the criterion-setting process. These findings clarify the cognitive mechanisms that reserve conscious perception for salient and important signals. Copyright © 2016 Elsevier B.V. All rights reserved.
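    The signal-detection quantities at issue in this study — sensitivity and response criterion computed from hit and false-alarm rates — follow the standard d′ and c formulas. The rates below are invented for illustration, not the study's data:

```python
from statistics import NormalDist

def dprime_criterion(hit_rate, fa_rate):
    """Signal detection theory: sensitivity d' = z(H) - z(F) and
    criterion c = -(z(H) + z(F)) / 2, from hit and false-alarm rates."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical numbers: the same hit rate, but a subliminal prime
# inflates the false-alarm rate, lowering d' and shifting c toward
# a more liberal ("yes"-prone) criterion.
d_base, c_base = dprime_criterion(0.80, 0.10)
d_sub, c_sub = dprime_criterion(0.80, 0.25)
print(d_sub < d_base, c_sub < c_base)  # True True
```

    This is the pattern the abstract describes: a drop in sensitivity driven largely by false alarms, together with an increased tendency to respond "yes" regardless of whether the stimulus was present.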

  16. Motivation enhances visual working memory capacity through the modulation of central cognitive processes.

    PubMed

    Sanada, Motoyuki; Ikeda, Koki; Kimura, Kenta; Hasegawa, Toshikazu

    2013-09-01

    Motivation is well known to enhance working memory (WM) capacity, but the mechanism underlying this effect remains unclear. The WM process can be divided into encoding, maintenance, and retrieval, and in a change detection visual WM paradigm, the encoding and retrieval processes can be subdivided into perceptual and central processing. To clarify which of these segments are most influenced by motivation, we measured ERPs in a change detection task with differential monetary rewards. The results showed that the enhancement of WM capacity under high motivation was accompanied by modulations of late central components but not those reflecting attentional control on perceptual inputs across all stages of WM. We conclude that the "state-dependent" shift of motivation impacted the central, rather than the perceptual, functions in order to achieve better behavioral performance. Copyright © 2013 Society for Psychophysiological Research.

  17. Transformation priming helps to disambiguate sudden changes of sensory inputs.

    PubMed

    Pastukhov, Alexander; Vivian-Griffiths, Solveiga; Braun, Jochen

    2015-11-01

    Retinal input is riddled with abrupt transients due to self-motion, changes in illumination, object-motion, etc. Our visual system must correctly interpret each of these changes to keep visual perception consistent and sensitive. This poses an enormous challenge, as many transients are highly ambiguous in that they are consistent with many alternative physical transformations. Here we investigated inter-trial effects in three situations with sudden and ambiguous transients, each presenting two alternative appearances (rotation-reversing structure-from-motion, polarity-reversing shape-from-shading, and streaming-bouncing object collisions). In every situation, we observed priming of transformations: the outcome perceived in earlier trials tended to repeat in subsequent trials, and this repetition was contingent on perceptual experience. The observed priming was specific to transformations and did not originate in priming of the perceptual states preceding a transient. Moreover, transformation priming was independent of attention and specific to low-level stimulus attributes. In summary, we show how "transformation priors" and experience-driven updating of such priors help to disambiguate sudden changes of sensory inputs. We discuss how dynamic transformation priors can be instantiated as "transition energies" in an "energy landscape" model of visual perception. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Playing Chess Unconsciously

    ERIC Educational Resources Information Center

    Kiesel, Andrea; Kunde, Wilfried; Pohl, Carsten; Berner, Michael P.; Hoffmann, Joachim

    2009-01-01

    Expertise in a certain stimulus domain enhances perceptual capabilities. In the present article, the authors investigate whether expertise improves perceptual processing to an extent that allows complex visual stimuli to bias behavior unconsciously. Expert chess players judged whether a target chess configuration entailed a checking configuration.…

  19. A predictive processing theory of sensorimotor contingencies: Explaining the puzzle of perceptual presence and its absence in synesthesia.

    PubMed

    Seth, Anil K

    2014-01-01

    Normal perception involves experiencing objects within perceptual scenes as real, as existing in the world. This property of "perceptual presence" has motivated "sensorimotor theories" which understand perception to involve the mastery of sensorimotor contingencies. However, the mechanistic basis of sensorimotor contingencies and their mastery has remained unclear. Sensorimotor theory also struggles to explain instances of perception, such as synesthesia, that appear to lack perceptual presence and for which relevant sensorimotor contingencies are difficult to identify. On alternative "predictive processing" theories, perceptual content emerges from probabilistic inference on the external causes of sensory signals; however, this view has addressed neither the problem of perceptual presence nor synesthesia. Here, I describe a theory of predictive perception of sensorimotor contingencies which (1) accounts for perceptual presence in normal perception, as well as its absence in synesthesia, and (2) operationalizes the notion of sensorimotor contingencies and their mastery. The core idea is that generative models underlying perception incorporate explicitly counterfactual elements related to how sensory inputs would change on the basis of a broad repertoire of possible actions, even if those actions are not performed. These "counterfactually-rich" generative models encode sensorimotor contingencies related to repertoires of sensorimotor dependencies, with counterfactual richness determining the degree of perceptual presence associated with a stimulus. While the generative models underlying normal perception are typically counterfactually rich (reflecting a large repertoire of possible sensorimotor dependencies), those underlying synesthetic concurrents are hypothesized to be counterfactually poor. In addition to accounting for the phenomenology of synesthesia, the theory naturally accommodates phenomenological differences between a range of experiential states including dreaming, hallucination, and the like. It may also lead to a new view of the (in)determinacy of normal perception.

  20. A predictive processing theory of sensorimotor contingencies: Explaining the puzzle of perceptual presence and its absence in synesthesia

    PubMed Central

    Seth, Anil K.

    2014-01-01

    Normal perception involves experiencing objects within perceptual scenes as real, as existing in the world. This property of “perceptual presence” has motivated “sensorimotor theories” which understand perception to involve the mastery of sensorimotor contingencies. However, the mechanistic basis of sensorimotor contingencies and their mastery has remained unclear. Sensorimotor theory also struggles to explain instances of perception, such as synesthesia, that appear to lack perceptual presence and for which relevant sensorimotor contingencies are difficult to identify. On alternative “predictive processing” theories, perceptual content emerges from probabilistic inference on the external causes of sensory signals; however, this view has addressed neither the problem of perceptual presence nor synesthesia. Here, I describe a theory of predictive perception of sensorimotor contingencies which (1) accounts for perceptual presence in normal perception, as well as its absence in synesthesia, and (2) operationalizes the notion of sensorimotor contingencies and their mastery. The core idea is that generative models underlying perception incorporate explicitly counterfactual elements related to how sensory inputs would change on the basis of a broad repertoire of possible actions, even if those actions are not performed. These “counterfactually-rich” generative models encode sensorimotor contingencies related to repertoires of sensorimotor dependencies, with counterfactual richness determining the degree of perceptual presence associated with a stimulus. While the generative models underlying normal perception are typically counterfactually rich (reflecting a large repertoire of possible sensorimotor dependencies), those underlying synesthetic concurrents are hypothesized to be counterfactually poor. In addition to accounting for the phenomenology of synesthesia, the theory naturally accommodates phenomenological differences between a range of experiential states including dreaming, hallucination, and the like. It may also lead to a new view of the (in)determinacy of normal perception. PMID:24446823

  1. Surprised at All the Entropy: Hippocampal, Caudate and Midbrain Contributions to Learning from Prediction Errors

    PubMed Central

    Schiffer, Anne-Marike; Ahlheim, Christiane; Wurm, Moritz F.; Schubotz, Ricarda I.

    2012-01-01

    Influential concepts in neuroscientific research cast the brain as a predictive machine that revises its predictions when they are violated by sensory input. This relates to the predictive coding account of perception, but also to learning. Learning from prediction errors has been suggested to take place in the hippocampal memory system as well as in the basal ganglia. The present fMRI study used an action-observation paradigm to investigate the contributions of the hippocampus, caudate nucleus and midbrain dopaminergic system to different types of learning: learning in the absence of prediction errors, learning from prediction errors, and responding to the accumulation of prediction errors in unpredictable stimulus configurations. We analyzed the BOLD responses of these regions of interest to the different types of learning, implementing a bootstrapping procedure to correct for false positives. We found both the caudate nucleus and the hippocampus to be activated by perceptual prediction errors. The hippocampal responses seemed to relate to the associative mismatch between a stored representation and current sensory input. Moreover, its response was significantly influenced by the average information, or Shannon entropy, of the stimulus material. In accordance with earlier results, the habenula was activated by perceptual prediction errors. Lastly, we found that the substantia nigra was activated by the novelty of sensory input. In sum, we established that the midbrain dopaminergic system, the hippocampus, and the caudate nucleus were, to different degrees, significantly involved in the three different types of learning: acquisition of new information, learning from prediction errors, and responding to unpredictable stimulus developments. We relate learning from perceptual prediction errors to the concept of predictive coding and related information-theoretic accounts. PMID:22570715
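
    The "average information, or Shannon entropy" used as a stimulus property in this record is a standard information-theoretic quantity; a minimal sketch follows, where the function name and example sequences are hypothetical rather than the study's stimulus material:

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Average information in bits per symbol:
    H = -sum(p * log2(p)) over observed symbol frequencies."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A fully predictable stream carries no information; a uniform
# four-symbol stream carries two bits per stimulus.
h_low = shannon_entropy("AAAAAAAA")   # 0.0 bits
h_high = shannon_entropy("ABCDABCD")  # 2.0 bits
```

    In the study's terms, higher-entropy stimulus material is less predictable, so prediction errors accumulate more.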

  2. Perceptual integration of acoustic cues to laryngeal contrasts in Korean fricatives.

    PubMed

    Lee, Sarah; Katz, Jonah

    2016-02-01

    This paper provides evidence that multiple acoustic cues involving the presence of low-frequency energy integrate in the perception of Korean coronal fricatives. This finding helps explain a surprising asymmetry between the production and perception of these fricatives found in previous studies: lower F0 onset in the following vowel leads to a response bias for plain [s] over fortis [s*], despite the fact that there is no evidence for a corresponding acoustic asymmetry in the production of [s] and [s*]. A fixed classification task using the Garner paradigm provides evidence that low F0 in a following vowel and the presence of voicing during frication perceptually integrate. This suggests that Korean listeners in previous experiments were responding to an "intermediate perceptual property" of the stimuli, even though the individual acoustic components of that property are not all present in typical Korean fricative productions. The finding also extends empirical support for the general idea of perceptual integration to a new language, a different manner of consonant, and a situation where covariance of the acoustic cues under investigation is not generally present in a listener's linguistic input.

  3. The involvement of central attention in visual search is determined by task demands.

    PubMed

    Han, Suk Won

    2017-04-01

    Attention, the mechanism by which a subset of sensory inputs is prioritized over others, operates at multiple processing stages. Specifically, attention enhances weak sensory signals at the perceptual stage, while it serves to select appropriate responses or consolidate sensory representations into short-term memory at the central stage. This study investigated the independence of, and interaction between, perceptual and central attention. To do so, I used a dual-task paradigm, pairing a four-alternative choice task with a visual search task. The results showed that central attention for response selection was engaged in perceptual processing for visual search when the number of search items increased, thereby increasing the demand for serial allocation of focal attention. By contrast, central attention and perceptual attention remained independent as long as the demand for serial shifting of focal attention remained constant; decreasing stimulus contrast or increasing the set size of a parallel search did not evoke the involvement of central attention in visual search. These results suggest that the nature of the concurrent visual search process plays a crucial role in the functional interaction between the two different types of attention.

  4. On the Perceptual Subprocess of Absolute Pitch.

    PubMed

    Kim, Seung-Goo; Knösche, Thomas R

    2017-01-01

    Absolute pitch (AP) is the rare ability of musicians to identify the pitch of a tonal sound without external reference. While there have been behavioral and neuroimaging studies on the characteristics of AP, how AP is implemented in human brains remains largely unknown. AP can be viewed as comprising two subprocesses: perceptual (processing auditory input to extract a pitch chroma) and associative (linking an auditory representation of pitch chroma with a verbal/non-verbal label). In this review, we focus on the nature of the perceptual subprocess of AP. Two different models of how the perceptual subprocess works have been proposed: either via absolute pitch categorization (APC) or based on absolute pitch memory (APM). A major distinction between the two views is whether AP relies on unique auditory processing (i.e., APC) that exists only in musicians with AP, or is rooted in a common phenomenon (i.e., APM), only with heightened efficiency. We review relevant behavioral and neuroimaging evidence that supports each notion. Lastly, we list open questions and potential ideas to address them.

  5. On the Perceptual Subprocess of Absolute Pitch

    PubMed Central

    Kim, Seung-Goo; Knösche, Thomas R.

    2017-01-01

    Absolute pitch (AP) is the rare ability of musicians to identify the pitch of a tonal sound without external reference. While there have been behavioral and neuroimaging studies on the characteristics of AP, how AP is implemented in human brains remains largely unknown. AP can be viewed as comprising two subprocesses: perceptual (processing auditory input to extract a pitch chroma) and associative (linking an auditory representation of pitch chroma with a verbal/non-verbal label). In this review, we focus on the nature of the perceptual subprocess of AP. Two different models of how the perceptual subprocess works have been proposed: either via absolute pitch categorization (APC) or based on absolute pitch memory (APM). A major distinction between the two views is whether AP relies on unique auditory processing (i.e., APC) that exists only in musicians with AP, or is rooted in a common phenomenon (i.e., APM), only with heightened efficiency. We review relevant behavioral and neuroimaging evidence that supports each notion. Lastly, we list open questions and potential ideas to address them. PMID:29085275

  6. Neural Correlates of Temporal Complexity and Synchrony during Audiovisual Correspondence Detection.

    PubMed

    Baumann, Oliver; Vromen, Joyce M G; Cheung, Allen; McFadyen, Jessica; Ren, Yudan; Guo, Christine C

    2018-01-01

    We often perceive real-life objects as multisensory cues through space and time. A key challenge for audiovisual integration is to match neural signals that not only originate from different sensory modalities but also typically reach the observer at slightly different times. In humans, complex, unpredictable audiovisual streams lead to higher levels of perceptual coherence than predictable, rhythmic streams. In addition, perceptual coherence for complex signals seems less affected by increased asynchrony between visual and auditory modalities than for simple signals. Here, we used functional magnetic resonance imaging to determine the human neural correlates of audiovisual signals with different levels of temporal complexity and synchrony. Our study demonstrated that greater perceptual asynchrony and lower signal complexity impaired performance in an audiovisual coherence-matching task. Differences in asynchrony and complexity were also underpinned by a partially different set of brain regions. In particular, our results suggest that, while regions in the dorsolateral prefrontal cortex (DLPFC) were modulated by differences in memory load due to stimulus asynchrony, areas traditionally thought to be involved in speech production and recognition, such as the inferior frontal and superior temporal cortex, were modulated by the temporal complexity of the audiovisual signals. Our results, therefore, indicate specific processing roles for different subregions of the fronto-temporal cortex during audiovisual coherence detection.

  7. Neural Correlates of Temporal Complexity and Synchrony during Audiovisual Correspondence Detection

    PubMed Central

    Ren, Yudan

    2018-01-01

    We often perceive real-life objects as multisensory cues through space and time. A key challenge for audiovisual integration is to match neural signals that not only originate from different sensory modalities but also typically reach the observer at slightly different times. In humans, complex, unpredictable audiovisual streams lead to higher levels of perceptual coherence than predictable, rhythmic streams. In addition, perceptual coherence for complex signals seems less affected by increased asynchrony between visual and auditory modalities than for simple signals. Here, we used functional magnetic resonance imaging to determine the human neural correlates of audiovisual signals with different levels of temporal complexity and synchrony. Our study demonstrated that greater perceptual asynchrony and lower signal complexity impaired performance in an audiovisual coherence-matching task. Differences in asynchrony and complexity were also underpinned by a partially different set of brain regions. In particular, our results suggest that, while regions in the dorsolateral prefrontal cortex (DLPFC) were modulated by differences in memory load due to stimulus asynchrony, areas traditionally thought to be involved in speech production and recognition, such as the inferior frontal and superior temporal cortex, were modulated by the temporal complexity of the audiovisual signals. Our results, therefore, indicate specific processing roles for different subregions of the fronto-temporal cortex during audiovisual coherence detection. PMID:29354682

  8. Modelling Peri-Perceptual Brain Processes in a Deep Learning Spiking Neural Network Architecture.

    PubMed

    Gholami Doborjeh, Zohreh; Kasabov, Nikola; Gholami Doborjeh, Maryam; Sumich, Alexander

    2018-06-11

    Familiarity of marketing stimuli may affect consumer behaviour at a peri-perceptual processing level. The current study introduces a method for deep learning of electroencephalogram (EEG) data using a spiking neural network (SNN) approach that reveals the complexity of peri-perceptual processes of familiarity. The method is applied to data from 20 participants viewing familiar and unfamiliar logos. The results support the potential of SNN models as novel tools in the exploration of peri-perceptual mechanisms that respond differentially to familiar and unfamiliar stimuli. Specifically, the activation pattern of the time-locked response identified by the proposed SNN model at approximately 200 milliseconds post-stimulus suggests greater connectivity and more widespread dynamic spatio-temporal patterns for familiar than unfamiliar logos. The proposed SNN approach can be applied to study other peri-perceptual or perceptual brain processes in cognitive and computational neuroscience.
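
    As background for this record, the basic unit of a spiking neural network can be sketched as a leaky integrate-and-fire neuron. This is a generic textbook model, not the specific deep SNN architecture used in the study, and all parameter values are illustrative:

```python
def lif_spikes(input_current, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron: the membrane potential
    decays with time constant tau, integrates the input current, and
    emits a spike (recorded as a time step index) on reaching threshold."""
    v, spikes = 0.0, []
    for t, i in enumerate(input_current):
        v += dt * (-v / tau + i)  # leaky integration
        if v >= v_thresh:
            spikes.append(t)      # spike, then reset
            v = v_reset
    return spikes

# A constant suprathreshold drive produces a regular spike train;
# SNN models learn from the timing of such spikes rather than from
# continuous activation values.
train = lif_spikes([0.3] * 20)
```

    Networks of such units are what allow SNN approaches to capture the millisecond-scale spatio-temporal patterns the study reports in EEG data.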

  9. Independent psychophysical measurement of experimental modulations in the somatotopy of cutaneous heat-pain stimuli.

    PubMed

    Trojan, Jörg; Kleinböhl, Dieter; Stolle, Annette M; Andersen, Ole K; Hölzl, Rupert; Arendt-Nielsen, Lars

    2009-03-01

    Distortions of the body image have been repeatedly reported for various clinical conditions, but direct experimental analyses of the perceptual changes involved are still scarce. In addition, most experimental studies rely on cerebral activation patterns to assess neuroplastic changes in central representation, although the relationship between cerebral topography and the topology of the perceptual space is not clear. This study examines whether the direct psychophysical mapping approach we introduced recently (Trojan et al., Brain Res 2006;1120:106-113) is capable of tracking perceptual distortions in the somatotopic representation of heat-pain stimuli. Eleven healthy participants indicated the perceived positions of CO2 laser stimuli, repetitively presented to the dorsal forearm, with a 3D tracking system in two consecutive sessions, separated by the topical application of capsaicin cream. In line with earlier reports, we expected that the resulting individual perceptual maps (i.e., one-dimensional projections of the perceived positions onto the forearm surface) would be subject to modulation through the altered sensory input, to be measured in terms of altered topological parameters. We found that the topology and metrics of the somatotopic representation were well preserved in the second session, but that the perceptual map was compressed to a smaller range in 9 out of 11 participants. By providing dimensional measures of perceptual representations, perceptual maps constitute an independent, genuinely psychological complement to the topography of cortical activations measured with neuroimaging methods. In addition, we expect them to be useful in diagnosing pathological changes in body perception accompanying chronic pain and other disorders.

  10. Moving the eye of the beholder. Motor components in vision determine aesthetic preference.

    PubMed

    Topolinski, Sascha

    2010-09-01

    Perception entails not only sensory input (e.g., merely seeing), but also subsidiary motor processes (e.g., moving the eyes); such processes have been neglected in research on aesthetic preferences. To fill this gap, the present research manipulated the fluency of perceptual motor processes independently from sensory input and predicted that this increased fluency would result in increased aesthetic preference for stimulus movements that elicited the same motor movements as had been previously trained. Specifically, addressing the muscles that move the eyes, I trained participants to follow a stimulus movement without actually seeing it. Experiment 1 demonstrated that ocular-muscle training resulted in the predicted increase in preference for trained stimulus movements compared with untrained stimulus movements, although participants had not previously seen any of the movements. Experiments 2 and 3 showed that actual motor matching and not perceptual similarity drove this effect. Thus, beauty may be not only in the eye of the beholder, but also in the eyes' movements.

  11. A model of color vision with a robot system

    NASA Astrophysics Data System (ADS)

    Wang, Haihui

    2006-01-01

    In this paper, we propose to generalize the saccade target method and state that perceptual stability in general arises by learning the effects one's actions have on sensor responses. The apparent stability of color percepts across saccadic eye movements can be explained by positing that perception involves observing how sensory input changes in response to motor activities. The changes related to self-motion can be learned and, once learned, used to form stable percepts. The variation of sensor data in response to a motor act is therefore a requirement for stable perception rather than something that has to be compensated for in order to perceive a stable world. In this paper, we provide a simple implementation of this sensory-motor contingency view of perceptual stability. We show how a straightforward application of temporal-difference reinforcement learning yields color percepts that are stable across saccadic eye movements, even though the raw sensor input may change radically.
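
    The temporal-difference idea this record applies, learning to predict the sensor change that follows each motor command so that self-induced changes can be anticipated, can be sketched as follows. The command-to-sensor mapping, function name, and parameters are hypothetical, not the paper's implementation:

```python
import random

def td0_predict(transitions, alpha=0.1, episodes=2000, seed=0):
    """TD(0)-style sketch: incrementally learn the expected sensor
    change that follows each motor command. transitions maps a command
    to the (mean, sd) of its noisy sensor consequence."""
    random.seed(seed)
    value = {cmd: 0.0 for cmd in transitions}
    for _ in range(episodes):
        cmd = random.choice(list(transitions))
        observed = random.gauss(*transitions[cmd])   # noisy sensor response
        value[cmd] += alpha * (observed - value[cmd])  # TD-error update
    return value

# Hypothetical saccade commands and their mean sensor displacement:
# the learned values converge on the true means, so the agent can
# subtract predicted, self-caused input changes and perceive stability.
v = td0_predict({"left": (-2.0, 0.3), "right": (2.0, 0.3)})
```

    The design point mirrors the abstract: the sensory change caused by a motor act is not noise to be suppressed but a regularity to be learned.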

  12. Visual prediction and perceptual expertise

    PubMed Central

    Cheung, Olivia S.; Bar, Moshe

    2012-01-01

    Making accurate predictions about what may happen in the environment requires analogies between perceptual input and associations in memory. These elements of predictions are based on cortical representations, but little is known about how these processes can be enhanced by experience and training. On the other hand, studies on perceptual expertise have revealed that the acquisition of expertise leads to strengthened associative processing among features or objects, suggesting that predictions and expertise may be tightly connected. Here we review the behavioral and neural findings regarding the mechanisms involving prediction and expert processing, and highlight important possible overlaps between them. Future investigation should examine the relations among perception, memory and prediction skills as a function of expertise. The knowledge gained by this line of research will have implications for visual cognition research, and will advance our understanding of how the human brain can improve its ability to predict by learning from experience. PMID:22123523

  13. Semantic effects in naming, perceptual identification, but not in delayed naming: implications for models and tasks.

    PubMed

    Wurm, Lee H; Seaman, Sean R

    2008-03-01

    Previous research has demonstrated that the subjective danger and usefulness of words affect lexical decision times. Usually, an interaction is found: Increasing danger predicts faster reaction times (RTs) for words low on usefulness, but increasing danger predicts slower RTs for words high on usefulness. The authors show the same interaction with immediate auditory naming. The interaction disappeared with a delayed auditory naming control experiment, suggesting that it has a perceptual basis. In an attempt to separate input (signal to ear) from output (brain to muscle) processes in word recognition, the authors ran 2 auditory perceptual identification experiments. The interaction was again significant, but performance was best for words high on both danger and usefulness. This suggests that initial demonstrations of the interaction were reflecting an output approach/withdraw response conflict induced by stimuli that are both dangerous and useful. The interaction cannot be characterized as a tradeoff of speed versus accuracy.

  14. Electrophysiological evidence for early perceptual facilitation and efficient categorization of self-related stimuli during an Implicit Association Test measuring neuroticism.

    PubMed

    Fleischhauer, Monika; Strobel, Alexander; Diers, Kersten; Enge, Sören

    2014-02-01

    The Implicit Association Test (IAT) is a widely used latency-based categorization task that indirectly measures the strength of automatic associations between target and attribute concepts. So far, little is known about the perceptual and cognitive processes underlying personality IATs. Thus, the present study examined event-related potential indices during the execution of an IAT measuring neuroticism (N = 70). The IAT effect was strongly modulated by the P1 component indicating early facilitation of relevant visual input and by a P3b-like late positive component reflecting the efficacy of stimulus categorization. Both components covaried, and larger amplitudes led to faster responses. The results suggest a relationship between early perceptual and semantic processes operating at a more automatic, implicit level and later decision-related categorization of self-relevant stimuli contributing to the IAT effect. Copyright © 2013 Society for Psychophysiological Research.

  15. V1 orientation plasticity is explained by broadly tuned feedforward inputs and intracortical sharpening.

    PubMed

    Teich, Andrew F; Qian, Ning

    2010-03-01

    Orientation adaptation and perceptual learning change orientation tuning curves of V1 cells. Adaptation shifts tuning curve peaks away from the adapted orientation, reduces tuning curve slopes near the adapted orientation, and increases the responses on the far flank of tuning curves. Learning an orientation discrimination task increases tuning curve slopes near the trained orientation. These changes have been explained previously in a recurrent model (RM) of orientation selectivity. However, the RM generates only complex cells when they are well tuned, so that there is currently no model of orientation plasticity for simple cells. In addition, some feedforward models, such as the modified feedforward model (MFM), also contain recurrent cortical excitation, and it is unknown whether they can explain plasticity. Here, we compare plasticity in the MFM, which simulates simple cells, and a recent modification of the RM (MRM), which displays a continuum of simple-to-complex characteristics. Both pre- and postsynaptic-based modifications of the recurrent and feedforward connections in the models are investigated. The MRM can account for all the learning- and adaptation-induced plasticity, for both simple and complex cells, while the MFM cannot. The key features from the MRM required for explaining plasticity are broadly tuned feedforward inputs and sharpening by a Mexican hat intracortical interaction profile. The mere presence of recurrent cortical interactions in feedforward models like the MFM is insufficient; such models have more rigid tuning curves. We predict that the plastic properties must be absent for cells whose orientation tuning arises from a feedforward mechanism.

  16. Relating color working memory and color perception.

    PubMed

    Allred, Sarah R; Flombaum, Jonathan I

    2014-11-01

    Color is the most frequently studied feature in visual working memory (VWM). Oddly, much of this work de-emphasizes perception, instead making simplifying assumptions about the inputs served to memory. We question these assumptions in light of perception research, and we identify important points of contact between perception and working memory in the case of color. Better characterization of its perceptual inputs will be crucial for elucidating the structure and function of VWM. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Seeing is being

    Treesearch

    Philip Merrifield

    1977-01-01

    Aspects of perceptual development in children are reviewed, and implications drawn for nurturing spatial abilities in urban environments. Emphasis is placed on the visual complexities of man-made urban surroundings, and their utilization in training. Further, attention is drawn to the individual child's imagination as a resource in developing his perceptual...

  18. The cerebellum and visual perceptual learning: evidence from a motion extrapolation task.

    PubMed

    Deluca, Cristina; Golzar, Ashkan; Santandrea, Elisa; Lo Gerfo, Emanuele; Eštočinová, Jana; Moretto, Giuseppe; Fiaschi, Antonio; Panzeri, Marta; Mariotti, Caterina; Tinazzi, Michele; Chelazzi, Leonardo

    2014-09-01

    Visual perceptual learning is widely assumed to reflect plastic changes occurring along the cerebro-cortical visual pathways, including at the earliest stages of processing, though increasing evidence indicates that higher-level brain areas are also involved. Here we addressed the possibility that the cerebellum plays an important role in visual perceptual learning. Within the realm of motor control, the cerebellum supports learning of new skills and recalibration of motor commands when movement execution is consistently perturbed (adaptation). Growing evidence indicates that the cerebellum is also involved in cognition and mediates forms of cognitive learning. Therefore, the obvious question arises whether the cerebellum might play a similar role in learning and adaptation within the perceptual domain. We explored a possible deficit in visual perceptual learning (and adaptation) in patients with cerebellar damage using variants of a novel motion extrapolation psychophysical paradigm. Compared to their age- and gender-matched controls, patients with focal damage to the posterior (but not the anterior) cerebellum showed strongly diminished learning, in terms of both rate and amount of improvement over time. Consistent with a double-dissociation pattern, patients with focal damage to the anterior cerebellum instead showed more severe clinical motor deficits, indicative of a distinct role of the anterior cerebellum in the motor domain. The collected evidence demonstrates that a pure form of slow-incremental visual perceptual learning is crucially dependent on the intact cerebellum, supporting the notion that the human cerebellum acts as a learning device for motor, cognitive and perceptual functions. We interpret the deficit in terms of an inability to fine-tune predictive models of the incoming flow of visual perceptual input over time. Moreover, our results suggest a strong dissociation between the roles of different portions of the cerebellum in motor versus non-motor functions, with only the posterior lobe being responsible for learning in the perceptual domain. Copyright © 2014. Published by Elsevier Ltd.

  19. Cortical Plasticity and Olfactory Function in Early Blindness

    PubMed Central

    Araneda, Rodrigo; Renier, Laurent A.; Rombaux, Philippe; Cuevas, Isabel; De Volder, Anne G.

    2016-01-01

    Over the last decade, functional brain imaging has provided insight into the maturation processes and has helped elucidate the pathophysiological mechanisms involved in brain plasticity in the absence of vision. In the case of congenital blindness, drastic changes occur within the deafferented “visual” cortex, which starts receiving and processing non-visual inputs, including olfactory stimuli. This functional reorganization of the occipital cortex gives rise to compensatory perceptual and cognitive mechanisms that help blind persons achieve perceptual tasks, leading to superior olfactory abilities in these subjects. This view receives support from psychophysical testing, volumetric measurements and functional brain imaging studies in humans, which are presented here. PMID:27625596

  20. Motor–sensory convergence in object localization: a comparative study in rats and humans

    PubMed Central

    Horev, Guy; Saig, Avraham; Knutsen, Per Magne; Pietr, Maciej; Yu, Chunxiu; Ahissar, Ehud

    2011-01-01

    In order to identify basic aspects in the process of tactile perception, we trained rats and humans in similar object localization tasks and compared the strategies used by the two species. We found that rats integrated temporally related sensory inputs (‘temporal inputs’) from early whisk cycles with spatially related inputs (‘spatial inputs’) to align their whiskers with the objects; their perceptual reports appeared to be based primarily on this spatial alignment. In a similar manner, human subjects also integrated temporal and spatial inputs, but relied mainly on temporal inputs for object localization. These results suggest that during tactile object localization, an iterative motor–sensory process gradually converges on a stable percept of object location in both species. PMID:21969688

  1. Guidelines for Identifying Students with Perceptual/Communicative Disabilities.

    ERIC Educational Resources Information Center

    Colorado State Dept. of Education, Denver. Special Education Services Unit.

    This handbook is designed for use by trained educational diagnosticians and special educators as they go about the complex task of identifying students with perceptual/communicative disabilities (PCD) and determining eligibility for special education services. Section 1 of the text discusses federal criteria for determining the existence of a…

  2. Spatial-area selective retrieval of multiple object-place associations in a hierarchical cognitive map formed by theta phase coding.

    PubMed

    Sato, Naoyuki; Yamaguchi, Yoko

    2009-06-01

    The human cognitive map is known to be hierarchically organized, consisting of a set of perceptually clustered landmarks. Patient studies have demonstrated that these cognitive maps are maintained by the hippocampus, although the underlying neural dynamics are still poorly understood. The authors have shown that the neural dynamic "theta phase precession" observed in the rodent hippocampus may be capable of forming hierarchical cognitive maps in humans. In the model, a visual input sequence consisting of object and scene features in the central and peripheral visual fields, respectively, results in the formation of a hierarchical cognitive map for object-place associations. Surprisingly, it is possible for such a complex memory structure to be formed in a few seconds. In this paper, we evaluate the memory retrieval of object-place associations in the hierarchical network formed by theta phase precession. The results show that multiple object-place associations can be retrieved with the initial cue of a scene input. Importantly, according to the wide-to-narrow unidirectional connections among scene units, the spatial area for object-place retrieval can be controlled by the spatial area of the initial cue input. These results indicate that hierarchical cognitive maps offer computational advantages for the spatial-area-selective retrieval of multiple object-place associations. Theta phase precession dynamics is suggested as a fundamental neural mechanism of the human cognitive map.

  3. Attentional capture under high perceptual load.

    PubMed

    Cosman, Joshua D; Vecera, Shaun P

    2010-12-01

    Attentional capture by abrupt onsets can be modulated by several factors, including the complexity, or perceptual load, of a scene. We have recently demonstrated that observers are less likely to be captured by abruptly appearing, task-irrelevant stimuli when they perform a search that is high, as opposed to low, in perceptual load (Cosman & Vecera, 2009), consistent with perceptual load theory. However, recent results indicate that onset frequency can influence stimulus-driven capture, with infrequent onsets capturing attention more often than did frequent onsets. Importantly, in our previous task, an abrupt onset was present on every trial, and consequently, attentional capture might have been affected by both onset frequency and perceptual load. In the present experiment, we examined whether onset frequency influences attentional capture under conditions of high perceptual load. When onsets were presented frequently, we replicated our earlier results; attentional capture by onsets was modulated under conditions of high perceptual load. Importantly, however, when onsets were presented infrequently, we observed robust capture effects. These results conflict with a strong form of load theory and, instead, suggest that exposure to the elements of a task (e.g., abrupt onsets) combines with high perceptual load to modulate attentional capture by task-irrelevant information.

  4. Administrators in Wonderland: Leadership through the New Sciences.

    ERIC Educational Resources Information Center

    Slowinski, Joseph

    Recent theories associated with physical reality have increasingly been adapted as social-science paradigms. Chaos Theory and Perceptual Control Theory (PCT) are two advances that are applicable to the educational administration field. According to Edward Lorenz's Chaos Theory, profound changes in outcome can arise from small variations of input.…

  5. Schemata as a Reading Strategy.

    ERIC Educational Resources Information Center

    Mustapha, Zaliha

    Reading is a multileveled, interactive, and hypothesis-generating process in which readers construct a meaningful representation of text by using their knowledge of the world and of language. If reading involves grasping the significance of an input depending on the reader's mental cognitive-perceptual situation, then there is a form of background…

  6. To Honor Fechner and Obey Stevens: Relationships between Psychophysical and Neural Nonlinearities

    ERIC Educational Resources Information Center

    Billock, Vincent A.; Tsou, Brian H.

    2011-01-01

    G. T. Fechner (1860/1966) famously described two kinds of psychophysics: "Outer psychophysics" captures the black box relationship between sensory inputs and perceptual magnitudes, whereas "inner psychophysics" contains the neural transformations that Fechner's outer psychophysics elided. The relationship between the two has never been clear.…

  7. Sociocultural Input Facilitates Children's Developing Understanding of Extraordinary Minds

    ERIC Educational Resources Information Center

    Lane, Jonathan D.; Wellman, Henry M.; Evans, E. Margaret

    2012-01-01

    Three- to 5-year-old (N = 61) religiously schooled preschoolers received theory-of-mind (ToM) tasks about the mental states of ordinary humans and agents with exceptional perceptual or mental capacities. Consistent with an anthropomorphism hypothesis, children beginning to appreciate limitations of human minds (e.g., ignorance) attributed those…

  8. Recognition-by-Components: A Theory of Human Image Understanding.

    ERIC Educational Resources Information Center

    Biederman, Irving

    1987-01-01

    The theory proposed (recognition-by-components) hypothesizes the perceptual recognition of objects to be a process in which the image of the input is segmented at regions of deep concavity into an arrangement of simple geometric components. Experiments on the perception of briefly presented pictures support the theory. (Author/LMO)

  9. Transfer of perceptual learning between different visual tasks

    PubMed Central

    McGovern, David P.; Webb, Ben S.; Peirce, Jonathan W.

    2012-01-01

    Practice in most sensory tasks substantially improves perceptual performance. A hallmark of this ‘perceptual learning' is its specificity for the basic attributes of the trained stimulus and task. Recent studies have challenged the specificity of learned improvements, although transfer between substantially different tasks has yet to be demonstrated. Here, we measure the degree of transfer between three distinct perceptual tasks. Participants trained on an orientation discrimination, a curvature discrimination, or a ‘global form' task, all using stimuli comprised of multiple oriented elements. Before and after training they were tested on all three and a contrast discrimination control task. A clear transfer of learning was observed, in a pattern predicted by the relative complexity of the stimuli in the training and test tasks. Our results suggest that sensory improvements derived from perceptual learning can transfer between very different visual tasks. PMID:23048211

  10. Transfer of perceptual learning between different visual tasks.

    PubMed

    McGovern, David P; Webb, Ben S; Peirce, Jonathan W

    2012-10-09

    Practice in most sensory tasks substantially improves perceptual performance. A hallmark of this 'perceptual learning' is its specificity for the basic attributes of the trained stimulus and task. Recent studies have challenged the specificity of learned improvements, although transfer between substantially different tasks has yet to be demonstrated. Here, we measure the degree of transfer between three distinct perceptual tasks. Participants trained on an orientation discrimination, a curvature discrimination, or a 'global form' task, all using stimuli comprised of multiple oriented elements. Before and after training they were tested on all three and a contrast discrimination control task. A clear transfer of learning was observed, in a pattern predicted by the relative complexity of the stimuli in the training and test tasks. Our results suggest that sensory improvements derived from perceptual learning can transfer between very different visual tasks.

  11. Experimental orofacial pain and sensory deprivation lead to perceptual distortion of the face in healthy volunteers.

    PubMed

    Dagsdóttir, Lilja Kristín; Skyt, Ina; Vase, Lene; Baad-Hansen, Lene; Castrillon, Eduardo; Svensson, Peter

    2015-09-01

    Patients suffering from persistent orofacial pain may sporadically report that the painful area feels "swollen" or "different," a phenomenon that may be conceptualized as a perceptual distortion because there are no clinical signs of swelling present. Our aim was to investigate whether standardized experimental pain and sensory deprivation of specific orofacial test sites would lead to changes in the size perception of these face areas. Twenty-four healthy participants received either 0.2 mL hypertonic saline (HS) or local anesthetics (LA) into six regions (buccal, mental, lingual, masseter muscle, infraorbital and auriculotemporal nerve regions). Participants estimated the perceived size changes in percentage (0% = no change, −100% = half the size, +100% = double the size), and somatosensory function was checked with tactile stimuli. The pain intensity was rated on a 0-10 Verbal Numerical Rating Scale (VNRS), and sets of psychological questionnaires were completed. HS and LA were associated with significant self-reported perceptual distortions, as indicated by consistent increases in perceived size of the adjacent face areas (P ≤ 0.050). Perceptual distortion was most pronounced in the buccal region, and the smallest increase was observed in the auriculotemporal region. HS was associated with moderate levels of pain (VNRS = 7.3 ± 0.6). Weak correlations were found between HS-evoked perceptual distortion and level of dissociation in two regions (P < 0.050). Experimental pain and transient sensory deprivation evoked perceptual distortions in all face regions and overall demonstrated the importance of afferent inputs for the perception of the face. We propose that perceptual distortion may be an important phenomenon to consider in persistent orofacial pain conditions.
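    The rating scale above is anchored at three points only (0% = no change, −100% = half the size, +100% = double the size). A minimal sketch of one way to map a perceived/actual size ratio onto that scale: the anchors happen to be consistent with a log2 mapping, though that interpolation between anchors is our assumption, not something stated in the abstract, and `size_change_percent` is a hypothetical helper name.

    ```python
    import math

    def size_change_percent(ratio):
        """Map a perceived/actual size ratio onto the abstract's rating scale.

        Assumed log2 interpolation between the stated anchors:
        ratio 1.0 -> 0% (no change), 0.5 -> -100% (half the size),
        2.0 -> +100% (double the size).
        """
        return 100.0 * math.log2(ratio)

    # The three anchors reported in the abstract:
    print(size_change_percent(1.0))  # 0.0   (no change)
    print(size_change_percent(0.5))  # -100.0 (half the size)
    print(size_change_percent(2.0))  # 100.0  (double the size)
    ```

    Any other monotone curve through the same three anchors would fit the scale equally well; the log2 choice is only the simplest.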

  12. Cerebellar contributions to motor timing: a PET study of auditory and visual rhythm reproduction.

    PubMed

    Penhune, V B; Zatorre, R J; Evans, A C

    1998-11-01

    The perception and production of temporal patterns, or rhythms, is important for both music and speech. However, the way in which the human brain achieves accurate timing of perceptual input and motor output is as yet little understood. Central control of both motor timing and perceptual timing across modalities has been linked to both the cerebellum and the basal ganglia (BG). The present study was designed to test the hypothesized central control of temporal processing and to examine the roles of the cerebellum, BG, and sensory association areas. In this positron emission tomography (PET) activation paradigm, subjects reproduced rhythms of increasing temporal complexity that were presented separately in the auditory and visual modalities. The results provide support for a supramodal contribution of the lateral cerebellar cortex and cerebellar vermis to the production of a timed motor response, particularly when it is complex and/or novel. The results also give partial support to the involvement of BG structures in motor timing, although this may be more directly related to implementation of the motor response than to timing per se. Finally, sensory association areas and the ventrolateral frontal cortex were found to be involved in modality-specific encoding and retrieval of the temporal stimuli. Taken together, these results point to the participation of a number of neural structures in the production of a timed motor response from an external stimulus. The role of the cerebellum in timing is conceptualized not as a clock or counter but simply as the structure that provides the necessary circuitry for the sensory system to extract temporal information and for the motor system to learn to produce a precisely timed response.

  13. Sing that Tune: Infants’ Perception of Melody and Lyrics and the Facilitation of Phonetic Recognition in Songs

    PubMed Central

    Lebedeva, Gina C.; Kuhl, Patricia K.

    2010-01-01

    To better understand how infants process complex auditory input, this study investigated whether 11-month-old infants perceive the pitch (melodic) or the phonetic (lyric) components within songs as more salient, and whether melody facilitates phonetic recognition. Using a preferential looking paradigm, uni-dimensional and multi-dimensional songs were tested; either the pitch or syllable order of the stimuli varied. As a group, infants detected a change in pitch order in a 4-note sequence when the syllables were redundant (Experiment 1), but did not detect the identical pitch change with variegated syllables (Experiment 2). Infants were better able to detect a change in syllable order in a sung sequence (Experiment 2) than the identical syllable change in a spoken sequence (Experiment 1). These results suggest that by 11 months, infants cannot “ignore” phonetic information in the context of perceptually salient pitch variation. Moreover, the increased phonetic recognition in song contexts mirrors findings that demonstrate advantages of infant-directed speech. Findings are discussed in terms of how stimulus complexity interacts with the perception of sung speech in infancy. PMID:20472295

  14. The felt presence of other minds: Predictive processing, counterfactual predictions, and mentalising in autism.

    PubMed

    Palmer, Colin J; Seth, Anil K; Hohwy, Jakob

    2015-11-01

    The mental states of other people are components of the external world that modulate the activity of our sensory epithelia. Recent probabilistic frameworks that cast perception as unconscious inference on the external causes of sensory input can thus be expanded to enfold the brain's representation of others' mental states. This paper examines this subject in the context of the debate concerning the extent to which we have perceptual awareness of other minds. In particular, we suggest that the notion of perceptual presence helps to refine this debate: are others' mental states experienced as veridical qualities of the perceptual world around us? This experiential aspect of social cognition may be central to conditions such as autism spectrum disorder, where representations of others' mental states seem to be selectively compromised. Importantly, recent work ties perceptual presence to the counterfactual predictions of hierarchical generative models that are suggested to perform unconscious inference in the brain. This enables a characterisation of mental state representations in terms of their associated counterfactual predictions, allowing a distinction between spontaneous and explicit forms of mentalising within the framework of predictive processing. This leads to a hypothesis that social cognition in autism spectrum disorder is characterised by a diminished set of counterfactual predictions and the reduced perceptual presence of others' mental states. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Visual contribution to the multistable perception of speech.

    PubMed

    Sato, Marc; Basirat, Anahita; Schwartz, Jean-Luc

    2007-11-01

    The multistable perception of speech, or verbal transformation effect, refers to perceptual changes experienced while listening to a speech form that is repeated rapidly and continuously. In order to test whether visual information from the speaker's articulatory gestures may modify the emergence and stability of verbal auditory percepts, subjects were instructed to report any perceptual changes during unimodal, audiovisual, and incongruent audiovisual presentations of distinct repeated syllables. In a first experiment, the perceptual stability of reported auditory percepts was significantly modulated by the modality of presentation. In a second experiment, when audiovisual stimuli consisting of a stable audio track dubbed with a video track that alternated between congruent and incongruent stimuli were presented, a strong correlation between the timing of perceptual transitions and the timing of video switches was found. Finally, a third experiment showed that the vocal tract opening onset event provided by the visual input could play the role of a bootstrap mechanism in the search for transformations. Altogether, these results demonstrate the capacity of visual information to control the multistable perception of speech in its phonetic content and temporal course. The verbal transformation effect thus provides a useful experimental paradigm to explore audiovisual interactions in speech perception.

  16. Parallel language activation and inhibitory control in bimodal bilinguals.

    PubMed

    Giezen, Marcel R; Blumenfeld, Henrike K; Shook, Anthony; Marian, Viorica; Emmorey, Karen

    2015-08-01

    Findings from recent studies suggest that spoken-language bilinguals engage nonlinguistic inhibitory control mechanisms to resolve cross-linguistic competition during auditory word recognition. Bilingual advantages in inhibitory control might stem from the need to resolve perceptual competition between similar-sounding words both within and between their two languages. If so, these advantages should be lessened or eliminated when there is no perceptual competition between two languages. The present study investigated the extent of inhibitory control recruitment during bilingual language comprehension by examining associations between language co-activation and nonlinguistic inhibitory control abilities in bimodal bilinguals, whose two languages do not perceptually compete. Cross-linguistic distractor activation was identified in the visual world paradigm, and correlated significantly with performance on a nonlinguistic spatial Stroop task within a group of 27 hearing ASL-English bilinguals. Smaller Stroop effects (indexing more efficient inhibition) were associated with reduced co-activation of ASL signs during the early stages of auditory word recognition. These results suggest that inhibitory control in auditory word recognition is not limited to resolving perceptual linguistic competition in phonological input, but is also used to moderate competition that originates at the lexico-semantic level. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Kinesthetic perception based on integration of motor imagery and afferent inputs from antagonistic muscles with tendon vibration.

    PubMed

    Shibata, E; Kaneko, F

    2013-04-29

    The perceptual integration of afferent inputs from two antagonistic muscles, or the perceptual integration of afferent input and motor imagery, is related to the generation of a kinesthetic sensation. However, it has not been clarified how, or indeed whether, a kinesthetic perception would be generated by motor imagery if afferent inputs from two antagonistic muscles were simultaneously induced by tendon vibration. The purpose of this study was to investigate how a kinesthetic perception would be generated by motor imagery during co-vibration of the two antagonistic muscles at the same frequency. Healthy subjects participated in this experiment. Illusory movement was evoked by tendon vibration. Next, the subjects imaged wrist flexion movement simultaneously with tendon vibration. Wrist flexor and extensor muscles were vibrated according to four patterns such that the difference between the two vibration frequencies was zero. After each trial, the perceived movement sensations were quantified on the basis of the velocity and direction of the ipsilateral hand-tracking movements. When the difference in frequency applied to the wrist flexor and the extensor was 0Hz, no subjects perceived movements without motor imagery. However, during motor imagery, the flexion velocity of the perceived movement was higher than the flexion velocity without motor imagery. This study clarified that the afferent inputs from the muscle spindle interact with motor imagery to evoke a kinesthetic perception, even when the difference in frequency applied to the wrist flexor and extensor was 0Hz. Furthermore, the kinesthetic perception resulting from the integration of vibration and motor imagery increased depending on the vibration frequency applied to the two antagonistic muscles. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
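    The qualitative pattern in this abstract can be illustrated with a toy model. Everything here is an assumption for illustration, not the authors' model: the function name, the gain parameters `k` and `g`, and the specific functional form. Tendon vibration biases perception toward lengthening of the vibrated muscle, so the afferent term scales with the flexor-extensor frequency difference (zero during equal-frequency co-vibration), while motor imagery adds a flexion-direction term that grows with the co-vibration frequency.

    ```python
    def perceived_flexion_velocity(f_flexor, f_extensor, imagery=False,
                                   k=0.1, g=0.02):
        """Toy model of perceived wrist-flexion velocity (arbitrary units).

        Afferent term: flexor vibration signals flexor lengthening
        (i.e., extension), extensor vibration signals flexion, so the
        net drive scales with the frequency difference. Imagery term:
        a flexion bias that grows with the common vibration frequency.
        Gains k and g are arbitrary illustrative values.
        """
        velocity = k * (f_extensor - f_flexor)      # afferent contribution
        if imagery:
            velocity += g * min(f_flexor, f_extensor)  # imagery contribution
        return velocity

    # Equal-frequency co-vibration: no illusion without imagery,
    # a flexion illusion with imagery that grows with frequency.
    print(perceived_flexion_velocity(80, 80))                # 0.0
    print(perceived_flexion_velocity(80, 80, imagery=True))  # 1.6
    ```

    The model reproduces only the sign pattern reported above (no perceived movement at a 0 Hz difference without imagery; a frequency-dependent flexion percept with imagery), not any quantitative result.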

  18. Perceptual learning and human expertise

    NASA Astrophysics Data System (ADS)

    Kellman, Philip J.; Garrigan, Patrick

    2009-06-01

    We consider perceptual learning: experience-induced changes in the way perceivers extract information. Often neglected in scientific accounts of learning and in instruction, perceptual learning is a fundamental contributor to human expertise and is crucial in domains where humans show remarkable levels of attainment, such as language, chess, music, and mathematics. In Section 2, we give a brief history and discuss the relation of perceptual learning to other forms of learning. We consider in Section 3 several specific phenomena, illustrating the scope and characteristics of perceptual learning, including both discovery and fluency effects. We describe abstract perceptual learning, in which structural relationships are discovered and recognized in novel instances that do not share constituent elements or basic features. In Section 4, we consider primary concepts that have been used to explain and model perceptual learning, including receptive field change, selection, and relational recoding. In Section 5, we consider the scope of perceptual learning, contrasting recent research, focused on simple sensory discriminations, with earlier work that emphasized extraction of invariance from varied instances in more complex tasks. Contrary to some recent views, we argue that perceptual learning should not be confined to changes in early sensory analyzers. Phenomena at various levels, we suggest, can be unified by models that emphasize discovery and selection of relevant information. In a final section, we consider the potential role of perceptual learning in educational settings. Most instruction emphasizes facts and procedures that can be verbalized, whereas expertise depends heavily on implicit pattern recognition and selective extraction skills acquired through perceptual learning. 
We consider reasons why perceptual learning has not been systematically addressed in traditional instruction, and we describe recent successful efforts to create a technology of perceptual learning in areas such as aviation, mathematics, and medicine. Research in perceptual learning promises to advance scientific accounts of learning, and perceptual learning technology may offer similar promise in improving education.

  19. Syntax-induced pattern deafness

    PubMed Central

    Endress, Ansgar D.; Hauser, Marc D.

    2009-01-01

    Perceptual systems often force systematically biased interpretations upon sensory input. These interpretations are obligatory, inaccessible to conscious control, and prevent observers from perceiving alternative percepts. Here we report a similarly impenetrable phenomenon in the domain of language, where the syntactic system prevents listeners from detecting a simple perceptual pattern. Healthy human adults listened to three-word sequences conforming to patterns readily learned even by honeybees, rats, and sleeping human neonates. Specifically, sequences either started or ended with two words from the same syntactic category (e.g., noun–noun–verb or verb–verb–noun). Although participants readily processed the categories and learned repetition patterns over nonsyntactic categories (e.g., animal–animal–clothes), they failed to learn the repetition pattern over syntactic categories, even when explicitly instructed to look for it. Further experiments revealed that participants successfully learned the repetition patterns only when they were consistent with syntactically possible structures, irrespective of whether these structures were attested in English or in other languages unknown to the participants. When the repetition patterns did not match such syntactically possible structures, participants failed to learn them. Our results suggest that when human adults hear a string of nouns and verbs, their syntactic system obligatorily attempts an interpretation (e.g., in terms of subjects, objects, and predicates). As a result, subjects fail to perceive the simpler pattern of repetitions—a form of syntax-induced pattern deafness that is reminiscent of how other perceptual systems force specific interpretations upon sensory input. PMID:19920182

  20. Memory: Enduring Traces of Perceptual and Reflective Attention

    PubMed Central

    Chun, Marvin M.; Johnson, Marcia K.

    2011-01-01

    Attention and memory are typically studied as separate topics, but they are highly intertwined. Here we discuss the relation between memory and two fundamental types of attention: perceptual and reflective. Memory is the persisting consequence of cognitive activities initiated by and/or focused on external information from the environment (perceptual attention) and initiated by and/or focused on internal mental representations (reflective attention). We consider three key questions for advancing a cognitive neuroscience of attention and memory: To what extent do perception and reflection share representational areas? To what extent are the control processes that select, maintain, and manipulate perceptual and reflective information subserved by common areas and networks? During perception and reflection, to what extent are common areas responsible for binding features together to create complex, episodic memories and for reviving them later? Considering similarities and differences in perceptual and reflective attention helps integrate a broad range of findings and raises important unresolved issues. PMID:22099456

  1. Process and domain specificity in regions engaged for face processing: an fMRI study of perceptual differentiation.

    PubMed

    Collins, Heather R; Zhu, Xun; Bhatt, Ramesh S; Clark, Jonathan D; Joseph, Jane E

    2012-12-01

    The degree to which face-specific brain regions are specialized for different kinds of perceptual processing is debated. This study parametrically varied demands on featural, first-order configural, or second-order configural processing of faces and houses in a perceptual matching task to determine the extent to which the process of perceptual differentiation was selective for faces regardless of processing type (domain-specific account), specialized for specific types of perceptual processing regardless of category (process-specific account), engaged in category-optimized processing (i.e., configural face processing or featural house processing), or reflected generalized perceptual differentiation (i.e., differentiation that crosses category and processing type boundaries). ROIs were identified in a separate localizer run or with a similarity regressor in the face-matching runs. The predominant principle accounting for fMRI signal modulation in most regions was generalized perceptual differentiation. Nearly all regions showed perceptual differentiation for both faces and houses for more than one processing type, even if the region was identified as face-preferential in the localizer run. Consistent with process specificity, some regions showed perceptual differentiation for first-order processing of faces and houses (right fusiform face area and occipito-temporal cortex and right lateral occipital complex), but not for featural or second-order processing. Somewhat consistent with domain specificity, the right inferior frontal gyrus showed perceptual differentiation only for faces in the featural matching task. The present findings demonstrate that the majority of regions involved in perceptual differentiation of faces are also involved in differentiation of other visually homogenous categories.

  2. Process- and Domain-Specificity in Regions Engaged for Face Processing: An fMRI Study of Perceptual Differentiation

    PubMed Central

    Collins, Heather R.; Zhu, Xun; Bhatt, Ramesh S.; Clark, Jonathan D.; Joseph, Jane E.

    2015-01-01

    The degree to which face-specific brain regions are specialized for different kinds of perceptual processing is debated. The present study parametrically varied demands on featural, first-order configural or second-order configural processing of faces and houses in a perceptual matching task to determine the extent to which the process of perceptual differentiation was selective for faces regardless of processing type (domain-specific account), specialized for specific types of perceptual processing regardless of category (process-specific account), engaged in category-optimized processing (i.e., configural face processing or featural house processing) or reflected generalized perceptual differentiation (i.e. differentiation that crosses category and processing type boundaries). Regions of interest were identified in a separate localizer run or with a similarity regressor in the face-matching runs. The predominant principle accounting for fMRI signal modulation in most regions was generalized perceptual differentiation. Nearly all regions showed perceptual differentiation for both faces and houses for more than one processing type, even if the region was identified as face-preferential in the localizer run. Consistent with process-specificity, some regions showed perceptual differentiation for first-order processing of faces and houses (right fusiform face area and occipito-temporal cortex, and right lateral occipital complex), but not for featural or second-order processing. Somewhat consistent with domain-specificity, the right inferior frontal gyrus showed perceptual differentiation only for faces in the featural matching task. The present findings demonstrate that the majority of regions involved in perceptual differentiation of faces are also involved in differentiation of other visually homogenous categories. PMID:22849402

  3. Discovering Structure in Auditory Input: Evidence from Williams Syndrome

    ERIC Educational Resources Information Center

    Elsabbagh, Mayada; Cohen, Henri; Karmiloff-Smith, Annette

    2010-01-01

    We examined auditory perception in Williams syndrome by investigating strategies used in organizing sound patterns into coherent units. In Experiment 1, we investigated the streaming of sound sequences into perceptual units, on the basis of pitch cues, in a group of children and adults with Williams syndrome compared to typical controls. We showed…

  4. A System for Adaptive High-Variability Segmental Perceptual Training: Implementation, Effectiveness, Transfer

    ERIC Educational Resources Information Center

    Qian, Manman; Chukharev-Hudilainen, Evgeny; Levis, John

    2018-01-01

    Many types of L2 phonological perception are often difficult to acquire without instruction. These difficulties with perception may also be related to intelligibility in production. Instruction on perception contrasts is more likely to be successful with the use of phonetically variable input made available through computer-assisted pronunciation…

  5. The Interplay between Perceptual Organization and Categorization in the Representation of Complex Visual Patterns by Young Infants

    ERIC Educational Resources Information Center

    Quinn, Paul C.; Schyns, Philippe G.; Goldstone, Robert L.

    2006-01-01

    The relation between perceptual organization and categorization processes in 3- and 4-month-olds was explored. The question was whether an invariant part abstracted during category learning could interfere with Gestalt organizational processes. A 2003 study by Quinn and Schyns had reported that an initial category familiarization experience in…

  6. Lexical Categorization Modalities in Pre-School Children: Influence of Perceptual and Verbal Tasks

    ERIC Educational Resources Information Center

    Tallandini, Maria Anna; Roia, Anna

    2005-01-01

    This study investigates how categorical organization functions in pre-school children, focusing on the dichotomy between living and nonliving things. The variables of familiarity, frequency of word use and perceptual complexity were controlled. Sixty children aged between 4 years and 5 years 10 months were investigated. Three tasks were used: a…

  7. Visual speech information: a help or hindrance in perceptual processing of dysarthric speech.

    PubMed

    Borrie, Stephanie A

    2015-03-01

    This study investigated the influence of visual speech information on perceptual processing of neurologically degraded speech. Fifty listeners identified spastic dysarthric speech under both audio (A) and audiovisual (AV) conditions. Condition comparisons revealed that the addition of visual speech information enhanced processing of the neurologically degraded input in terms of (a) acuity (percent phonemes correct) of vowels and consonants and (b) recognition (percent words correct) of predictive and nonpredictive phrases. Listeners exploited stress-based segmentation strategies more readily in AV conditions, suggesting that the perceptual benefit associated with adding visual speech information to the auditory signal-the AV advantage-has both segmental and suprasegmental origins. Results also revealed that the magnitude of the AV advantage can be predicted, to some degree, by the extent to which an individual utilizes syllabic stress cues to inform word recognition in AV conditions. Findings inform the development of a listener-specific model of speech perception that applies to processing of dysarthric speech in everyday communication contexts.

  8. Influence of early attentional modulation on working memory

    PubMed Central

    Gazzaley, Adam

    2011-01-01

    It is now established that attention influences working memory (WM) at multiple processing stages. This liaison between attention and WM poses several interesting empirical questions. Notably, does attention impact WM via its influences on early perceptual processing? If so, what are the critical factors at play in this attention-perception-WM interaction? I review recent data from our laboratory utilizing a variety of techniques (electroencephalography (EEG), functional MRI (fMRI), and transcranial magnetic stimulation (TMS)), stimuli (features and complex objects), novel experimental paradigms, and research populations (younger and older adults), which converge to support the conclusion that top-down modulation of visual cortical activity at early perceptual processing stages (100–200 ms after stimulus onset) impacts subsequent WM performance. Factors that affect attentional control at this stage include cognitive load, task practice, perceptual training, and aging. These developments highlight the complex and dynamic relationships among perception, attention, and memory. PMID:21184764

  9. What can fish brains tell us about visual perception?

    PubMed Central

    Rosa Salva, Orsola; Sovrano, Valeria Anna; Vallortigara, Giorgio

    2014-01-01

    Fish are a complex taxonomic group, whose diversity and distance from other vertebrates well suit the comparative investigation of brain and behavior: in fish species we observe substantial differences with respect to the telencephalic organization of other vertebrates and an astonishing variety in the development and complexity of pallial structures. We will concentrate on the contribution of research on fish behavioral biology to the understanding of the evolution of the visual system. We shall review evidence concerning perceptual effects that reflect fundamental principles of visual system function, highlighting the similarities and differences between distant fish groups and with other vertebrates. We will focus on perceptual effects reflecting some of the main tasks that the visual system must accomplish. In particular, we will deal with subjective contours and optical illusions, invariance effects, second-order motion and biological motion and, finally, perceptual binding of object properties in a unified higher-level representation. PMID:25324728

  10. Learning to Link Visual Contours

    PubMed Central

    Li, Wu; Piëch, Valentin; Gilbert, Charles D.

    2008-01-01

    In complex visual scenes, linking related contour elements is important for object recognition. This process, thought to be stimulus driven and hard-wired, has substrates in primary visual cortex (V1). Here, however, we find contour integration in V1 to depend strongly on perceptual learning and top-down influences that are specific to contour detection. In naive monkeys the information about contours embedded in complex backgrounds is absent in V1 neuronal responses, and is independent of the locus of spatial attention. Training animals to find embedded contours induces strong contour-related responses specific to the trained retinotopic region. These responses are most robust when animals perform the contour detection task, but disappear under anesthesia. Our findings suggest that top-down influences dynamically adapt neural circuits according to specific perceptual tasks. This may serve as a general neuronal mechanism of perceptual learning, and reflect top-down mediated changes in cortical states. PMID:18255036

  11. Behavioral and electrophysiological evidence for early and automatic detection of phonological equivalence in variable speech inputs.

    PubMed

    Kharlamov, Viktor; Campbell, Kenneth; Kazanina, Nina

    2011-11-01

    Speech sounds are not always perceived in accordance with their acoustic-phonetic content. For example, an early and automatic process of perceptual repair, which ensures conformity of speech inputs to the listener's native language phonology, applies to individual input segments that do not exist in the native inventory or to sound sequences that are illicit according to the native phonotactic restrictions on sound co-occurrences. The present study with Russian and Canadian English speakers shows that listeners may perceive phonetically distinct and licit sound sequences as equivalent when the native language system provides robust evidence for mapping multiple phonetic forms onto a single phonological representation. In Russian, due to an optional but productive t-deletion process that affects /stn/ clusters, the surface forms [sn] and [stn] may be phonologically equivalent and map to a single phonological form /stn/. In contrast, [sn] and [stn] clusters are usually phonologically distinct in (Canadian) English. Behavioral data from identification and discrimination tasks indicated that [sn] and [stn] clusters were more confusable for Russian than for English speakers. The EEG experiment employed an oddball paradigm with nonwords [asna] and [astna] used as the standard and deviant stimuli. A reliable mismatch negativity response was elicited approximately 100 msec postchange in the English group but not in the Russian group. These findings point to a perceptual repair mechanism that is engaged automatically at a prelexical level to ensure immediate encoding of speech inputs in phonological terms, which in turn enables efficient access to the meaning of a spoken utterance.
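
The oddball paradigm used in the EEG experiment can be sketched in a few lines of Python. The 15% deviant probability, the no-adjacent-deviants constraint, and the trial count below are illustrative assumptions; the abstract does not report these parameters.

```python
import random

def oddball_sequence(n_trials, p_deviant=0.15, standard="asna",
                     deviant="astna", seed=1):
    """Pseudorandom oddball sequence: mostly standards, occasional deviants,
    with the common constraint that two deviants never occur in a row."""
    rng = random.Random(seed)
    seq = [standard]  # conventionally start on a standard
    while len(seq) < n_trials:
        # A deviant is only allowed after a standard
        if seq[-1] != deviant and rng.random() < p_deviant:
            seq.append(deviant)
        else:
            seq.append(standard)
    return seq

seq = oddball_sequence(400)
deviant_rate = seq.count("astna") / len(seq)
```

Because a deviant can only follow a standard, the realized deviant rate comes out slightly below the nominal probability, a known side effect of this constraint.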

  12. A Neural Model of Chromatic Induction in Uniform and Textured Images and Psychophysical Detection of Non-Opponent Chromatic Qualia

    ERIC Educational Resources Information Center

    Livitz, Gennady

    2011-01-01

    Color is a complex and rich perceptual phenomenon that relates physical properties of light to certain perceptual qualia associated with vision. Hering's opponent color theory, widely regarded as capturing the most fundamental aspects of color phenomenology, suggests that certain unique hues are mutually exclusive as components of a single color.…

  13. Uncovering productive morphosyntax in French-learning toddlers: a multidimensional methodology perspective.

    PubMed

    Barrière, Isabelle; Goyet, Louise; Kresh, Sarah; Legendre, Géraldine; Nazzi, Thierry

    2016-09-01

    The present study applies a multidimensional methodological approach to the study of the acquisition of morphosyntax. It focuses on evaluating the degree of productivity of an infrequent subject-verb agreement pattern in the early acquisition of French and considers the explanatory role played by factors such as input frequency, semantic transparency of the agreement markers, and perceptual factors in accounting for comprehension of agreement in number (singular vs. plural) in an experimental setting. Results on a pointing task involving pseudo-verbs demonstrate significant comprehension of both singular and plural agreement in children aged 2;6. The experimental results are shown not to reflect input frequency, input marker reliability on its own, or lexically driven knowledge. We conclude that toddlers have knowledge of subject-verb agreement at age 2;6 which is abstract and productive despite its paucity in the input.

  14. Dimension-Based Statistical Learning Affects Both Speech Perception and Production.

    PubMed

    Lehet, Matthew; Holt, Lori L

    2017-04-01

    Multiple acoustic dimensions signal speech categories. However, dimensions vary in their informativeness; some are more diagnostic of category membership than others. Speech categorization reflects these dimensional regularities such that diagnostic dimensions carry more "perceptual weight" and more effectively signal category membership to native listeners. Yet perceptual weights are malleable. When short-term experience deviates from long-term language norms, such as in a foreign accent, the perceptual weight of acoustic dimensions in signaling speech category membership rapidly adjusts. The present study investigated whether rapid adjustments in listeners' perceptual weights in response to speech that deviates from the norms also affects listeners' own speech productions. In a word recognition task, the correlation between two acoustic dimensions signaling consonant categories, fundamental frequency (F0) and voice onset time (VOT), matched the correlation typical of English, and then shifted to an "artificial accent" that reversed the relationship, and then shifted back. Brief, incidental exposure to the artificial accent caused participants to down-weight perceptual reliance on F0, consistent with previous research. Throughout the task, participants were intermittently prompted with pictures to produce these same words. In the block in which listeners heard the artificial accent with a reversed F0 × VOT correlation, F0 was a less robust cue to voicing in listeners' own speech productions. The statistical regularities of short-term speech input affect both speech perception and production, as evidenced via shifts in how acoustic dimensions are weighted.
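
A toy normative model makes the cue-reweighting result concrete: treat each cue's perceptual weight as its correlation with category membership over recent input, so that reversing the F0 relationship in an "artificial accent" block flips the F0 weight while VOT stays reliable. The distributions, noise levels, and block sizes below are invented for illustration and are not the study's stimulus values.

```python
import random
import statistics

def corr(xs, ys):
    """Pearson correlation, used here as a stand-in for perceptual weight."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def make_block(n, f0_sign, rng):
    """One exposure block. Category c is 0 (voiced) or 1 (voiceless); VOT
    always tracks c, while F0 tracks c with the given sign: +1 mimics the
    canonical English correlation, -1 the reversed 'artificial accent'."""
    trials = []
    for _ in range(n):
        c = rng.randint(0, 1)
        vot = c + rng.gauss(0, 0.3)
        f0 = f0_sign * c + rng.gauss(0, 0.3)
        trials.append((vot, f0, c))
    return trials

def cue_weights(trials):
    """Weight each cue by how well it predicts the category in this block."""
    vots, f0s, cs = zip(*trials)
    return corr(vots, cs), corr(f0s, cs)

rng = random.Random(0)
w_vot_can, w_f0_can = cue_weights(make_block(500, +1, rng))  # canonical block
w_vot_acc, w_f0_acc = cue_weights(make_block(500, -1, rng))  # reversed block
```

In the canonical block both weights are strongly positive; in the reversed block the F0 weight goes negative while the VOT weight is unchanged, the qualitative pattern of down-weighting the now-unreliable cue.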

  15. Dimension-based statistical learning affects both speech perception and production

    PubMed Central

    Lehet, Matthew; Holt, Lori L.

    2016-01-01

    Multiple acoustic dimensions signal speech categories. However, dimensions vary in their informativeness; some are more diagnostic of category membership than others. Speech categorization reflects these dimensional regularities such that diagnostic dimensions carry more “perceptual weight” and more effectively signal category membership to native listeners. Yet, perceptual weights are malleable. When short-term experience deviates from long-term language norms, such as in a foreign accent, the perceptual weight of acoustic dimensions in signaling speech category membership rapidly adjusts. The present study investigated whether rapid adjustments in listeners’ perceptual weights in response to speech that deviates from the norms also affects listeners’ own speech productions. In a word recognition task, the correlation between two acoustic dimensions signaling consonant categories, fundamental frequency (F0) and voice onset time (VOT), matched the correlation typical of English, then shifted to an “artificial accent” that reversed the relationship, and then shifted back. Brief, incidental exposure to the artificial accent caused participants to down-weight perceptual reliance on F0, consistent with previous research. Throughout the task, participants were intermittently prompted with pictures to produce these same words. In the block in which listeners heard the artificial accent with a reversed F0 × VOT correlation, F0 was a less robust cue to voicing in listeners’ own speech productions. The statistical regularities of short-term speech input affect both speech perception and production, as evidenced via shifts in how acoustic dimensions are weighted. PMID:27666146

  16. The Interplay between Input and Initial Biases: Asymmetries in Vowel Perception during the First Year of Life

    ERIC Educational Resources Information Center

    Pons, Ferran; Albareda-Castellot, Barbara; Sebastian-Galles, Nuria

    2012-01-01

    Vowels with extreme articulatory-acoustic properties act as natural referents. Infant perceptual asymmetries point to an underlying bias favoring these referent vowels. However, as language experience is gathered, distributional frequency of speech sounds could modify this initial bias. The perception of the /i/-/e/ contrast was explored in 144…

  17. Gleaning Structure from Sound: The Role of Prosodic Contrast in Learning Non-Adjacent Dependencies

    ERIC Educational Resources Information Center

    Grama, Ileana C.; Kerkhoff, Annemarie; Wijnen, Frank

    2016-01-01

    The ability to detect non-adjacent dependencies (i.e. between "a" and "b" in "aXb") in spoken input may support the acquisition of morpho-syntactic dependencies (e.g. "The princess 'is' kiss'ing' the frog"). Functional morphemes in morpho-syntactic dependencies are often marked by perceptual cues that render…

  18. Effects of Variance and Input Distribution on the Training of L2 Learners' Tone Categorization

    ERIC Educational Resources Information Center

    Liu, Jiang

    2013-01-01

    Recent psycholinguistic findings showed that (a) a multi-modal phonetic training paradigm that encodes visual, interactive information is more effective in training L2 learners' perception of novel categories, (b) decreasing the acoustic variance of a phonetic dimension allows the learners to more effectively shift the perceptual weight towards…

  19. When a Dog Has a Pen for a Tail: The Time Course of Creative Object Processing

    ERIC Educational Resources Information Center

    Wang, Botao; Duan, Haijun; Qi, Senqing; Hu, Weiping; Zhang, Huan

    2017-01-01

    Creative objects differ from ordinary objects in that they are created by human beings to contain novel, creative information. Previous research has demonstrated that ordinary object processing involves both a perceptual process for analyzing different features of the visual input and a higher-order process for evaluating the relevance of this…

  20. An Advantage for Perceptual Edges in Young Infants' Memory for Speech

    ERIC Educational Resources Information Center

    Hochmann, Jean-Rémy; Langus, Alan; Mehler, Jacques

    2016-01-01

    Models of language acquisition are constrained by the information that learners can extract from their input. Experiment 1 investigated whether 3-month-old infants are able to encode a repeated, unsegmented sequence of five syllables. Event-related-potentials showed that infants reacted to a change of the initial or the final syllable, but not to…

  1. Visual Complexity in Orthographic Learning: Modeling Learning across Writing System Variations

    ERIC Educational Resources Information Center

    Chang, Li-Yun; Plaut, David C.; Perfetti, Charles A.

    2016-01-01

    The visual complexity of orthographies varies across writing systems. Prior research has shown that complexity strongly influences the initial stage of reading development: the perceptual learning of grapheme forms. This study presents a computational simulation that examines the degree to which visual complexity leads to grapheme learning…

  2. Relationship between perceptual learning in speech and statistical learning in younger and older adults

    PubMed Central

    Neger, Thordis M.; Rietveld, Toni; Janse, Esther

    2014-01-01

    Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech relates to statistical learning, as sensitivity to probabilistic information may aid identification of relevant cues in novel speech input. If statistical learning and perceptual learning (partly) draw on the same general mechanisms, then statistical learning in a non-auditory modality using non-linguistic sequences should predict adaptation to degraded speech. In the present study, 73 older adults (aged over 60 years) and 60 younger adults (aged between 18 and 30 years) performed a visual artificial grammar learning task and were presented with 60 meaningful noise-vocoded sentences in an auditory recall task. Within age groups, sentence recognition performance over exposure was analyzed as a function of statistical learning performance, and other variables that may predict learning (i.e., hearing, vocabulary, attention switching control, working memory, and processing speed). Younger and older adults showed similar amounts of perceptual learning, but only younger adults showed significant statistical learning. In older adults, improvement in understanding noise-vocoded speech was constrained by age. In younger adults, amount of adaptation was associated with lexical knowledge and with statistical learning ability. Thus, individual differences in general cognitive abilities explain listeners' variability in adapting to noise-vocoded speech. Results suggest that perceptual and statistical learning share mechanisms of implicit regularity detection, but that the ability to detect statistical regularities is impaired in older adults if visual sequences are presented quickly. PMID:25225475
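
Artificial grammar learning tasks of this kind typically use a Reber-style finite-state grammar. The transition graph below is an illustrative stand-in (the study's actual grammar is not given in the abstract): it generates grammatical strings for exposure and checks test strings for grammaticality.

```python
import random

# Illustrative finite-state grammar. transitions[state] is a list of
# (symbol, next_state); next_state None means "exit accepting".
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 3)],
    3: [("X", 2), ("S", None), ("V", None)],
}

def generate(rng, max_len=12):
    """Random walk through the grammar; retry if a walk runs too long."""
    while True:
        state, out = 0, []
        while state is not None and len(out) <= max_len:
            sym, state = rng.choice(GRAMMAR[state])
            out.append(sym)
        if state is None:
            return "".join(out)

def grammatical(s):
    """NFA simulation: accept if some path consumes all of s and exits."""
    states = {0}
    for ch in s:
        states = {nxt for st in states if st is not None
                  for sym, nxt in GRAMMAR[st] if sym == ch}
        if not states:
            return False
    return None in states

rng = random.Random(7)
strings = [generate(rng) for _ in range(10)]
```

A grammaticality judgment task then mixes generated strings with foils that violate the transition graph, and sensitivity to the difference indexes statistical learning.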

  3. Relationship between perceptual learning in speech and statistical learning in younger and older adults.

    PubMed

    Neger, Thordis M; Rietveld, Toni; Janse, Esther

    2014-01-01

    Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech relates to statistical learning, as sensitivity to probabilistic information may aid identification of relevant cues in novel speech input. If statistical learning and perceptual learning (partly) draw on the same general mechanisms, then statistical learning in a non-auditory modality using non-linguistic sequences should predict adaptation to degraded speech. In the present study, 73 older adults (aged over 60 years) and 60 younger adults (aged between 18 and 30 years) performed a visual artificial grammar learning task and were presented with 60 meaningful noise-vocoded sentences in an auditory recall task. Within age groups, sentence recognition performance over exposure was analyzed as a function of statistical learning performance, and other variables that may predict learning (i.e., hearing, vocabulary, attention switching control, working memory, and processing speed). Younger and older adults showed similar amounts of perceptual learning, but only younger adults showed significant statistical learning. In older adults, improvement in understanding noise-vocoded speech was constrained by age. In younger adults, amount of adaptation was associated with lexical knowledge and with statistical learning ability. Thus, individual differences in general cognitive abilities explain listeners' variability in adapting to noise-vocoded speech. Results suggest that perceptual and statistical learning share mechanisms of implicit regularity detection, but that the ability to detect statistical regularities is impaired in older adults if visual sequences are presented quickly.

  4. Constraints on the Transfer of Perceptual Learning in Accented Speech

    PubMed Central

    Eisner, Frank; Melinger, Alissa; Weber, Andrea

    2013-01-01

    The perception of speech sounds can be re-tuned through a mechanism of lexically driven perceptual learning after exposure to instances of atypical speech production. This study asked whether this re-tuning is sensitive to the position of the atypical sound within the word. We investigated perceptual learning using English voiced stop consonants, which are commonly devoiced in word-final position by Dutch learners of English. After exposure to a Dutch learner’s productions of devoiced stops in word-final position (but not in any other positions), British English (BE) listeners showed evidence of perceptual learning in a subsequent cross-modal priming task, where auditory primes with devoiced final stops (e.g., “seed”, pronounced [siːtʰ]) facilitated recognition of visual targets with voiced final stops (e.g., SEED). In Experiment 1, this learning effect generalized to test pairs where the critical contrast was in word-initial position, e.g., auditory primes such as “town” facilitated recognition of visual targets like DOWN. Control listeners, who had not heard any stops by the speaker during exposure, showed no learning effects. The generalization to word-initial position did not occur when participants had also heard correctly voiced, word-initial stops during exposure (Experiment 2), nor when the speaker was a native BE speaker who mimicked the word-final devoicing (Experiment 3). The readiness of the perceptual system to generalize a previously learned adjustment to other positions within the word thus appears to be modulated by distributional properties of the speech input, as well as by the perceived sociophonetic characteristics of the speaker. The results suggest that the transfer of pre-lexical perceptual adjustments that occur through lexically driven learning can be affected by a combination of acoustic, phonological, and sociophonetic factors. PMID:23554598

  5. Investigation of nonlinear motion simulator washout schemes

    NASA Technical Reports Server (NTRS)

    Riedel, S. A.; Hofmann, L. G.

    1978-01-01

    An overview is presented of some of the promising washout schemes which have been devised. The four schemes presented fall into two basic configurations; crossfeed and crossproduct. Various nonlinear modifications further differentiate the four schemes. One nonlinear scheme is discussed in detail. This washout scheme takes advantage of subliminal motions to speed up simulator cab centering. It exploits so-called perceptual indifference thresholds to center the simulator cab at a faster rate whenever the input to the simulator is below the perceptual indifference level. The effect is to reduce the angular and translational simulation motion by comparison with that for the linear washout case. Finally, the conclusions and implications for further research in the area of nonlinear washout filters are presented.
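
The threshold-based centering scheme can be sketched as a first-order high-pass washout whose break frequency is raised whenever the input magnitude falls below an assumed perceptual indifference threshold. The filter order, gains, and threshold value here are invented for illustration and are not taken from the report.

```python
def washout(signal, dt=0.01, wn_normal=0.5, wn_fast=2.0, threshold=0.05):
    """First-order high-pass washout (y' = u' - wn*y): onsets pass through,
    sustained input is washed out so the cab drifts back to center. The
    break frequency wn is raised whenever |input| is below the assumed
    perceptual indifference threshold, re-centering faster while the
    extra motion should go unnoticed."""
    y, out, u_prev = 0.0, [], signal[0]
    for u in signal:
        wn = wn_fast if abs(u) < threshold else wn_normal
        y += (u - u_prev) - wn * y * dt  # forward-Euler step of y' = u' - wn*y
        u_prev = u
        out.append(y)
    return out

# Step on, hold, step off: the onset passes through, the hold decays slowly,
# and the return to center is fast because the input sits below threshold.
sig = [0.0] * 10 + [1.0] * 200 + [0.0] * 200
out = washout(sig)
```

With these toy numbers the cab position decays with a 2 s time constant while the input is above threshold, and four times faster once the input drops below it, which is the subliminal-centering effect the scheme exploits.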

  6. Lexical Processing in Toddlers with ASD: Does Weak Central Coherence Play a Role?

    PubMed

    Ellis Weismer, Susan; Haebig, Eileen; Edwards, Jan; Saffran, Jenny; Venker, Courtney E

    2016-12-01

    This study investigated whether vocabulary delays in toddlers with autism spectrum disorders (ASD) can be explained by a cognitive style that prioritizes processing of detailed, local features of input over global contextual integration, as claimed by the weak central coherence (WCC) theory. Thirty toddlers with ASD and 30 younger, cognition-matched typical controls participated in a looking-while-listening task that assessed whether perceptual or semantic similarities among named images disrupted word recognition relative to a neutral condition. Overlap of perceptual features invited local processing, whereas semantic overlap invited global processing. With the possible exception of a subset of toddlers who had very low vocabulary skills, these results provide no evidence that WCC is characteristic of lexical processing in toddlers with ASD.

  7. Temporal characteristics of the influence of punishment on perceptual decision making in the human brain.

    PubMed

    Blank, Helen; Biele, Guido; Heekeren, Hauke R; Philiastides, Marios G

    2013-02-27

    Perceptual decision making is the process by which information from sensory systems is combined and used to influence our behavior. In addition to the sensory input, this process can be affected by other factors, such as reward and punishment for correct and incorrect responses. To investigate the temporal dynamics of how monetary punishment influences perceptual decision making in humans, we collected electroencephalography (EEG) data during a perceptual categorization task whereby the punishment level for incorrect responses was parametrically manipulated across blocks of trials. Behaviorally, we observed improved accuracy for high relative to low punishment levels. Using multivariate linear discriminant analysis of the EEG, we identified multiple punishment-induced discriminating components with spatially distinct scalp topographies. Compared with components related to sensory evidence, components discriminating punishment levels appeared later in the trial, suggesting that punishment affects primarily late postsensory, decision-related processing. Crucially, the amplitude of these punishment components across participants was predictive of the size of the behavioral improvements induced by punishment. Finally, trial-by-trial changes in prestimulus oscillatory activity in the alpha and gamma bands were good predictors of the amplitude of these components. We discuss these findings in the context of increased motivation/attention, resulting from increases in punishment, which in turn yields improved decision-related processing.
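
The multivariate linear discriminant analysis applied to the EEG can be illustrated with a two-channel toy: Fisher's discriminant w = Sw^-1 (m_high - m_low) finds the sensor weighting that best separates high- from low-punishment trials. The simulated amplitudes below are invented for illustration; real analyses operate on many channels and time windows.

```python
import random

def fisher_lda(class_a, class_b):
    """Two-class Fisher discriminant in 2-D: w = Sw^-1 (m_a - m_b),
    where Sw is the pooled within-class scatter matrix."""
    def mean(rows):
        n = float(len(rows))
        return (sum(r[0] for r in rows) / n, sum(r[1] for r in rows) / n)

    def scatter(rows, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for x, y in rows:
            dx, dy = x - m[0], y - m[1]
            s[0][0] += dx * dx; s[0][1] += dx * dy
            s[1][0] += dy * dx; s[1][1] += dy * dy
        return s

    ma, mb = mean(class_a), mean(class_b)
    sa, sb = scatter(class_a, ma), scatter(class_b, mb)
    sw = [[sa[i][j] + sb[i][j] for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    d = (ma[0] - mb[0], ma[1] - mb[1])
    # w = Sw^-1 d via the closed-form 2x2 inverse
    return ((sw[1][1] * d[0] - sw[0][1] * d[1]) / det,
            (-sw[1][0] * d[0] + sw[0][0] * d[1]) / det)

rng = random.Random(0)
# Simulated single-trial amplitudes at two sensors (toy numbers): the
# high-punishment condition shifts sensor 1 more than sensor 2.
high = [(rng.gauss(1.0, 1.0), rng.gauss(0.3, 1.0)) for _ in range(300)]
low = [(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)) for _ in range(300)]
w = fisher_lda(high, low)

# Project trials onto w and classify against the midpoint of the means.
ph = [w[0] * x + w[1] * y for x, y in high]
pl = [w[0] * x + w[1] * y for x, y in low]
thresh = (sum(ph) / len(ph) + sum(pl) / len(pl)) / 2
acc = (sum(p > thresh for p in ph) + sum(p <= thresh for p in pl)) / 600
```

The recovered weight vector loads most heavily on the sensor that actually discriminates the conditions, which is how such analyses yield the spatially distinct scalp topographies described above.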

  8. Dorso-Lateral Frontal Cortex of the Ferret Encodes Perceptual Difficulty during Visual Discrimination

    PubMed Central

    Zhou, Zhe Charles; Yu, Chunxiu; Sellers, Kristin K.; Fröhlich, Flavio

    2016-01-01

    Visual discrimination requires sensory processing followed by a perceptual decision. Despite a growing understanding of visual areas in this behavior, it is unclear what role top-down signals from prefrontal cortex play, in particular as a function of perceptual difficulty. To address this gap, we investigated how neurons in dorso-lateral frontal cortex (dl-FC) of freely-moving ferrets encode task variables in a two-alternative forced choice visual discrimination task with high- and low-contrast visual input. About two-thirds of all recorded neurons in dl-FC were modulated by at least one of the two task variables, task difficulty and target location. More neurons in dl-FC preferred the hard trials; no such preference bias was found for target location. In individual neurons, this preference for specific task types was limited to brief epochs. Finally, optogenetic stimulation confirmed the functional role of the activity in dl-FC before target touch; suppression of activity in pyramidal neurons with the ArchT silencing opsin resulted in a decrease in reaction time to touch the target but not to retrieve reward. In conclusion, dl-FC activity is differentially recruited for high perceptual difficulty in the freely-moving ferret and the resulting signal may provide top-down behavioral inhibition. PMID:27025995

  9. Neural representation of form-contingent color filling-in in the early visual cortex.

    PubMed

    Hong, Sang Wook; Tong, Frank

    2017-11-01

    Perceptual filling-in exemplifies the constructive nature of visual processing. Color, a prominent surface property of visual objects, can appear to spread to neighboring areas that lack any color. We investigated cortical responses to a color filling-in illusion that effectively dissociates perceived color from the retinal input (van Lier, Vergeer, & Anstis, 2009). Observers adapted to a star-shaped stimulus with alternating red- and cyan-colored points to elicit a complementary afterimage. By presenting an achromatic outline that enclosed one of the two afterimage colors, perceptual filling-in of that color was induced in the unadapted central region. Visual cortical activity was monitored with fMRI, and analyzed using multivariate pattern analysis. Activity patterns in early visual areas (V1-V4) reliably distinguished between the two color-induced filled-in conditions, but only higher extrastriate visual areas showed the predicted correspondence with color perception. Activity patterns allowed for reliable generalization between filled-in colors and physical presentations of perceptually matched colors in areas V3 and V4, but not in earlier visual areas. These findings suggest that the perception of filled-in surface color likely requires more extensive processing by extrastriate visual areas, in order for the neural representation of surface color to become aligned with perceptually matched real colors.

  10. Dorso-Lateral Frontal Cortex of the Ferret Encodes Perceptual Difficulty during Visual Discrimination.

    PubMed

    Zhou, Zhe Charles; Yu, Chunxiu; Sellers, Kristin K; Fröhlich, Flavio

    2016-03-30

    Visual discrimination requires sensory processing followed by a perceptual decision. Despite a growing understanding of visual areas in this behavior, it is unclear what role top-down signals from prefrontal cortex play, in particular as a function of perceptual difficulty. To address this gap, we investigated how neurons in dorso-lateral frontal cortex (dl-FC) of freely-moving ferrets encode task variables in a two-alternative forced choice visual discrimination task with high- and low-contrast visual input. About two-thirds of all recorded neurons in dl-FC were modulated by at least one of the two task variables, task difficulty and target location. More neurons in dl-FC preferred the hard trials; no such preference bias was found for target location. In individual neurons, this preference for specific task types was limited to brief epochs. Finally, optogenetic stimulation confirmed the functional role of the activity in dl-FC before target touch; suppression of activity in pyramidal neurons with the ArchT silencing opsin resulted in a decrease in reaction time to touch the target but not to retrieve reward. In conclusion, dl-FC activity is differentially recruited for high perceptual difficulty in the freely-moving ferret and the resulting signal may provide top-down behavioral inhibition.

  11. Facial emotion recognition in paranoid schizophrenia and autism spectrum disorder.

    PubMed

    Sachse, Michael; Schlitt, Sabine; Hainz, Daniela; Ciaramidaro, Angela; Walter, Henrik; Poustka, Fritz; Bölte, Sven; Freitag, Christine M

    2014-11-01

    Schizophrenia (SZ) and autism spectrum disorder (ASD) share deficits in emotion processing. To identify convergent and divergent mechanisms, we investigated facial emotion recognition in SZ, high-functioning ASD (HFASD), and typically developed controls (TD). Different degrees of task difficulty and emotion complexity (face, eyes; basic emotions, complex emotions) were used. Two Benton tests were administered to assess potentially confounding visuo-perceptual functioning and face processing. Nineteen participants with paranoid SZ, 22 with HFASD, and 20 TD were included, aged between 14 and 33 years. Individuals with SZ were comparable to TD in all obtained emotion recognition measures, but showed reduced basic visuo-perceptual abilities. The HFASD group was impaired in the recognition of basic and complex emotions compared to both SZ and TD. When facial identity recognition was adjusted for, group differences remained for the recognition of complex emotions only. Our results suggest that there is a SZ subgroup with predominantly paranoid symptoms that does not show problems in face processing and emotion recognition, but visuo-perceptual impairments. They also confirm the notion of a general facial and emotion recognition deficit in HFASD. No shared emotion recognition deficit was found for paranoid SZ and HFASD, emphasizing the differential cognitive underpinnings of the two disorders. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Memory: enduring traces of perceptual and reflective attention.

    PubMed

    Chun, Marvin M; Johnson, Marcia K

    2011-11-17

    Attention and memory are typically studied as separate topics, but they are highly intertwined. Here we discuss the relation between memory and two fundamental types of attention: perceptual and reflective. Memory is the persisting consequence of cognitive activities initiated by and/or focused on external information from the environment (perceptual attention) and initiated by and/or focused on internal mental representations (reflective attention). We consider three key questions for advancing a cognitive neuroscience of attention and memory: to what extent do perception and reflection share representational areas? To what extent are the control processes that select, maintain, and manipulate perceptual and reflective information subserved by common areas and networks? During perception and reflection, to what extent are common areas responsible for binding features together to create complex, episodic memories and for reviving them later? Considering similarities and differences in perceptual and reflective attention helps integrate a broad range of findings and raises important unresolved issues. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. Perceptual attributes for the comparison of head-related transfer functions.

    PubMed

    Simon, Laurent S R; Zacharov, Nick; Katz, Brian F G

    2016-11-01

    The benefit of using individual head-related transfer functions (HRTFs) in binaural audio is well documented with regard to improving localization precision. However, with the increased use of binaural audio in more complex scene renderings, cognitive studies, and virtual and augmented reality simulations, the perceptual impact of HRTF selection may go beyond simple localization. In this study, the authors develop a list of attributes which qualify the perceived differences between HRTFs, providing a qualitative understanding of the perceptual variance of non-individual binaural renderings. The list of attributes was designed using a Consensus Vocabulary Protocol elicitation method. Participants followed an Individual Vocabulary Protocol elicitation procedure, describing the perceived differences between binaural stimuli based on binauralized extracts of multichannel productions. This was followed by an automated lexical reduction and a series of consensus group meetings during which participants agreed on a list of relevant attributes. Finally, the proposed list of attributes was evaluated through a listening test, leading to eight valid perceptual attributes for describing the perceptual dimensions affected by HRTF set variations.

  14. Perceptual suppression revealed by adaptive multi-scale entropy analysis of local field potential in monkey visual cortex.

    PubMed

    Hu, Meng; Liang, Hualou

    2013-04-01

    Generalized flash suppression (GFS), in which a salient visual stimulus can be rendered invisible despite continuous retinal input, provides a rare opportunity to directly study the neural mechanism of visual perception. Previous work based on linear methods, such as spectral analysis, on local field potential (LFP) during GFS has shown that the LFP power in distinct frequency bands is differentially modulated by perceptual suppression. Yet, the linear method alone may be insufficient for the full assessment of neural dynamics due to the fundamentally nonlinear nature of neural signals. In this study, we set out to analyze the LFP data collected from visual areas V1, V2, and V4 of macaque monkeys performing the GFS task, using a nonlinear method, adaptive multi-scale entropy (AME), to reveal the neural dynamics of perceptual suppression. In addition, we propose a new cross-entropy measure at multiple scales, namely adaptive multi-scale cross-entropy (AMCE), to assess the nonlinear functional connectivity between two cortical areas. We show that: (1) multi-scale entropy exhibits percept-related changes in all three areas, with higher entropy observed during perceptual suppression; (2) the magnitude of the perception-related entropy changes increases systematically over successive hierarchical stages (i.e., from lower area V1, through V2, up to higher area V4); and (3) cross-entropy between any two cortical areas reveals a higher degree of asynchrony or dissimilarity during perceptual suppression, indicating a decreased functional connectivity between cortical areas. These results, taken together, suggest that perceptual suppression is related to a reduced functional connectivity and increased uncertainty of neural responses, and that the modulation of perceptual suppression is more effective at higher visual cortical areas. AME is demonstrated to be a useful technique for revealing the underlying dynamics of nonlinear/nonstationary neural signals.
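
    The multi-scale entropy idea underlying AME can be sketched in its conventional, non-adaptive form: coarse-grain the signal at several scales, then compute sample entropy at each scale. The adaptive, data-driven variant the authors propose is not reproduced here; signal lengths, scales, and the tolerance r below are illustrative choices.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -log of the conditional probability that template
    pairs matching for m points (within tolerance r * std) also match at m+1."""
    x = np.asarray(x, float)
    tol = r * x.std()
    def count_matches(mm):
        # embed the series into overlapping templates of length mm
        emb = np.lib.stride_tricks.sliding_window_view(x, mm)
        # Chebyshev distance between all template pairs (i < j, no self-matches)
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
        iu = np.triu_indices(len(emb), k=1)
        return np.sum(d[iu] <= tol)
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 3, 4)):
    """Coarse-grain the signal at each scale (non-overlapping means),
    then compute sample entropy of each coarse-grained series."""
    out = []
    for s in scales:
        n = len(x) // s
        coarse = np.asarray(x[: n * s], float).reshape(n, s).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out

rng = np.random.default_rng(1)
white = rng.standard_normal(600)                 # irregular signal
sine = np.sin(np.linspace(0, 24 * np.pi, 600))   # regular signal
print(multiscale_entropy(white)[0], multiscale_entropy(sine)[0])
```

    An irregular signal yields higher sample entropy than a regular one, which is the sense in which the record's "higher entropy during perceptual suppression" indicates increased uncertainty of neural responses.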

  15. Perceptual weighting of individual and concurrent cues for sentence intelligibility: Frequency, envelope, and fine structure

    PubMed Central

    Fogerty, Daniel

    2011-01-01

    The speech signal may be divided into frequency bands, each containing temporal properties of the envelope and fine structure. For maximal speech understanding, listeners must allocate their perceptual resources to the most informative acoustic properties. Understanding this perceptual weighting is essential for the design of assistive listening devices that need to preserve these important speech cues. This study measured the perceptual weighting of young normal-hearing listeners for the envelope and fine structure in each of three frequency bands for sentence materials. Perceptual weights were obtained under two listening contexts: (1) when each acoustic property was presented individually and (2) when multiple acoustic properties were available concurrently. The processing method was designed to vary the availability of each acoustic property independently by adding noise at different levels. Perceptual weights were determined by correlating a listener’s performance with the availability of each acoustic property on a trial-by-trial basis. Results demonstrated that weights were (1) equal when acoustic properties were presented individually and (2) biased toward envelope and mid-frequency information when multiple properties were available. Results suggest a complex interaction between the available acoustic properties and the listening context in determining how best to allocate perceptual resources when listening to speech in noise. PMID:21361454
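
    The trial-by-trial weighting analysis described above, correlating the availability of each acoustic property with response accuracy, can be sketched as follows. The simulated listener, cue set, and effect sizes are invented for illustration and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 2000

# Per trial, each acoustic property is either available (1) or masked (0).
env_mid = rng.integers(0, 2, n_trials)    # mid-frequency envelope
fine_low = rng.integers(0, 2, n_trials)   # low-frequency fine structure

# Simulated listener: leans on the mid-frequency envelope more heavily.
p_correct = 0.3 + 0.4 * env_mid + 0.1 * fine_low
correct = rng.random(n_trials) < p_correct

def perceptual_weight(availability, correct):
    """Point-biserial correlation between cue availability and accuracy:
    larger values mean the listener's performance tracks that cue more."""
    return np.corrcoef(availability, correct.astype(float))[0, 1]

w_env = perceptual_weight(env_mid, correct)
w_fine = perceptual_weight(fine_low, correct)
print(f"envelope weight {w_env:.2f} vs fine-structure weight {w_fine:.2f}")
```

    Comparing such weights across individual-cue and concurrent-cue listening contexts is what reveals the bias toward envelope and mid-frequency information reported above.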

  16. Top-down modulation of visual processing and knowledge after 250 ms supports object constancy of category decisions

    PubMed Central

    Schendan, Haline E.; Ganis, Giorgio

    2015-01-01

    People categorize objects more slowly when visual input is highly impoverished instead of optimal. While bottom-up models may explain a decision with optimal input, perceptual hypothesis testing (PHT) theories implicate top-down processes with impoverished input. Brain mechanisms and the time course of PHT are largely unknown. This event-related potential study used a neuroimaging paradigm that implicated prefrontal cortex in top-down modulation of occipitotemporal cortex. Subjects categorized more impoverished and less impoverished real and pseudo objects. PHT theories predict larger impoverishment effects for real than pseudo objects because top-down processes modulate knowledge only for real objects, but different PHT variants predict different timing. Consistent with parietal-prefrontal PHT variants, around 250 ms, the earliest interaction between impoverishment and object type (real versus pseudo) started on an N3 complex, which reflects interactive cortical activity for object cognition. N3 impoverishment effects localized to both prefrontal and occipitotemporal cortex for real objects only. The N3 also showed knowledge effects by 230 ms that localized to occipitotemporal cortex. Later effects reflected (a) word meaning in temporal cortex during the N400, (b) internal evaluation of prior decision and memory processes and secondary higher-order memory involving anterotemporal parts of a default mode network during posterior positivity (P600), and (c) response-related activity in posterior cingulate during an anterior slow wave (SW) after 700 ms. Finally, response activity in supplementary motor area during a posterior SW after 900 ms showed impoverishment effects that correlated with RTs. Convergent evidence from studies of vision, memory, and mental imagery, which reflects purely top-down inputs, indicates that the N3 reflects the critical top-down processes of PHT. A hybrid multiple-state interactive, PHT and decision theory best explains the visual constancy of object cognition. PMID:26441701

  17. Incorporating Auditory Models in Speech/Audio Applications

    NASA Astrophysics Data System (ADS)

    Krishnamoorthi, Harish

    2011-12-01

    Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly/indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high-complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, and 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model to evaluate different candidate solutions. In this dissertation, frequency-pruning and detector-pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to an 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory pattern combining technique together with a look-up table that stores representative auditory patterns.
    The second problem concerns obtaining an estimate of the auditory representation that minimizes a perceptual objective function and transforming the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors when minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages, which ensures a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.

  18. Feature maps driven no-reference image quality prediction of authentically distorted images

    NASA Astrophysics Data System (ADS)

    Ghadiyaram, Deepti; Bovik, Alan C.

    2015-03-01

    Current blind image quality prediction models rely on benchmark databases comprised of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.

  19. Skilled deaf readers have an enhanced perceptual span in reading.

    PubMed

    Bélanger, Nathalie N; Slattery, Timothy J; Mayberry, Rachel I; Rayner, Keith

    2012-07-01

    Recent evidence suggests that, compared with hearing people, deaf people have enhanced visual attention to simple stimuli viewed in the parafovea and periphery. Although a large part of reading involves processing the fixated words in foveal vision, readers also utilize information in parafoveal vision to preprocess upcoming words and decide where to look next. In the study reported here, we investigated whether auditory deprivation affects low-level visual processing during reading by comparing the perceptual span of deaf signers who were skilled and less-skilled readers with the perceptual span of skilled hearing readers. Compared with hearing readers, the two groups of deaf readers had a larger perceptual span than would be expected given their reading ability. These results provide the first evidence that deaf readers' enhanced attentional allocation to the parafovea is used during complex cognitive tasks, such as reading.

  20. Assessing cognitive dysfunction in Parkinson's disease: An online tool to detect visuo‐perceptual deficits

    PubMed Central

    Schwarzkopf, Dietrich S.; Bahrami, Bahador; Fleming, Stephen M.; Jackson, Ben M.; Goch, Tristam J. C.; Saygin, Ayse P.; Miller, Luke E.; Pappa, Katerina; Pavisic, Ivanna; Schade, Rachel N.; Noyce, Alastair J.; Crutch, Sebastian J.; O'Keeffe, Aidan G.; Schrag, Anette E.; Morris, Huw R.

    2018-01-01

    Background: People with Parkinson's disease (PD) who develop visuo‐perceptual deficits are at higher risk of dementia, but we lack tests that detect subtle visuo‐perceptual deficits and can be performed by untrained personnel. Hallucinations are associated with cognitive impairment and typically involve perception of complex objects. Changes in object perception may therefore be a sensitive marker of visuo‐perceptual deficits in PD. Objective: We developed an online platform to test visuo‐perceptual function. We hypothesised that (1) visuo‐perceptual deficits in PD could be detected using online tests, (2) object perception would be preferentially affected, and (3) these deficits would be caused by changes in perception rather than response bias. Methods: We assessed 91 people with PD and 275 controls. Performance was compared using classical frequentist statistics. We then fitted a hierarchical Bayesian signal detection theory model to a subset of tasks. Results: People with PD were worse than controls at object recognition, showing no deficits in other visuo‐perceptual tests. Specifically, they were worse at identifying skewed images (P < .0001); at detecting hidden objects (P = .0039); at identifying objects in peripheral vision (P < .0001); and at detecting biological motion (P = .0065). In contrast, people with PD were not worse at mental rotation or subjective size perception. Using signal detection modelling, we found this effect was driven by change in perceptual sensitivity rather than response bias. Conclusions: Online tests can detect visuo‐perceptual deficits in people with PD, with object recognition particularly affected. Ultimately, visuo‐perceptual tests may be developed to identify at‐risk patients for clinical trials to slow PD dementia. © 2018 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society. PMID:29473691
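
    The signal detection modelling mentioned above separates perceptual sensitivity from response bias. A minimal, non-hierarchical sketch follows (equal-variance Gaussian SDT with a log-linear correction for extreme rates); the counts below are hypothetical and the study itself fits a hierarchical Bayesian model, not this one.

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian SDT: sensitivity d' and criterion c from a
    2x2 count table, with a log-linear (+0.5) correction for extreme rates."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(h) - z(f), -0.5 * (z(h) + z(f))

# Hypothetical counts: a control observer versus an observer with reduced
# sensitivity but comparable response bias.
d_ctrl, c_ctrl = dprime_criterion(45, 5, 10, 40)
d_pd, c_pd = dprime_criterion(35, 15, 20, 30)
print(f"control d'={d_ctrl:.2f} c={c_ctrl:.2f};  reduced d'={d_pd:.2f} c={c_pd:.2f}")
```

    A lower d' with an unchanged c is the signature the record describes: worse object perception driven by a change in perceptual sensitivity rather than in response bias.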

  1. A Structural Theory of Pitch

    PubMed Central

    Laudanski, Jonathan; Zheng, Yi

    2014-01-01

    Musical notes can be ordered from low to high along a perceptual dimension called “pitch”. A characteristic property of these sounds is their periodic waveform, and periodicity generally correlates with pitch. Thus, pitch is often described as the perceptual correlate of the periodicity of the sound’s waveform. However, the existence and salience of pitch also depend in a complex way on other factors, in particular harmonic content. For example, periodic sounds made of high-order harmonics tend to have a weaker pitch than those made of low-order harmonics. Here we examine the theoretical proposition that pitch is the perceptual correlate of the regularity structure of the vibration pattern of the basilar membrane, across place and time, a generalization of the traditional view on pitch. While this proposition also attributes pitch to periodic sounds, we show that it predicts differences between resolved and unresolved harmonic complexes and a complex domain of existence of pitch, in agreement with psychophysical experiments. We also present a possible neural mechanism for pitch estimation based on coincidence detection, which does not require long delays, in contrast with standard temporal models of pitch. PMID:26464959

  2. Long-Term Memories Bias Sensitivity and Target Selection in Complex Scenes

    PubMed Central

    Patai, Eva Zita; Doallo, Sonia; Nobre, Anna Christina

    2014-01-01

    In everyday situations we often rely on our memories to find what we are looking for in our cluttered environment. Recently, we developed a new experimental paradigm to investigate how long-term memory (LTM) can guide attention, and showed how the pre-exposure to a complex scene in which a target location had been learned facilitated the detection of the transient appearance of the target at the remembered location (Summerfield, Lepsien, Gitelman, Mesulam, & Nobre, 2006; Summerfield, Rao, Garside, & Nobre, 2011). The present study extends these findings by investigating whether and how LTM can enhance perceptual sensitivity to identify targets occurring within their complex scene context. Behavioral measures showed superior perceptual sensitivity (d′) for targets located in remembered spatial contexts. We used the N2pc event-related potential to test whether LTM modulated the process of selecting the target from its scene context. Surprisingly, in contrast to effects of visual spatial cues or implicit contextual cueing, LTM for target locations significantly attenuated the N2pc potential. We propose that the mechanism by which these explicitly available LTMs facilitate perceptual identification of targets may differ from mechanisms triggered by other types of top-down sources of information. PMID:23016670

  3. Importance of perceptual representation in the visual control of action

    NASA Astrophysics Data System (ADS)

    Loomis, Jack M.; Beall, Andrew C.; Kelly, Jonathan W.; Macuga, Kristen L.

    2005-03-01

    In recent years, many experiments have demonstrated that optic flow is sufficient for visually controlled action, with the suggestion that perceptual representations of 3-D space are superfluous. In contrast, recent research in our lab indicates that some visually controlled actions, including some thought to be based on optic flow, are indeed mediated by perceptual representations. For example, we have demonstrated that people are able to perform complex spatial behaviors, like walking, driving, and object interception, in virtual environments which are rendered visible solely by cyclopean stimulation (random-dot cinematograms). In such situations, the absence of any retinal optic flow that is correlated with the objects and surfaces within the virtual environment means that people are using stereo-based perceptual representations to perform the behavior. The fact that people can perform such behaviors without training suggests that the perceptual representations are likely the same as those used when retinal optic flow is present. Other research indicates that optic flow, whether retinal or a more abstract property of the perceptual representation, is not the basis for postural control, because postural instability is related to perceived relative motion between self and the visual surroundings rather than to optic flow, even in the abstract sense.

  4. ViA: a perceptual visualization assistant

    NASA Astrophysics Data System (ADS)

    Healey, Chris G.; St. Amant, Robert; Elhaddad, Mahmoud S.

    2000-05-01

    This paper describes an automated visualization assistant called ViA. ViA is designed to help users construct perceptually optimal visualizations to represent, explore, and analyze large, complex, multidimensional datasets. We have approached this problem by studying what is known about the control of human visual attention. By harnessing the low-level human visual system, we can support our dual goals of rapid and accurate visualization. Perceptual guidelines that we have built using psychophysical experiments form the basis for ViA. ViA uses modified mixed-initiative planning algorithms from artificial intelligence to search for perceptually optimal data-attribute-to-visual-feature mappings. Our perceptual guidelines are integrated into evaluation engines that provide evaluation weights for a given data-feature mapping, and hints on how that mapping might be improved. ViA begins by asking users a set of simple questions about their dataset and the analysis tasks they want to perform. Answers to these questions are used in combination with the evaluation engines to identify and intelligently pursue promising data-feature mappings. The result is an automatically generated set of mappings that are perceptually salient, but that also respect the context of the dataset and users' preferences about how they want to visualize their data.

  5. The dissociation of perception and cognition in children with early brain damage.

    PubMed

    Stiers, Peter; Vandenbussche, Erik

    2004-03-01

    Reduced non-verbal compared to verbal intelligence is used in many outcome studies of perinatal complications as an indication of visual perceptual impairment. To investigate whether this is justified, we re-examined data sets from two previous studies, both of which used the visual perceptual battery L94. The first study comprised 47 children at risk for cerebral visual impairment due to prematurity or birth asphyxia, who had been administered the McCarthy Scales of Children's Abilities. The second study evaluated visual perceptual abilities in 82 children with a physical disability. These children's intellectual ability had been assessed with the Wechsler Intelligence Scale for Children-Revised and/or the Wechsler Pre-school and Primary Scale of Intelligence-Revised. No significant association was found between visual perceptual impairment and (1) reduced non-verbal relative to verbal intelligence; (2) increased non-verbal subtest scatter; or (3) non-verbal subtest profile deviation, for any of the intelligence scales. This result suggests that non-verbal intelligence subtests assess a complex of cognitive skills that are distinct from visual perceptual abilities, and that this assessment is not hampered by deficits in perceptual abilities as manifested in these children.

  6. Brief Report: Simulations Suggest Heterogeneous Category Learning and Generalization in Children with Autism Is a Result of Idiosyncratic Perceptual Transformations

    ERIC Educational Resources Information Center

    Mercado, Eduardo, III; Church, Barbara A.

    2016-01-01

    Children with autism spectrum disorder (ASD) sometimes have difficulties learning categories. Past computational work suggests that such deficits may result from atypical representations in cortical maps. Here we use neural networks to show that idiosyncratic transformations of inputs can result in the formation of feature maps that impair…

  7. Perceptual Decoding Processes for Language in a Visual Mode and for Language in an Auditory Mode.

    ERIC Educational Resources Information Center

    Myerson, Rosemarie Farkas

    The purpose of this paper is to gain insight into the nature of the reading process through an understanding of the general nature of sensory processing mechanisms which reorganize and restructure input signals for central recognition, and an understanding of how the grammar of the language functions in defining the set of possible sentences in…

  8. Complex dynamics of semantic memory access in reading.

    PubMed

    Baggio, Giosué; Fonseca, André

    2012-02-07

    Understanding a word in context relies on a cascade of perceptual and conceptual processes, starting with modality-specific input decoding, and leading to the unification of the word's meaning into a discourse model. One critical cognitive event, turning a sensory stimulus into a meaningful linguistic sign, is the access of a semantic representation from memory. Little is known about the changes that activating a word's meaning brings about in cortical dynamics. We recorded the electroencephalogram (EEG) while participants read sentences that could contain a contextually unexpected word, such as 'cold' in 'In July it is very cold outside'. We reconstructed trajectories in phase space from single-trial EEG time series, and we applied three nonlinear measures of predictability and complexity to each side of the semantic access boundary, estimated as the onset time of the N400 effect evoked by critical words. Relative to controls, unexpected words were associated with larger prediction errors preceding the onset of the N400. Accessing the meaning of such words produced a phase transition to lower entropy states, in which cortical processing becomes more predictable and more regular. Our study sheds new light on the dynamics of information flow through interfaces between sensory and memory systems during language processing.

  9. Upgrading Gestalt psychology with variational neuroethology: The case of perceptual pleasures. Comment on "Answering Schrödinger's question: A free-energy formulation" by M.J. Desormeau Ramstead et al.

    NASA Astrophysics Data System (ADS)

    Van de Cruys, Sander

    2018-03-01

    Ramstead et al. provide a promising, encompassing framework for biology and psychology, based on the free energy principle (FEP) and Tinbergen's four questions [16]. Because their exposition remains at a fairly high level of abstraction, here we attempt to illustrate the potential of the framework through a concrete, classic case in psychology, namely that of our preference or liking of perceptual inputs. Two dominant but different views can be found in the literature. One harks back to the great Gestalt psychologists of the last century and stresses the salient and positive qualities of the 'goodness of form' or Prägnanz, i.e., orderly, balanced and coherent configuration [20,21]. Inputs that allow the formation of those "good Gestalts" would be most attractive. Later on, other authors added a role for learning (mere exposure) and argued that we prefer very familiar, regular or prototypical stimuli (e.g., [2]). However, these stimuli are quickly considered boring [3] and more importantly, highly attractive stimuli rarely conform to the principle (cf. art), partly discrediting the view.

  10. Visual-perceptual-kinesthetic inputs on influencing writing performances in children with handwriting difficulties.

    PubMed

    Tse, Linda F L; Thanapalan, Kannan C; Chan, Chetwyn C H

    2014-02-01

    This study investigated the role of visual-perceptual input in writing Chinese characters among senior school-aged children who had handwriting difficulties (CHD). The participants were 27 CHD (9-11 years old) and 61 normally developing controls. There were three writing conditions: copying, and dictation with or without visual feedback. The motor-free subtests of the Developmental Test of Visual Perception (DTVP-2) were administered. The CHD group showed significantly slower mean speeds of character production and less legibility of produced characters than the control group in all writing conditions (ps<0.001). There were significant deteriorations in legibility from copying to dictation without visual feedback. Nevertheless, the Group by Condition interaction effect was not statistically significant. Only the Position in Space subtest of the DTVP-2 was significantly correlated with legibility among CHD (r=-0.62, p=0.001). Poor legibility seems to be related to the less-intact spatial representation of the characters in working memory, which can be rectified by viewing the characters during writing. Visual feedback regarding one's own actions in writing can also improve legibility of characters among these children. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Rapid recalibration of speech perception after experiencing the McGurk illusion.

    PubMed

    Lüttke, Claudia S; Pérez-Bellido, Alexis; de Lange, Floris P

    2018-03-01

    The human brain can quickly adapt to changes in the environment. One example is phonetic recalibration: a speech sound is interpreted differently depending on the visual speech and this interpretation persists in the absence of visual information. Here, we examined the mechanisms of phonetic recalibration. Participants categorized the auditory syllables /aba/ and /ada/, which were sometimes preceded by the so-called McGurk stimuli (in which an /aba/ sound, due to visual /aga/ input, is often perceived as 'ada'). We found that only one trial of exposure to the McGurk illusion was sufficient to induce a recalibration effect, i.e. an auditory /aba/ stimulus was subsequently more often perceived as 'ada'. Furthermore, phonetic recalibration took place only when auditory and visual inputs were integrated to 'ada' (McGurk illusion). Moreover, this recalibration depended on the sensory similarity between the preceding and current auditory stimulus. Finally, signal detection theoretical analysis showed that McGurk-induced phonetic recalibration resulted in both a criterion shift towards /ada/ and a reduced sensitivity to distinguish between /aba/ and /ada/ sounds. The current study shows that phonetic recalibration is dependent on the perceptual integration of audiovisual information and leads to a perceptual shift in phoneme categorization.

  12. Serial dependence promotes object stability during occlusion

    PubMed Central

    Liberman, Alina; Zhang, Kathy; Whitney, David

    2016-01-01

    Object identities somehow appear stable and continuous over time despite eye movements, disruptions in visibility, and constantly changing visual input. Recent results have demonstrated that the perception of orientation, numerosity, and facial identity is systematically biased (i.e., pulled) toward visual input from the recent past. The spatial region over which current orientations or face identities are pulled by previous orientations or identities, respectively, is known as the continuity field, which is temporally tuned over the past several seconds (Fischer & Whitney, 2014). This perceptual pull could contribute to the visual stability of objects over short time periods, but does it also address how perceptual stability occurs during visual discontinuities? Here, we tested whether the continuity field helps maintain perceived object identity during occlusion. Specifically, we found that the perception of an oriented Gabor that emerged from behind an occluder was significantly pulled toward the random (and unrelated) orientation of the Gabor that was seen entering the occluder. Importantly, this serial dependence was stronger for predictable, continuously moving trajectories, compared to unpredictable ones or static displacements. This result suggests that our visual system takes advantage of expectations about a stable world, helping to maintain perceived object continuity despite interrupted visibility. PMID:28006066

  13. Modeling trial by trial and block feedback in perceptual learning

    PubMed Central

    Liu, Jiajuan; Dosher, Barbara; Lu, Zhong-Lin

    2014-01-01

    Feedback has been shown to play a complex role in visual perceptual learning. It is necessary for performance improvement in some conditions while not others. Different forms of feedback, such as trial-by-trial feedback or block feedback, may both facilitate learning, but with different mechanisms. False feedback can abolish learning. We account for all these results with the Augmented Hebbian Reweight Model (AHRM). Specifically, three major factors in the model advance performance improvement: the external trial-by-trial feedback when available, the self-generated output as an internal feedback when no external feedback is available, and the adaptive criterion control based on the block feedback. Through simulating a comprehensive feedback study (Herzog & Fahle 1997, Vision Research, 37 (15), 2133–2141), we show that the model predictions account for the pattern of learning in seven major feedback conditions. The AHRM can fully explain the complex empirical results on the role of feedback in visual perceptual learning. PMID:24423783

  14. Playing chess unconsciously.

    PubMed

    Kiesel, Andrea; Kunde, Wilfried; Pohl, Carsten; Berner, Michael P; Hoffmann, Joachim

    2009-01-01

Expertise in a certain stimulus domain enhances perceptual capabilities. In the present article, the authors investigate whether expertise improves perceptual processing to an extent that allows complex visual stimuli to bias behavior unconsciously. Expert chess players judged whether a target chess configuration entailed a checking configuration. These displays were preceded by masked prime configurations that represented either a checking or a nonchecking configuration. Chess experts, but not novice chess players, revealed a subliminal response priming effect, that is, faster responding when prime and target displays were congruent (both checking or both nonchecking) rather than incongruent. Priming generalized to displays that were not used as targets, ruling out simple repetition priming effects. Thus, chess experts were able to judge unconsciously presented chess configurations as checking or nonchecking. A second experiment demonstrated that experts' priming does not occur for simpler but uncommon chess configurations. The authors conclude that long-term practice prompts the acquisition of visual memories of chess configurations with integrated form-location conjunctions. These perceptual chunks enable complex visual processing outside of conscious awareness.

  15. Synthetic vision display evaluation studies

    NASA Technical Reports Server (NTRS)

    Regal, David M.; Whittington, David H.

    1994-01-01

The goal of this research was to help us understand the display requirements for a synthetic vision system for the High Speed Civil Transport (HSCT). Four experiments were conducted to examine the effects of different levels of perceptual cue complexity in displays used by pilots in a flare and landing task. Increased levels of texture mapping of terrain and runway produced mixed results, including harder but shorter landings and a lower flare initiation altitude. Under higher workload conditions, increased texture resulted in an improvement in performance. An increase in familiar size cues did not result in improved performance. Only a small difference was found between displays using two patterns of high resolution texture mapping. The effects of increased perceptual cue complexity on performance were not as strong as would be predicted from the pilots' subjective reports or from the related literature. A description of the role of a synthetic vision system in the High Speed Civil Transport is provided, along with a literature review covering applied research related to perceptual cue usage in aircraft displays.

  16. Revisiting the empirical case against perceptual modularity

    PubMed Central

    Masrour, Farid; Nirshberg, Gregory; Schon, Michael; Leardi, Jason; Barrett, Emily

    2015-01-01

Some theorists hold that the human perceptual system has a component that receives input only from units lower in the perceptual hierarchy. This thesis, which we shall refer to here as the encapsulation thesis, has been at the center of a continuing debate for the past few decades. Those who deny the encapsulation thesis often rely on the large body of psychological findings that allegedly suggest that perception is influenced by factors such as the beliefs, desires, goals, and expectations of the perceiver. Proponents of the encapsulation thesis, however, often argue that, when correctly interpreted, these psychological findings are compatible with the thesis. In our view, the debate over the significance and the correct interpretation of these psychological findings has reached an impasse. We hold that this impasse is due to the methodological limitations of psychophysical experiments, and that it is very unlikely that such experiments, on their own, could yield results that would settle the debate. After defending this claim, we argue that integrating data from cognitive neuroscience resolves the debate in favor of those who deny the encapsulation thesis. PMID:26583001

  17. Perceptual Learning and Feature-Based Approaches to Concepts – A Critical Discussion

    PubMed Central

    Stöckle-Schobel, Richard

    2012-01-01

A central challenge for any theory of concept learning comes from Fodor’s argument against the learning of concepts, which lies at the basis of contemporary computationalist accounts of the mind. Robert Goldstone and his colleagues propose a theory of perceptual learning that attempts to overcome Fodor’s challenge. Its main component is the addition of a cognitive device at the interface of perception and conception, which slowly builds “cognitive symbols” out of perceptual stimuli. Its two main mechanisms of concept creation are unitization and differentiation. In this paper, I present and examine their theory, and show that two problems prevent this reply from being a successful answer to Fodor’s challenge. To amend the theory, I argue that one would need to say more about the input systems to unitization and differentiation, and be clearer about the representational format that they are able to operate upon. Until these issues have been addressed, the proposal does not deploy its full potential to threaten a Fodorian position. PMID:22479256

  18. Characterising switching behaviour in perceptual multi-stability.

    PubMed

    Denham, Susan; Bendixen, Alexandra; Mill, Robert; Tóth, Dénes; Wennekers, Thomas; Coath, Martin; Bőhm, Tamás; Szalardy, Orsolya; Winkler, István

    2012-09-15

When people experience an unchanging sensory input for a long period of time, their perception tends to switch stochastically and unavoidably between alternative interpretations of the sensation, a phenomenon known as perceptual bi-stability or multi-stability. The huge variability in the experimental data obtained in such paradigms makes it difficult to distinguish typical patterns of behaviour, or to identify differences between switching patterns. Here we propose a new approach to characterising switching behaviour based upon the extraction of transition matrices from the data, which provide a compact representation that is well understood mathematically. On the basis of this representation we can characterise patterns of perceptual switching, visualise and simulate typical switching patterns, and calculate the likelihood of observing a particular switching pattern. The proposed method can support comparisons between different observers, experimental conditions and even experiments. We demonstrate the insights offered by this approach using examples from our experiments investigating multi-stability in auditory streaming. However, the methodology is generic and thus widely applicable in studies of multi-stability in any domain. Copyright © 2012 Elsevier B.V. All rights reserved.
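The transition-matrix representation described here is straightforward to extract from a sequence of reported percepts. A minimal sketch, assuming percepts are coded as integer state labels (the sequence below is made up):

```python
import numpy as np

def transition_matrix(states, n_states):
    """Row-normalized first-order transition matrix: entry [i, j] estimates
    the probability of switching from percept i to percept j."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# hypothetical sequence of three perceptual states reported by one listener
P = transition_matrix([0, 1, 0, 2, 0, 1, 1, 0], n_states=3)
```

The likelihood of any observed switching pattern is then the product of the matrix entries along it, which supports the kind of comparison across observers and conditions the abstract proposes.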

  19. Exploring the potential of analysing visual search behaviour data using FROC (free-response receiver operating characteristic) method: an initial study

    NASA Astrophysics Data System (ADS)

    Dong, Leng; Chen, Yan; Dias, Sarah; Stone, William; Dias, Joseph; Rout, John; Gale, Alastair G.

    2017-03-01

Visual search techniques and FROC analysis have been widely used in radiology to understand medical image perceptual behaviour and diagnostic performance. The potential of exploiting the advantages of both methodologies is of great interest to medical researchers. In this study, eye tracking data from eight dental practitioners were investigated; the visual search measures and their analyses are considered here. Each participant interpreted 20 dental radiographs chosen by an expert dental radiologist. Various eye movement measurements were obtained based on image area of interest (AOI) information. FROC analysis was then carried out using these eye movement measurements as a direct input source, and the performance of FROC methods using different input parameters was tested. The results showed significant differences in FROC measures, based on eye movement data, between groups with different experience levels: the area under the curve (AUC) score showed higher values for the experienced group on the fixation and dwell-time measurements. Positive correlations were also found between AUC scores from the eye-movement-based FROC and the rating-based FROC. FROC analysis using eye movement measurements as input variables can thus serve as a performance indicator for assessing medical image interpretation and training procedures. These analyses point to new ways of combining eye movement data and FROC methods to provide an alternative dimension for assessing performance and visual search behaviour in medical imaging perceptual tasks.
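At its simplest, an AUC of the kind compared here can be estimated nonparametrically with the Mann-Whitney statistic: the probability that a score from a signal-present area exceeds a score from a signal-absent area. This sketch ignores the localization component of full FROC scoring, and the dwell-time scores are hypothetical:

```python
def auc(pos_scores, neg_scores):
    """Mann-Whitney estimate of the area under the ROC curve:
    fraction of (positive, negative) pairs the positive score wins,
    counting ties as half a win."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# hypothetical dwell times (s) on abnormal vs normal areas of interest
score = auc([4.1, 3.8, 5.0], [2.2, 3.8, 1.9])
```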

  20. Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity

    PubMed Central

    Gibney, Kyla D.; Aligbe, Enimielen; Eggleston, Brady A.; Nunes, Sarah R.; Kerkhoff, Willa G.; Dean, Cassandra L.; Kwakye, Leslie D.

    2017-01-01

The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors present), low load (multisensory task while detecting the presence of a yellow letter in the visual distractors), and high load (multisensory task while detecting the presence of a number in the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, the increase in multisensory response times violated the race model for no and low perceptual load conditions only. Additionally, a geometric measure of Miller’s inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration for the no load condition: no changes in integration for the McGurk task with increasing load but increases in integration for the detection task. The results of this study indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information. PMID:28163675
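The race-model test applied here can be sketched with empirical CDFs: under Miller's inequality, the multisensory CDF should never exceed the sum of the unisensory CDFs, so positive excess indicates integration. A minimal Python sketch with hypothetical reaction times, not the study's data:

```python
import numpy as np

def race_model_violation(rt_av, rt_a, rt_v, ts):
    """Positive values mark time points where the audiovisual (AV) CDF
    exceeds the race-model bound min(F_A(t) + F_V(t), 1)."""
    cdf = lambda rts, t: np.mean(np.asarray(rts)[:, None] <= t, axis=0)
    bound = np.minimum(cdf(rt_a, ts) + cdf(rt_v, ts), 1.0)
    return np.maximum(cdf(rt_av, ts) - bound, 0.0)

# hypothetical reaction times in ms
ts = np.arange(200, 501, 25)
violation = race_model_violation([230, 250, 260, 280],   # AV trials
                                 [300, 320, 340, 360],   # auditory-only
                                 [310, 330, 350, 370],   # visual-only
                                 ts)
```

The area under the positive part of this curve is one geometric summary of integration strength, in the spirit of the measure the abstract describes.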

  2. Emotional Picture and Word Processing: An fMRI Study on Effects of Stimulus Complexity

    PubMed Central

    Schlochtermeier, Lorna H.; Kuchinke, Lars; Pehrs, Corinna; Urton, Karolina; Kappelhoff, Hermann; Jacobs, Arthur M.

    2013-01-01

    Neuroscientific investigations regarding aspects of emotional experiences usually focus on one stimulus modality (e.g., pictorial or verbal). Similarities and differences in the processing between the different modalities have rarely been studied directly. The comparison of verbal and pictorial emotional stimuli often reveals a processing advantage of emotional pictures in terms of larger or more pronounced emotion effects evoked by pictorial stimuli. In this study, we examined whether this picture advantage refers to general processing differences or whether it might partly be attributed to differences in visual complexity between pictures and words. We first developed a new stimulus database comprising valence and arousal ratings for more than 200 concrete objects representable in different modalities including different levels of complexity: words, phrases, pictograms, and photographs. Using fMRI we then studied the neural correlates of the processing of these emotional stimuli in a valence judgment task, in which the stimulus material was controlled for differences in emotional arousal. No superiority for the pictorial stimuli was found in terms of emotional information processing with differences between modalities being revealed mainly in perceptual processing regions. While visual complexity might partly account for previously found differences in emotional stimulus processing, the main existing processing differences are probably due to enhanced processing in modality specific perceptual regions. We would suggest that both pictures and words elicit emotional responses with no general superiority for either stimulus modality, while emotional responses to pictures are modulated by perceptual stimulus features, such as picture complexity. PMID:23409009

  3. Communication Analysis of Information Complexes.

    ERIC Educational Resources Information Center

    Malik, M. F.

    Communication analysis is a tool for perceptual assessment of existing or projected information complexes, i.e., an established reality perceived by one or many humans. An information complex could be of a physical nature, such as a building, landscape, city street; or of a pure informational nature, such as a film, television program,…

  4. Visual Complexity: A Review

    ERIC Educational Resources Information Center

    Donderi, Don C.

    2006-01-01

    The idea of visual complexity, the history of its measurement, and its implications for behavior are reviewed, starting with structuralism and Gestalt psychology at the beginning of the 20th century and ending with visual complexity theory, perceptual learning theory, and neural circuit theory at the beginning of the 21st. Evidence is drawn from…

  5. Lexical Processing in Toddlers with ASD: Does Weak Central Coherence Play a Role?

    PubMed Central

    Weismer, Susan Ellis; Haebig, Eileen; Edwards, Jan; Saffran, Jenny; Venker, Courtney E.

    2016-01-01

    This study investigated whether vocabulary delays in toddlers with autism spectrum disorders (ASD) can be explained by a cognitive style that prioritizes processing of detailed, local features of input over global contextual integration – as claimed by the weak central coherence (WCC) theory. Thirty toddlers with ASD and 30 younger, cognition-matched typical controls participated in a looking-while-listening task that assessed whether perceptual or semantic similarities among named images disrupted word recognition relative to a neutral condition. Overlap of perceptual features invited local processing whereas semantic overlap invited global processing. With the possible exception of a subset of toddlers who had very low vocabulary skills, these results provide no evidence that WCC is characteristic of lexical processing in toddlers with ASD. PMID:27696177

  6. Object perception is selectively slowed by a visually similar working memory load.

    PubMed

    Robinson, Alan; Manzi, Alberto; Triesch, Jochen

    2008-12-22

    The capacity of visual working memory has been extensively characterized, but little work has investigated how occupying visual memory influences other aspects of cognition and perception. Here we show a novel effect: maintaining an item in visual working memory slows processing of similar visual stimuli during the maintenance period. Subjects judged the gender of computer rendered faces or the naturalness of body postures while maintaining different visual memory loads. We found that when stimuli of the same class (faces or bodies) were maintained in memory, perceptual judgments were slowed. Interestingly, this is the opposite of what would be predicted from traditional priming. Our results suggest there is interference between visual working memory and perception, caused by visual similarity between new perceptual input and items already encoded in memory.

  7. Fornix and medial temporal lobe lesions lead to comparable deficits in complex visual perception.

    PubMed

    Lech, Robert K; Koch, Benno; Schwarz, Michael; Suchan, Boris

    2016-05-04

    Recent research dealing with the structures of the medial temporal lobe (MTL) has shifted away from exclusively investigating memory-related processes and has repeatedly incorporated the investigation of complex visual perception. Several studies have demonstrated that higher level visual tasks can recruit structures like the hippocampus and perirhinal cortex in order to successfully perform complex visual discriminations, leading to a perceptual-mnemonic or representational view of the medial temporal lobe. The current study employed a complex visual discrimination paradigm in two patients suffering from brain lesions with differing locations and origin. Both patients, one with extensive medial temporal lobe lesions (VG) and one with a small lesion of the anterior fornix (HJK), were impaired in complex discriminations while showing otherwise mostly intact cognitive functions. The current data confirmed previous results while also extending the perceptual-mnemonic theory of the MTL to the main output structure of the hippocampus, the fornix. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Temporally selective attention modulates early perceptual processing: event-related potential evidence.

    PubMed

    Sanders, Lisa D; Astheimer, Lori B

    2008-05-01

    Some of the most important information we encounter changes so rapidly that our perceptual systems cannot process all of it in detail. Spatially selective attention is critical for perception when more information than can be processed in detail is presented simultaneously at distinct locations. When presented with complex, rapidly changing information, listeners may need to selectively attend to specific times rather than to locations. We present evidence that listeners can direct selective attention to time points that differ by as little as 500 msec, and that doing so improves target detection, affects baseline neural activity preceding stimulus presentation, and modulates auditory evoked potentials at a perceptually early stage. These data demonstrate that attentional modulation of early perceptual processing is temporally precise and that listeners can flexibly allocate temporally selective attention over short intervals, making it a viable mechanism for preferentially processing the most relevant segments in rapidly changing streams.

  9. Uncovering beat deafness: detecting rhythm disorders with synchronized finger tapping and perceptual timing tasks.

    PubMed

    Dalla Bella, Simone; Sowiński, Jakub

    2015-03-16

    A set of behavioral tasks for assessing perceptual and sensorimotor timing abilities in the general population (i.e., non-musicians) is presented here with the goal of uncovering rhythm disorders, such as beat deafness. Beat deafness is characterized by poor performance in perceiving durations in auditory rhythmic patterns or poor synchronization of movement with auditory rhythms (e.g., with musical beats). These tasks include the synchronization of finger tapping to the beat of simple and complex auditory stimuli and the detection of rhythmic irregularities (anisochrony detection task) embedded in the same stimuli. These tests, which are easy to administer, include an assessment of both perceptual and sensorimotor timing abilities under different conditions (e.g., beat rates and types of auditory material) and are based on the same auditory stimuli, ranging from a simple metronome to a complex musical excerpt. The analysis of synchronized tapping data is performed with circular statistics, which provide reliable measures of synchronization accuracy (e.g., the difference between the timing of the taps and the timing of the pacing stimuli) and consistency. Circular statistics on tapping data are particularly well-suited for detecting individual differences in the general population. Synchronized tapping and anisochrony detection are sensitive measures for identifying profiles of rhythm disorders and have been used with success to uncover cases of poor synchronization with spared perceptual timing. This systematic assessment of perceptual and sensorimotor timing can be extended to populations of patients with brain damage, neurodegenerative diseases (e.g., Parkinson's disease), and developmental disorders (e.g., Attention Deficit Hyperactivity Disorder).
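The circular measures described (synchronization accuracy and consistency) can be sketched in a few lines: each tap's asynchrony is mapped to an angle on the beat cycle, and the mean resultant vector yields both quantities. The asynchronies below are hypothetical:

```python
import math

def circular_stats(asynchronies_ms, period_ms):
    """Circular mean asynchrony (accuracy) and resultant length R
    (consistency: 0 = random, 1 = perfectly consistent tapping)."""
    angles = [2 * math.pi * (a % period_ms) / period_ms for a in asynchronies_ms]
    x = sum(math.cos(t) for t in angles) / len(angles)
    y = sum(math.sin(t) for t in angles) / len(angles)
    consistency = math.hypot(x, y)
    mean_ms = math.atan2(y, x) / (2 * math.pi) * period_ms
    return mean_ms, consistency

# taps slightly anticipating a 600 ms (100 bpm) beat
mean_async, R = circular_stats([-20, -10, -30, -20], period_ms=600)
```

A negative circular mean indicates taps anticipating the beat, while a low R flags the inconsistent synchronization characteristic of beat deafness.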

  10. Young Skilled Deaf Readers Have an Enhanced Perceptual Span in Reading.

    PubMed

    Bélanger, Nathalie N; Lee, Michelle; Schotter, Elizabeth R

    2017-04-27

    Recently, Bélanger, Slattery, Mayberry and Rayner (2012) showed, using the moving window paradigm, that profoundly deaf adults have a wider perceptual span during reading relative to hearing adults matched on reading level. This difference might be related to the fact that deaf adults allocate more visual attention to simple stimuli in the parafovea (Bavelier, Dye & Hauser, 2006). Importantly, this reorganization of visual attention in deaf individuals is already manifesting in deaf children (Dye, Hauser & Bavelier, 2009). This leads to questions about the time course of the emergence of an enhanced perceptual span (which is under attentional control; Rayner, 2014; Miellet, O'Donnell, & Sereno, 2009) in young deaf readers. The present research addressed this question by comparing the perceptual spans of young deaf readers (age 7-15) and young hearing children (age 7-15). Young deaf readers, like deaf adults, were found to have a wider perceptual span relative to their hearing peers matched on reading level, suggesting that strong and early reorganization of visual attention in deaf individuals goes beyond the processing of simple visual stimuli and emerges into more cognitively complex tasks, such as reading.

  11. The neural response in short-term visual recognition memory for perceptual conjunctions.

    PubMed

    Elliott, R; Dolan, R J

    1998-01-01

    Short-term visual memory has been widely studied in humans and animals using delayed matching paradigms. The present study used positron emission tomography (PET) to determine the neural substrates of delayed matching to sample for complex abstract patterns over a 5-s delay. More specifically, the study assessed any differential neural response associated with remembering individual perceptual properties (color only and shape only) compared to conjunction between these properties. Significant activations associated with short-term visual memory (all memory conditions compared to perceptuomotor control) were observed in extrastriate cortex, medial and lateral parietal cortex, anterior cingulate, inferior frontal gyrus, and the thalamus. Significant deactivations were observed throughout the temporal cortex. Although the requirement to remember color compared to shape was associated with subtly different patterns of blood flow, the requirement to remember perceptual conjunctions between these features was not associated with additional specific activations. These data suggest that visual memory over a delay of the order of 5 s is mainly dependent on posterior perceptual regions of the cortex, with the exact regions depending on the perceptual aspect of the stimuli to be remembered.

  12. The perceptual chunking of speech: a demonstration using ERPs.

    PubMed

    Gilbert, Annie C; Boucher, Victor J; Jemel, Boutheina

    2015-04-07

In tasks involving the learning of verbal or non-verbal sequences, groupings are spontaneously produced. These groupings are generally marked by a lengthening of final elements and have been attributed to a domain-general perceptual chunking linked to working memory. Yet, no study has shown how this domain-general chunking applies to speech processing, partly because of the traditional view that chunking involves a conceptual recoding of meaningful verbal items like words (Miller, 1956). The present study provides a demonstration of the perceptual chunking of speech by way of two experiments using evoked Positive Shifts (PSs), which capture on-line neural responses to marks of various groups. We observed listeners' responses to utterances (Experiment 1) and meaningless series of syllables (Experiment 2) containing changing intonation and temporal marks, while also examining how these marks affect the recognition of heard items. The results show that, across conditions - and irrespective of the presence of meaningful items - PSs are specifically evoked by groups marked by lengthening. Moreover, this on-line detection of marks corresponds to characteristic grouping effects on listeners' immediate recognition of heard items, which suggests chunking effects linked to working memory. These findings bear out a perceptual chunking of speech input in terms of groups marked by lengthening, which constitute the defining marks of a domain-general chunking. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. The Neuroanatomical Correlates of Training-Related Perceptuo-Reflex Uncoupling in Dancers

    PubMed Central

    Nigmatullina, Yuliya; Hellyer, Peter J.; Nachev, Parashkev; Sharp, David J.; Seemungal, Barry M.

    2015-01-01

Sensory input evokes low-order reflexes and higher-order perceptual responses. Vestibular stimulation elicits the vestibulo-ocular reflex (VOR) and self-motion perception (e.g., vertigo), whose response durations are normally equal. Adaptation to repeated whole-body rotations, for example in ballet training, is known to reduce vestibular responses. We investigated the neuroanatomical correlates of vestibular perceptuo-reflex adaptation in ballet dancers and controls. Dancers' vestibular-reflex and perceptual responses to whole-body yaw-plane step rotations were (1) briefer and (2) uncorrelated (controls' reflex and perception were correlated). Voxel-based morphometry showed a selective gray matter (GM) reduction in dancers' vestibular cerebellum correlating with ballet experience. Dancers' vestibular cerebellar GM density reduction was related to shorter perceptual responses (i.e., positively correlated) but longer VOR duration (negatively correlated). Contrastingly, controls' vestibular cerebellar GM density negatively correlated with perception and VOR. Diffusion-tensor imaging showed that cerebral cortex white matter (WM) microstructure correlated with vestibular perception, but only in controls. In summary, dancers display vestibular perceptuo-reflex dissociation, with the neuroanatomical correlate localized to the vestibular cerebellum. Controls' robust vestibular perception correlated with a cortical WM network conspicuously absent in dancers. Since primary vestibular afferents synapse in the vestibular cerebellum, we speculate that a cerebellar gating of perceptual signals to cortical regions mediates the training-related attenuation of vestibular perception and perceptuo-reflex uncoupling. PMID:24072889

  14. Predicting perceptual learning from higher-order cortical processing.

    PubMed

    Wang, Fang; Huang, Jing; Lv, Yaping; Ma, Xiaoli; Yang, Bin; Wang, Encong; Du, Boqi; Li, Wu; Song, Yan

    2016-01-01

    Visual perceptual learning has been shown to be highly specific to the retinotopic location and attributes of the trained stimulus. Recent psychophysical studies suggest that these specificities, which have been associated with early retinotopic visual cortex, may in fact not be inherent in perceptual learning and could be related to higher-order brain functions. Here we provide direct electrophysiological evidence in support of this proposition. In a series of event-related potential (ERP) experiments, we recorded high-density electroencephalography (EEG) from human adults over the course of learning in a texture discrimination task (TDT). The results consistently showed that the earliest C1 component (68-84 ms), known to reflect V1 activity driven by feedforward inputs, was not modulated by learning, regardless of whether the behavioral improvement was location specific. In contrast, two later posterior ERP components (posterior P1 and P160-350) over the occipital cortex and one anterior ERP component (anterior P160-350) over the prefrontal cortex were progressively modified day by day. Moreover, the change in the anterior component was closely correlated with improved behavioral performance on a daily basis. Consistent with recent psychophysical and imaging observations, our results indicate that perceptual learning can mainly involve changes in higher-level visual cortex as well as in the neural networks responsible for cognitive functions such as attention and decision making. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Biologically-inspired robust and adaptive multi-sensor fusion and active control

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Dow, Paul A.; Huber, David J.

    2009-04-01

    In this paper, we describe a method and system for robust and efficient goal-oriented active control of a machine (e.g., robot) based on processing, hierarchical spatial understanding, representation and memory of multimodal sensory inputs. This work assumes that a high-level plan or goal is known a priori or is provided by an operator interface, which translates into an overall perceptual processing strategy for the machine. Its analogy to the human brain is the download of plans and decisions from the pre-frontal cortex into various perceptual working memories as a perceptual plan that then guides the sensory data collection and processing. For example, a goal might be to look for specific colored objects in a scene while also looking for specific sound sources. This paper combines three key ideas and methods into a single closed-loop active control system. (1) Use a high-level plan or goal to determine and prioritize spatial locations or waypoints (targets) in multimodal sensory space; (2) collect/store information about these spatial locations at the appropriate hierarchy and representation in a spatial working memory. This includes invariant learning of these spatial representations and how to convert between them; and (3) execute actions based on ordered retrieval of these spatial locations from hierarchical spatial working memory and using the "right" level of representation that can efficiently translate into motor actions. In its most specific form, the active control is described for a vision system (such as a pan-tilt-zoom camera system mounted on a robotic head and neck unit) which finds and then fixates on high-saliency visual objects. We also describe the approach where the goal is to turn towards and sequentially foveate on salient multimodal cues that include both visual and auditory inputs.
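    The prioritize-fixate-retrieve loop described in points (1)-(3) can be sketched minimally in Python. This is an illustrative assumption of how such a controller might work (a greedy winner-take-all over a goal-weighted saliency map with inhibition of return), not the authors' implementation; all names are hypothetical.

```python
import numpy as np

def fixation_sequence(saliency, goal_weight, n_fixations=3, ior_radius=1):
    """Greedy active-control sketch: fixate the maximum of a goal-weighted
    saliency map, suppress its neighborhood (inhibition of return), repeat."""
    # Top-down goal relevance modulates bottom-up saliency.
    s = np.asarray(saliency, dtype=float) * np.asarray(goal_weight, dtype=float)
    fixations = []
    for _ in range(n_fixations):
        r, c = np.unravel_index(np.argmax(s), s.shape)
        fixations.append((int(r), int(c)))
        # Inhibition of return: suppress a square neighborhood of the winner.
        r0, r1 = max(r - ior_radius, 0), r + ior_radius + 1
        c0, c1 = max(c - ior_radius, 0), c + ior_radius + 1
        s[r0:r1, c0:c1] = -np.inf
    return fixations
```

    In a multimodal version, `saliency` would combine visual and auditory conspicuity maps before the same selection loop runs.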

  16. Perceptual adaptation in the use of night vision goggles

    NASA Technical Reports Server (NTRS)

    Durgin, Frank H.; Proffitt, Dennis R.

    1992-01-01

    The image intensification (I²) systems studied for this report were the biocular AN/PVS-7 (NVG) and the binocular AN/AVS-6 (ANVIS). Both are quite impressive for purposes of revealing the structure of the environment in a fairly straightforward way in extremely low-light conditions. But these systems represent an unusual viewing medium. The perceptual information available through I² systems is different in a variety of ways from the typical input of everyday vision, and extensive training and practice is required for optimal use. Using this sort of system involves a kind of perceptual skill learning, but it may also involve visual adaptations that are not simply an extension of normal vision. For example, the visual noise evident in the goggles in very low-light conditions results in unusual statistical properties in visual input. Because we had recently discovered a strong and enduring aftereffect of perceived texture density which seemed to be sensitive to precisely the sorts of statistical distortions introduced by I² systems, it occurred to us that visual noise of this sort might be a strong adapting stimulus for texture density and produce an aftereffect that extended into normal vision once the goggles were removed. We have not found any experimental evidence that I² systems produce texture density aftereffects. The nature of the texture density aftereffect is briefly explained, followed by an accounting of our studies of I² systems and our most recent work on the texture density aftereffect. A test for spatial frequency adaptation after exposure to NVGs is also reported, as is a study of perceived depth from motion (motion parallax) while wearing the biocular goggles. We conclude with a summary of our findings.

  17. The Influence of Visual Feedback and Register Changes on Sign Language Production: A Kinematic Study with Deaf Signers

    ERIC Educational Resources Information Center

    Emmorey, Karen; Gertsberg, Nelly; Korpics, Franco; Wright, Charles E.

    2009-01-01

    Speakers monitor their speech output by listening to their own voice. However, signers do not look directly at their hands and cannot see their own face. We investigated the importance of a visual perceptual loop for sign language monitoring by examining whether changes in visual input alter sign production. Deaf signers produced American Sign…

  18. Coding Systems and the Comprehension of Instructional Materials

    DTIC Science & Technology

    1976-04-30

    important issue for skilled performance in any cognitive task. Successful performance involves the development of schemata that enable the... aids and hinders the assimilation of new inputs. He has begun by collecting cases in which outstanding and intelligent men have been trapped... information with little conscious involvement seems a requirement for a variety of perceptual skills. For example, highly skilled inspectors of

  19. Synesthetic experiences enhance unconscious learning.

    PubMed

    Rothen, Nicolas; Scott, Ryan B; Mealor, Andy D; Coolbear, Daniel J; Burckhardt, Vera; Ward, Jamie

    2013-01-01

    Synesthesia is characterized by consistent extra perceptual experiences in response to normal sensory input. Recent studies provide evidence for a specific profile of enhanced memory performance in synesthesia, but focus exclusively on explicit memory paradigms for which the learned content is consciously accessible. In this study, for the first time, we demonstrate with an implicit memory paradigm that synesthetic experiences also enhance memory performance relating to unconscious knowledge.

  20. Response latencies in auditory sentence comprehension: effects of linguistic versus perceptual challenge.

    PubMed

    Tun, Patricia A; Benichov, Jonathan; Wingfield, Arthur

    2010-09-01

    Older adults with good hearing and with mild-to-moderate hearing loss were tested for comprehension of spoken sentences that required perceptual effort (hearing speech at lower sound levels), and two degrees of cognitive load (sentences with simpler or more complex syntax). Although comprehension accuracy was equivalent for both participant groups and for young adults with good hearing, hearing loss was associated with longer response latencies to the correct comprehension judgments, especially for complex sentences heard at relatively low amplitudes. These findings demonstrate the need to take into account both sensory and cognitive demands of speech materials in older adults' language comprehension. (c) 2010 APA, all rights reserved.

  1. Automatically Characterizing Sensory-Motor Patterns Underlying Reach-to-Grasp Movements on a Physical Depth Inversion Illusion.

    PubMed

    Nguyen, Jillian; Majmudar, Ushma V; Ravaliya, Jay H; Papathomas, Thomas V; Torres, Elizabeth B

    2015-01-01

    Recently, movement variability has been of great interest to motor control physiologists as it constitutes a physical, quantifiable form of sensory feedback to aid in planning, updating, and executing complex actions. In marked contrast, the psychological and psychiatric arenas mainly rely on verbal descriptions and interpretations of behavior via observation. Consequently, a large gap exists between the body's manifestations of mental states and their descriptions, creating a disembodied approach in the psychological and neural sciences: contributions of the peripheral nervous system to central control, executive functions, and decision-making processes are poorly understood. How do we shift from a psychological, theorizing approach to characterize complex behaviors more objectively? We introduce a novel, objective, statistical framework, and visuomotor control paradigm to help characterize the stochastic signatures of minute fluctuations in overt movements during a visuomotor task. We also quantify a new class of covert movements that spontaneously occur without instruction. These are largely beneath awareness, but inevitably present in all behaviors. The inclusion of these motions in our analyses introduces a new paradigm in sensory-motor integration. As it turns out, these movements, often overlooked as motor noise, contain valuable information that contributes to the emergence of different kinesthetic percepts. We apply these new methods to help better understand perception-action loops. To investigate how perceptual inputs affect reach behavior, we use a depth inversion illusion (DII): the same physical stimulus produces two distinct depth percepts that are nearly orthogonal, enabling a robust comparison of competing percepts. 
We find that the moment-by-moment empirically estimated motor output variability can inform us of the participants' perceptual states, detecting physiologically relevant signals from the peripheral nervous system that reveal internal mental states evoked by the bi-stable illusion. Our work proposes a new statistical platform to objectively separate changes in visual perception by quantifying the unfolding of movement, emphasizing the importance of including in the motion analyses all overt and covert aspects of motor behavior.
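    A common way to quantify such moment-by-moment motor-output variability is to extract the peaks of the hand-speed profile and fit a Gamma distribution to their normalized amplitudes. The sketch below uses a method-of-moments fit; the function names, the peak-normalization step, and the interpretation are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def gamma_signature(speeds):
    """Moment-based Gamma fit of normalized speed-peak amplitudes.

    Returns (shape, scale). In 'stochastic signature' analyses, the
    estimated shape/scale pair locates a participant on the Gamma plane;
    here it is computed with the simple moment estimators
    shape = mean^2 / var, scale = var / mean.
    """
    speeds = np.asarray(speeds, dtype=float)
    # Local maxima of the speed profile (strictly greater than both neighbors).
    mid = speeds[1:-1]
    peaks = mid[(mid > speeds[:-2]) & (mid > speeds[2:])]
    # Normalize peaks against overall speed level to damp amplitude scaling
    # (an assumed normalization; values fall in (0, 1)).
    norm = peaks / (peaks + speeds.mean())
    m, v = norm.mean(), norm.var()
    return m * m / v, v / m
```

    Applied per condition (e.g., per percept of the bi-stable illusion), shifts in the fitted (shape, scale) pair would index changes in the stochastic signature of the movement fluctuations.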

  2. Perceptually aligning apical frequency regions leads to more binaural fusion of speech in a cochlear implant simulation.

    PubMed

    Staisloff, Hannah E; Lee, Daniel H; Aronoff, Justin M

    2016-07-01

    For bilateral cochlear implant users, the left and right arrays are typically not physically aligned, resulting in a degradation of binaural fusion, which can be detrimental to binaural abilities. Perceptually aligning the two arrays can be accomplished by disabling electrodes in one ear that do not have a perceptually corresponding electrode in the other side. However, disabling electrodes at the edges of the array will cause compression of the input frequency range into a smaller cochlear extent, which may result in reduced spectral resolution. An alternative approach to overcome this mismatch would be to align only one edge of the array. By aligning only the apical or basal end of the arrays, fewer electrodes would be disabled, potentially causing less reduction in spectral resolution. The goal of this study was to determine the relative effect, with regard to binaural fusion, of aligning either the basal or apical end of the electrode array. A vocoder was used to simulate cochlear implant listening conditions in normal-hearing listeners. Speech signals were vocoded such that the two ears were either predominantly aligned at only the basal or apical end of the simulated arrays. The experiment was then repeated with a spectrally inverted vocoder to determine whether the detrimental effects on fusion were related to the spectral-temporal characteristics of the stimuli or the location in the cochlea where the misalignment occurred. In Experiment 1, aligning the basal portion of the simulated arrays led to significantly less binaural fusion than aligning the apical portions of the simulated array. However, when the input was spectrally inverted, aligning the apical portion of the simulated array led to significantly less binaural fusion than aligning the basal portions of the simulated arrays.
These results suggest that, for speech, with its predominantly low frequency spectral-temporal modulations, it is more important to perceptually align the apical portion of the array to better preserve binaural fusion. By partially aligning these arrays, cochlear implant users could potentially increase their ability to fuse speech sounds presented to the two ears while maximizing spectral resolution. Copyright © 2016 Elsevier B.V. All rights reserved.
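    The vocoder manipulation central to this study can be sketched as a minimal noise vocoder: split the signal into frequency bands, extract each band's envelope, and re-impose the envelopes on band-limited noise carriers; spectral inversion pairs the envelope of band i with the carrier of band n-1-i. The band count, cutoffs, and FFT-based filtering below are illustrative assumptions, not the study's actual signal processing.

```python
import numpy as np

def _bandpass_fft(x, fs, f1, f2):
    """Ideal band-pass filter via FFT masking (illustrative, non-causal)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(f < f1) | (f >= f2)] = 0.0
    return np.fft.irfft(X, n=len(x))

def _envelope(x):
    """Hilbert envelope: magnitude of the FFT-based analytic signal."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

def noise_vocode(signal, fs, n_bands=8, lo=200.0, hi=7000.0, invert=False, seed=0):
    """Noise vocoder with optional spectral inversion of band envelopes."""
    edges = np.geomspace(lo, hi, n_bands + 1)   # log-spaced band edges
    noise = np.random.default_rng(seed).standard_normal(len(signal))
    envs, carriers = [], []
    for i in range(n_bands):
        band = _bandpass_fft(signal, fs, edges[i], edges[i + 1])
        envs.append(_envelope(band))
        carriers.append(_bandpass_fft(noise, fs, edges[i], edges[i + 1]))
    out = np.zeros_like(signal)
    for i in range(n_bands):
        src = n_bands - 1 - i if invert else i  # spectral inversion pairing
        out += envs[src] * carriers[i]
    return out
```

    With `invert=True`, low-frequency speech envelopes drive high-frequency carriers and vice versa, which is the manipulation used to dissociate stimulus characteristics from cochlear place in Experiment 2.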

  3. Perceptual processing of natural scenes at rapid rates: Effects of complexity, content, and emotional arousal

    PubMed Central

    Bradley, Margaret M.; Lang, Peter J.

    2013-01-01

    During rapid serial visual presentation (RSVP), the perceptual system is confronted with a rapidly changing array of sensory information demanding resolution. At rapid rates of presentation, previous studies have found an early (e.g., 150–280 ms) negativity over occipital sensors that is enhanced when emotional, as compared with neutral, pictures are viewed, suggesting facilitated perception. In the present study, we explored how picture composition and the presence of people in the image affect perceptual processing of pictures of natural scenes. Using RSVP, pictures that differed in perceptual composition (figure–ground or scenes), content (presence of people or not), and emotional content (emotionally arousing or neutral) were presented in a continuous stream for 330 ms each with no intertrial interval. In both subject and picture analyses, all three variables affected the amplitude of occipital negativity, with the greatest enhancement for figure–ground compositions (as compared with scenes), irrespective of content and emotional arousal, supporting an interpretation that ease of perceptual processing is associated with enhanced occipital negativity. Viewing emotional pictures prompted enhanced negativity only for pictures that depicted people, suggesting that specific features of emotionally arousing images are associated with facilitated perceptual processing, rather than all emotional content. PMID:23780520

  4. When action is not enough: tool-use reveals tactile-dependent access to Body Schema.

    PubMed

    Cardinali, L; Brozzoli, C; Urquizar, C; Salemme, R; Roy, A C; Farnè, A

    2011-11-01

    Proper motor control of our own body implies a reliable representation of body parts. This information is supposed to be stored in the Body Schema (BS), a body representation that appears separate from a more perceptual body representation, the Body Image (BI). The dissociation between BS for action and BI for perception, originally based on neuropsychological evidence, has recently become the focus of behavioural studies in physiological conditions. By inducing the rubber hand illusion in healthy participants, Kammers et al. (2009) showed perceptual changes attributable to the BI to which the BS, as indexed via motor tasks, was immune. To more definitively support the existence of dissociable body representations in physiological conditions, here we tested for the opposite dissociation, namely, whether a tool-use paradigm would induce a functional update of the BS (via a motor localization task) without affecting the BI (via a perceptual localization task). Healthy subjects were required to localize three anatomical landmarks on their right arm, before and after using the same arm to control a tool. In addition to this classical task-dependency approach, we assessed whether preferential access to the BS could also depend upon the way positional information about forearm targets is provided, to subsequently execute the same task. To this aim, participants performed either verbally or tactually driven versions of the motor and perceptual localization tasks. Results showed that both the motor and perceptual tasks were sensitive to the update of the forearm representation, but only when the localization task (perceptual or motor) was driven by a tactile input. This pattern reveals that the motor output is not sufficient per se, but has to be coupled with tactually mediated information to guarantee access to the BS. 
    These findings shed new light on action-perception models of body representations and underline how functional plasticity may be a useful tool to clarify their operational definition. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. The role of alpha-rhythm states in perceptual learning: insights from experiments and computational models

    PubMed Central

    Sigala, Rodrigo; Haufe, Sebastian; Roy, Dipanjan; Dinse, Hubert R.; Ritter, Petra

    2014-01-01

    During the past two decades, growing evidence indicates that brain oscillations in the alpha band (~10 Hz) not only reflect an "idle" state of cortical activity, but also take a more active role in the generation of complex cognitive functions. A recent study shows that more than 60% of the observed inter-subject variability in perceptual learning can be ascribed to ongoing alpha activity. This evidence indicates a significant role of alpha oscillations in perceptual learning and motivates exploration of the potential underlying mechanisms. It is therefore the purpose of this review to highlight existing evidence that ascribes intrinsic alpha oscillations a role in shaping our ability to learn. In the review, we disentangle the alpha rhythm into different neural signatures that control information processing within individual functional building blocks of perceptual learning. We further highlight computational studies that shed light on potential mechanisms regarding how alpha oscillations may modulate information transfer and connectivity changes relevant for learning. To enable testing of these model-based hypotheses, we emphasize the need for multidisciplinary approaches combining assessment of behavior and multi-scale neuronal activity, active modulation of ongoing brain states, and computational modeling to reveal the mathematical principles of the complex neuronal interactions. In particular, we highlight the relevance of multi-scale modeling frameworks such as the one currently being developed by "The Virtual Brain" project. PMID:24772077

  6. Parameters of semantic multisensory integration depend on timing and modality order among people on the autism spectrum: evidence from event-related potentials.

    PubMed

    Russo, N; Mottron, L; Burack, J A; Jemel, B

    2012-07-01

    Individuals with autism spectrum disorders (ASD) report difficulty integrating simultaneously presented visual and auditory stimuli (Iarocci & McDonald, 2006), albeit showing enhanced perceptual processing of unisensory stimuli, as well as an enhanced role of perception in higher-order cognitive tasks (Enhanced Perceptual Functioning (EPF) model; Mottron, Dawson, Soulières, Hubert, & Burack, 2006). Individuals with an ASD also integrate auditory-visual inputs over longer periods of time than matched typically developing (TD) peers (Kwakye, Foss-Feig, Cascio, Stone & Wallace, 2011). To tease apart this dichotomy of both extended multisensory processing and enhanced perceptual processing, we used behavioral and electrophysiological measurements of audio-visual integration among persons with ASD. Thirteen TD and 14 autistic individuals matched on IQ completed a forced-choice multisensory semantic congruence task requiring speeded responses regarding the congruence or incongruence of animal sounds and pictures. Stimuli were presented simultaneously or sequentially at various stimulus onset asynchronies in both auditory-first and visual-first presentations. No group differences were noted in reaction time (RT) or accuracy. The latency at which congruent and incongruent waveforms diverged was the component of interest. In simultaneous presentations, congruent and incongruent waveforms diverged earlier (circa 150 ms) among persons with ASD than among TD individuals (around 350 ms). In sequential presentations, asymmetries in the timing of neuronal processing were noted in ASD which depended on stimulus order, but these were consistent with the nature of specific perceptual strengths in this group. These findings extend the Enhanced Perceptual Functioning model to the multisensory domain, and provide a more nuanced context for interpreting ERP findings of impaired semantic processing in ASD. Copyright © 2012 Elsevier Ltd. All rights reserved.
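    The divergence-latency measure used here can be illustrated with a simple heuristic: the first time point at which the congruent and incongruent waveforms differ by more than a threshold for a sustained run of samples. This is a hedged sketch (real ERP analyses typically use point-wise statistics with multiple-comparison control rather than a fixed amplitude threshold); names and parameters are illustrative.

```python
import numpy as np

def divergence_latency(cong, incong, times, threshold, min_run):
    """Return the first latency at which |cong - incong| exceeds `threshold`
    for at least `min_run` consecutive samples, or None if it never does."""
    over = np.abs(np.asarray(cong) - np.asarray(incong)) > threshold
    run = 0
    for i, flag in enumerate(over):
        run = run + 1 if flag else 0
        if run >= min_run:
            # Latency of the first sample of the sustained run.
            return times[i - min_run + 1]
    return None
```

    Comparing this latency between groups (ASD vs. TD) and presentation conditions corresponds to the circa-150-ms vs. circa-350-ms contrast reported above.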

  7. Minimalist approach to perceptual interactions.

    PubMed

    Lenay, Charles; Stewart, John

    2012-01-01

    Work aimed at studying social cognition in an interactionist perspective often encounters substantial theoretical and methodological difficulties: identifying the significant behavioral variables; recording them without disturbing the interaction; and distinguishing between: (a) the necessary and sufficient contributions of each individual partner for a collective dynamics to emerge; (b) features which derive from this collective dynamics and escape from the control of the individual partners; and (c) the phenomena arising from this collective dynamics which are subsequently appropriated and used by the partners. We propose a minimalist experimental paradigm as a basis for this conceptual discussion: by reducing the sensory inputs to a strict minimum, we force a spatial and temporal deployment of the perceptual activities, which makes it possible to obtain a complete recording and control of the dynamics of interaction. After presenting the principles of this minimalist approach to perception, we describe a series of experiments on two major questions in social cognition: recognizing the presence of another intentional subject; and phenomena of imitation. In both cases, we propose explanatory schema which render an interactionist approach to social cognition clear and explicit. Starting from our earlier work on perceptual crossing we present a new experiment on the mechanisms of reciprocal recognition of the perceptual intentionality of the other subject: the emergent collective dynamics of the perceptual crossing can be appropriated by each subject. We then present an experimental study of opaque imitation (when the subjects cannot see what they themselves are doing). This study makes it possible to characterize what a properly interactionist approach to imitation might be. In conclusion, we draw on these results to show how an interactionist approach can contribute to a fully social approach to social cognition.

  8. The nature-disorder paradox: A perceptual study on how nature is disorderly yet aesthetically preferred.

    PubMed

    Kotabe, Hiroki P; Kardan, Omid; Berman, Marc G

    2017-08-01

    Natural environments have powerful aesthetic appeal linked to their capacity for psychological restoration. In contrast, disorderly environments are aesthetically aversive, and have various detrimental psychological effects. But in our research, we have repeatedly found that natural environments are perceptually disorderly. What could explain this paradox? We present 3 competing hypotheses: the aesthetic preference for naturalness is more powerful than the aesthetic aversion to disorder (the nature-trumps-disorder hypothesis); disorder is trivial to aesthetic preference in natural contexts (the harmless-disorder hypothesis); and disorder is aesthetically preferred in natural contexts (the beneficial-disorder hypothesis). Utilizing novel methods of perceptual study and diverse stimuli, we rule in the nature-trumps-disorder hypothesis and rule out the harmless-disorder and beneficial-disorder hypotheses. In examining perceptual mechanisms, we find evidence that high-level scene semantics are both necessary and sufficient for the nature-trumps-disorder effect. Necessity is evidenced by the effect disappearing in experiments utilizing only low-level visual stimuli (i.e., where scene semantics have been removed) and experiments utilizing a rapid-scene-presentation procedure that obscures scene semantics. Sufficiency is evidenced by the effect reappearing in experiments utilizing noun stimuli which remove low-level visual features. Furthermore, we present evidence that the interaction of scene semantics with low-level visual features amplifies the nature-trumps-disorder effect: the effect is weaker both when statistically adjusting for quantified low-level visual features and when using noun stimuli which remove low-level visual features. These results have implications for psychological theories bearing on the joint influence of low- and high-level perceptual inputs on affect and cognition, as well as for aesthetic design. 
(PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. Minimalist Approach to Perceptual Interactions

    PubMed Central

    Lenay, Charles; Stewart, John

    2012-01-01

    Work aimed at studying social cognition in an interactionist perspective often encounters substantial theoretical and methodological difficulties: identifying the significant behavioral variables; recording them without disturbing the interaction; and distinguishing between: (a) the necessary and sufficient contributions of each individual partner for a collective dynamics to emerge; (b) features which derive from this collective dynamics and escape from the control of the individual partners; and (c) the phenomena arising from this collective dynamics which are subsequently appropriated and used by the partners. We propose a minimalist experimental paradigm as a basis for this conceptual discussion: by reducing the sensory inputs to a strict minimum, we force a spatial and temporal deployment of the perceptual activities, which makes it possible to obtain a complete recording and control of the dynamics of interaction. After presenting the principles of this minimalist approach to perception, we describe a series of experiments on two major questions in social cognition: recognizing the presence of another intentional subject; and phenomena of imitation. In both cases, we propose explanatory schema which render an interactionist approach to social cognition clear and explicit. Starting from our earlier work on perceptual crossing we present a new experiment on the mechanisms of reciprocal recognition of the perceptual intentionality of the other subject: the emergent collective dynamics of the perceptual crossing can be appropriated by each subject. We then present an experimental study of opaque imitation (when the subjects cannot see what they themselves are doing). This study makes it possible to characterize what a properly interactionist approach to imitation might be. In conclusion, we draw on these results, to show how an interactionist approach can contribute to a fully social approach to social cognition. PMID:22582041

  10. Translating novel findings of perceptual-motor codes into the neuro-rehabilitation of movement disorders.

    PubMed

    Pazzaglia, Mariella; Galli, Giulia

    2015-01-01

    The bidirectional flow of perceptual and motor information has recently proven useful as a rehabilitative tool for re-building motor memories. We analyzed how the visual-motor approach has been successfully applied in neurorehabilitation, leading to surprisingly rapid and effective improvements in action execution. We proposed that the contribution of multiple sensory channels during treatment enables individuals to predict and optimize motor behavior, having a greater effect than visual input alone. We explored how state-of-the-art neuroscience techniques provide direct evidence that employing the visual-motor approach leads to increased motor cortex excitability and synaptic and cortical map plasticity. This super-additive response to multimodal stimulation may maximize neural plasticity, potentiating the effect of conventional treatment, and will be a valuable approach for advancing innovative methodologies.

  11. Antecedent occipital alpha band activity predicts the impact of oculomotor events in perceptual switching

    PubMed Central

    Nakatani, Hironori; van Leeuwen, Cees

    2013-01-01

    Oculomotor events such as blinks and saccades transiently interrupt the visual input and, even though this mostly goes undetected, these brief interruptions could still influence the percept. In particular, both blinking and saccades facilitate switching in ambiguous figures such as the Necker cube. To investigate the neural state antecedent to these oculomotor events during the perception of an ambiguous figure, we measured the human scalp electroencephalogram (EEG). When blinking led to perceptual switching, antecedent occipital alpha band activity exhibited a transient increase in amplitude. When a saccade led to switching, a series of transient increases and decreases in amplitude was observed in the antecedent occipital alpha band activity. Our results suggest that the state of occipital alpha band activity predicts the impact of oculomotor events on the percept. PMID:23745106
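    Measuring antecedent alpha-band amplitude of the kind analyzed here can be sketched with an FFT-based band-pass followed by a Hilbert envelope, averaged over a pre-event window. This is a numpy-only illustrative sketch; the band limits, window length, and function names are assumptions, not the study's pipeline.

```python
import numpy as np

def alpha_envelope(eeg, fs, band=(8.0, 12.0)):
    """Amplitude envelope of alpha-band activity: band-limit the signal by
    FFT masking, then take the magnitude of the analytic signal."""
    n = len(eeg)
    X = np.fft.fft(eeg)
    f = np.fft.fftfreq(n, 1.0 / fs)
    X[(np.abs(f) < band[0]) | (np.abs(f) > band[1])] = 0.0  # keep alpha only
    # FFT-based Hilbert transform multiplier.
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(X * h))

def pre_event_alpha(eeg, fs, event_idx, window_s=1.0):
    """Mean alpha amplitude in the window preceding an oculomotor event
    (e.g., a blink or saccade at sample index `event_idx`)."""
    env = alpha_envelope(eeg, fs)
    start = max(event_idx - int(window_s * fs), 0)
    return env[start:event_idx].mean()
```

    Comparing this pre-event amplitude for events that did versus did not lead to a perceptual switch would correspond to the antecedent-activity analysis described above.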

  12. The grammatical morpheme deficit in moderate hearing impairment.

    PubMed

    McGuckian, Maria; Henry, Alison

    2007-03-01

    Much remains unknown about grammatical morpheme (GM) acquisition by children with moderate hearing impairment (HI) acquiring spoken English. This study investigated how moderate HI impacts the use of GMs in speech and sought to explain the pattern of findings. Elicited and spontaneous speech data were collected from children with moderate HI (n = 10; mean age = 7;4 years) and a control group of typically developing children (n = 10; mean age = 3;2 years) with equivalent mean length of utterance (MLU). The data were analysed to determine the use of ten GMs of English. Comparisons were made between the groups for rates of correct GM production, for types and rates of GM errors, and for order of GM accuracy. The findings revealed significant differences between the HI group and the control group for correct production of five GMs. The differences were not all in the same direction. The HI group produced possessive -s and plural -s significantly less frequently than the controls (this is not simply explained by the perceptual saliency of -s) and produced progressive -ing, articles and irregular past tense significantly more frequently than the controls. Moreover, the order of GM accuracy for the HI group did not correlate with that observed for the control group. Various factors were analysed in an attempt to explain the order of GM accuracy for the HI group (i.e. perceptual saliency, syntactic category, semantics and frequency of GMs in input). Frequency of GMs in input was the most successful explanation for the overall pattern of GM accuracy. Interestingly, the order of GM accuracy for the HI group (acquiring spoken English as a first language) was characteristic of that reported for individuals learning English as a second language. An explanation for the findings is drawn from a factor that connects these different groups of language learners, i.e. limited access to spoken English input. 
It is argued that, because of hearing factors, the children with HI are below a threshold for intake of spoken language input (a threshold easily reached by the controls). Thus, the children with HI are more input-dependent at the point in development studied and as such are more sensitive to input frequency effects. The findings suggest that optimizing or indeed increasing auditory input of GMs may have a positive impact on GM development for children with moderate HI.

  13. Neural Correlates of Auditory Perceptual Awareness and Release from Informational Masking Recorded Directly from Human Cortex: A Case Study.

    PubMed

    Dykstra, Andrew R; Halgren, Eric; Gutschalk, Alexander; Eskandar, Emad N; Cash, Sydney S

    2016-01-01

    In complex acoustic environments, even salient supra-threshold sounds sometimes go unperceived, a phenomenon known as informational masking. The neural basis of informational masking (and its release) has not been well-characterized, particularly outside auditory cortex. We combined electrocorticography in a neurosurgical patient undergoing invasive epilepsy monitoring with trial-by-trial perceptual reports of isochronous target-tone streams embedded in random multi-tone maskers. Awareness of such masker-embedded target streams was associated with a focal negativity between 100 and 200 ms and high-gamma activity (HGA) between 50 and 250 ms (both in auditory cortex on the posterolateral superior temporal gyrus) as well as a broad P3b-like potential (between ~300 and 600 ms) with generators in ventrolateral frontal and lateral temporal cortex. Unperceived target tones elicited drastically reduced versions of such responses, if at all. While it remains unclear whether these responses reflect conscious perception, itself, as opposed to pre- or post-perceptual processing, the results suggest that conscious perception of target sounds in complex listening environments may engage diverse neural mechanisms in distributed brain areas.

  14. Hierarchical representation of shapes in visual cortex—from localized features to figural shape segregation

    PubMed Central

    Tschechne, Stephan; Neumann, Heiko

    2014-01-01

    Visual structures in the environment are segmented into image regions and those combined to a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed network of processing must be capable to make accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1–V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. The model, thus, proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy. 
PMID:25157228

  15. Most superficial sublamina of rat superior colliculus: neuronal response properties and correlates with perceptual figure-ground segregation.

    PubMed

    Girman, S V; Lund, R D

    2007-07-01

    The uppermost layer (stratum griseum superficiale, SGS) of the superior colliculus (SC) provides an important gateway from the retina to the visual extrastriate and visuomotor systems. The majority of attention has been given to the role of this "visual" SC in saccade generation and target selection and it is generally considered to be less important in visual perception. We have found, however, that in the rat SGS1, the most superficial division of the SGS, the neurons perform very sophisticated analysis of visual information. First, in studying their responses with a variety of flashing stimuli we found that the neurons respond not to brightness changes per se, but to the appearance and/or disappearance of visual shapes in their receptive fields (RFs). Contrary to conventional RFs of neurons at the early stages of visual processing, the RFs in SGS1 cannot be described in terms of fixed spatial distribution of excitatory and inhibitory inputs. Second, SGS1 neurons showed robust orientation tuning to drifting gratings and orientation-specific modulation of the center response from surround. These are features previously seen only in visual cortical neurons and are considered to be involved in "contour" perception and figure-ground segregation. Third, responses of SGS1 neurons showed complex dynamics; typically the response tuning became progressively sharpened with repetitive grating periods. We conclude that SGS1 neurons are involved in considerably more complex analysis of retinal input than was previously thought. SGS1 may participate in early stages of figure-ground segregation and have a role in low-resolution nonconscious vision as encountered after visual decortication.

  16. Hierarchical representation of shapes in visual cortex-from localized features to figural shape segregation.

    PubMed

    Tschechne, Stephan; Neumann, Heiko

    2014-01-01

    Visual structures in the environment are segmented into image regions and those combined to a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed network of processing must be capable to make accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1-V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. The model, thus, proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy.

  17. Urban Legends and Paranormal Beliefs: The Role of Reality Testing and Schizotypy

    PubMed Central

    Dagnall, Neil; Denovan, Andrew; Drinkwater, Kenneth; Parker, Andrew; Clough, Peter J.

    2017-01-01

    Recent research suggests that unconventional beliefs are locatable within a generic anomalous belief category. This notion derives from the observation that apparently dissimilar beliefs share fundamental, core characteristics (i.e., contradiction of orthodox scientific understanding of the universe and defiance of conventional understanding of reality). The present paper assessed the supposition that anomalous beliefs were conceptually similar and explicable via common psychological processes by comparing relationships between discrete beliefs [endorsement of urban legends (ULs) and belief in the paranormal] and cognitive-perceptual personality measures [proneness to reality testing (RT) and schizotypy]. A sample of 222 volunteers, recruited via convenience sampling, took part in the study. Participants completed a series of self-report measures (Urban Legends Questionnaire, Reality Testing subscale of the Inventory of Personality Organization, Revised Paranormal Belief Scale and the Schizotypal Personality Questionnaire Brief). Preliminary analysis revealed positive correlations between measures. Within schizotypy, the cognitive-perceptual factor was most strongly associated with anomalistic beliefs; disorganized and interpersonal produced only weak and negligible correlations respectively. Further investigation indicated complex relationships between RT, the cognitive-perceptual factor of schizotypy and anomalistic beliefs. Specifically, proneness to RT deficits explained a greater amount of variance in ULs, whilst schizotypy accounted for more variance in belief in the paranormal. Consideration of partial correlations supported these conclusions. The relationship between RT and ULs remained significant after controlling for the cognitive-perceptual factor. Contrastingly, the association between the cognitive-perceptual factor and ULs controlling for RT was non-significant. 
In the case of belief in the paranormal, controlling for proneness to RT reduced correlation size, but relationships remained significant. This study demonstrated that anomalistic beliefs vary in nature and composition. Findings indicated that generalized views of anomalistic beliefs provide only limited insight into the complex nature of belief. PMID:28642726

  18. Urban Legends and Paranormal Beliefs: The Role of Reality Testing and Schizotypy.

    PubMed

    Dagnall, Neil; Denovan, Andrew; Drinkwater, Kenneth; Parker, Andrew; Clough, Peter J

    2017-01-01

    Recent research suggests that unconventional beliefs are locatable within a generic anomalous belief category. This notion derives from the observation that apparently dissimilar beliefs share fundamental, core characteristics (i.e., contradiction of orthodox scientific understanding of the universe and defiance of conventional understanding of reality). The present paper assessed the supposition that anomalous beliefs were conceptually similar and explicable via common psychological processes by comparing relationships between discrete beliefs [endorsement of urban legends (ULs) and belief in the paranormal] and cognitive-perceptual personality measures [proneness to reality testing (RT) and schizotypy]. A sample of 222 volunteers, recruited via convenience sampling, took part in the study. Participants completed a series of self-report measures (Urban Legends Questionnaire, Reality Testing subscale of the Inventory of Personality Organization, Revised Paranormal Belief Scale and the Schizotypal Personality Questionnaire Brief). Preliminary analysis revealed positive correlations between measures. Within schizotypy, the cognitive-perceptual factor was most strongly associated with anomalistic beliefs; disorganized and interpersonal produced only weak and negligible correlations respectively. Further investigation indicated complex relationships between RT, the cognitive-perceptual factor of schizotypy and anomalistic beliefs. Specifically, proneness to RT deficits explained a greater amount of variance in ULs, whilst schizotypy accounted for more variance in belief in the paranormal. Consideration of partial correlations supported these conclusions. The relationship between RT and ULs remained significant after controlling for the cognitive-perceptual factor. Contrastingly, the association between the cognitive-perceptual factor and ULs controlling for RT was non-significant. 
In the case of belief in the paranormal, controlling for proneness to RT reduced correlation size, but relationships remained significant. This study demonstrated that anomalistic beliefs vary in nature and composition. Findings indicated that generalized views of anomalistic beliefs provide only limited insight into the complex nature of belief.

  19. The N2-P3 complex of the evoked potential and human performance

    NASA Technical Reports Server (NTRS)

    Odonnell, Brian F.; Cohen, Ronald A.

    1988-01-01

    The N2-P3 complex and other endogenous components of human evoked potential provide a set of tools for the investigation of human perceptual and cognitive processes. These multidimensional measures of central nervous system bioelectrical activity respond to a variety of environmental and internal factors which have been experimentally characterized. Their application to the analysis of human performance in naturalistic task environments is just beginning. Converging evidence suggests that the N2-P3 complex reflects processes of stimulus evaluation, perceptual resource allocation, and decision making that proceed in parallel, rather than in series, with response generation. Utilization of these EP components may provide insights into the central nervous system mechanisms modulating task performance unavailable from behavioral measures alone. The sensitivity of the N2-P3 complex to neuropathology, psychopathology, and pharmacological manipulation suggests that these components might provide sensitive markers for the effects of environmental stressors on the human central nervous system.

  20. Visual saliency-based fast intracoding algorithm for high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Zhou, Xin; Shi, Guangming; Zhou, Wei; Duan, Zhemin

    2017-01-01

    Intraprediction has been significantly improved in high efficiency video coding over H.264/AVC with a quad-tree-based coding unit (CU) structure from size 64×64 down to 8×8 and more prediction modes. However, these techniques cause a dramatic increase in computational complexity. An intracoding algorithm is proposed that consists of a perceptual fast CU size decision algorithm and a fast intraprediction mode decision algorithm. First, based on visual saliency detection, an adaptive and fast CU size decision method is proposed to alleviate intraencoding complexity. Furthermore, a fast intraprediction mode decision algorithm with a step-halving rough mode decision method and an early mode pruning algorithm is presented to selectively check the potential modes and effectively reduce the computational cost. Experimental results show that the proposed fast method reduces the encoding time of the current HM reference software by about 57%, with only a 0.37% increase in BD rate. Meanwhile, the proposed fast algorithm incurs reasonable peak signal-to-noise ratio losses and nearly the same subjective perceptual quality.
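The general shape of such a saliency-driven CU size decision can be sketched as a recursive quad-split. The sketch below is illustrative only: it uses local variance as a crude stand-in for the paper's saliency detector, and the block sizes and threshold are assumptions, not values from the paper.

```python
import numpy as np

def cu_split_decision(block: np.ndarray, threshold: float = 100.0,
                      min_size: int = 8) -> list:
    """Recursively partition a square luma block via a saliency proxy.

    Returns a list of (row, col, size) leaf CUs. Low-saliency (flat)
    regions keep the large CU, skipping further rate-distortion search.
    """
    def recurse(y, x, size):
        sub = block[y:y + size, x:x + size]
        # Variance stands in for saliency here (an assumption).
        if size == min_size or sub.var() < threshold:
            return [(y, x, size)]
        half = size // 2
        leaves = []
        for dy in (0, half):
            for dx in (0, half):
                leaves += recurse(y + dy, x + dx, half)
        return leaves

    return recurse(0, 0, block.shape[0])

# A flat 64×64 block stays a single CU; textured blocks get subdivided.
flat = np.full((64, 64), 128.0)
print(len(cu_split_decision(flat)))  # prints: 1
```

The speed-up comes from pruning the quad-tree early: every large CU kept intact removes four (and, recursively, up to 84) candidate partitions from the mode search.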

  1. A hierarchy of time-scales and the brain.

    PubMed

    Kiebel, Stefan J; Daunizeau, Jean; Friston, Karl J

    2008-11-01

    In this paper, we suggest that cortical anatomy recapitulates the temporal hierarchy that is inherent in the dynamics of environmental states. Many aspects of brain function can be understood in terms of a hierarchy of temporal scales at which representations of the environment evolve. The lowest level of this hierarchy corresponds to fast fluctuations associated with sensory processing, whereas the highest levels encode slow contextual changes in the environment, under which faster representations unfold. First, we describe a mathematical model that exploits the temporal structure of fast sensory input to track the slower trajectories of their underlying causes. This model of sensory encoding or perceptual inference establishes a proof of concept that slowly changing neuronal states can encode the paths or trajectories of faster sensory states. We then review empirical evidence that suggests that a temporal hierarchy is recapitulated in the macroscopic organization of the cortex. This anatomic-temporal hierarchy provides a comprehensive framework for understanding cortical function: the specific time-scale that engages a cortical area can be inferred by its location along a rostro-caudal gradient, which reflects the anatomical distance from primary sensory areas. This is most evident in the prefrontal cortex, where complex functions can be explained as operations on representations of the environment that change slowly. The framework provides predictions about, and principled constraints on, cortical structure-function relationships, which can be tested by manipulating the time-scales of sensory input.

  2. Passive mapping and intermittent exploration for mobile robots

    NASA Technical Reports Server (NTRS)

    Engleson, Sean P.

    1994-01-01

    An adaptive state space architecture is combined with a diktiometric representation to provide the framework for designing a robot mapping system with flexible navigation planning tasks. This involves indexing waypoints described as expectations, geometric indexing, and perceptual indexing. Matching and updating the robot's projected position and sensory inputs with indexing waypoints involves matchers, dynamic priorities, transients, and waypoint restructuring. The robot's map learning can be organized around the principles of passive mapping.

  3. Rapid recalibration of speech perception after experiencing the McGurk illusion

    PubMed Central

    Pérez-Bellido, Alexis; de Lange, Floris P.

    2018-01-01

    The human brain can quickly adapt to changes in the environment. One example is phonetic recalibration: a speech sound is interpreted differently depending on the visual speech and this interpretation persists in the absence of visual information. Here, we examined the mechanisms of phonetic recalibration. Participants categorized the auditory syllables /aba/ and /ada/, which were sometimes preceded by the so-called McGurk stimuli (in which an /aba/ sound, due to visual /aga/ input, is often perceived as ‘ada’). We found that only one trial of exposure to the McGurk illusion was sufficient to induce a recalibration effect, i.e. an auditory /aba/ stimulus was subsequently more often perceived as ‘ada’. Furthermore, phonetic recalibration took place only when auditory and visual inputs were integrated to ‘ada’ (McGurk illusion). Moreover, this recalibration depended on the sensory similarity between the preceding and current auditory stimulus. Finally, signal detection theoretical analysis showed that McGurk-induced phonetic recalibration resulted in both a criterion shift towards /ada/ and a reduced sensitivity to distinguish between /aba/ and /ada/ sounds. The current study shows that phonetic recalibration is dependent on the perceptual integration of audiovisual information and leads to a perceptual shift in phoneme categorization. PMID:29657743
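The signal detection theoretical analysis described above (a criterion shift plus reduced sensitivity) reduces to two standard equal-variance formulas. The sketch below uses invented hit and false-alarm rates purely for illustration; they are not data from the study.

```python
from statistics import NormalDist

def sdt(hit_rate: float, fa_rate: float) -> tuple:
    """Equal-variance SDT: sensitivity d' and criterion c from rates."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical rates of reporting 'ada': baseline vs. after McGurk exposure.
d0, c0 = sdt(0.85, 0.15)   # good sensitivity, neutral criterion
d1, c1 = sdt(0.80, 0.40)   # more 'ada' false alarms after exposure
print(d1 < d0, c1 < c0)    # prints: True True
```

A lower d' with a more negative c is exactly the pattern the abstract reports: observers both discriminate /aba/ from /ada/ less well and shift their response criterion towards /ada/.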

  4. Acquired word deafness, and the temporal grain of sound representation in the primary auditory cortex.

    PubMed

    Phillips, D P; Farmer, M E

    1990-11-15

    This paper explores the nature of the processing disorder which underlies the speech discrimination deficit in the syndrome of acquired word deafness following from pathology to the primary auditory cortex. A critical examination of the evidence on this disorder revealed the following. First, the most profound forms of the condition are expressed not only in an isolation of the cerebral linguistic processor from auditory input, but in a failure of even the perceptual elaboration of the relevant sounds. Second, in agreement with earlier studies, we conclude that the perceptual dimension disturbed in word deafness is a temporal one. We argue, however, that it is not a generalized disorder of auditory temporal processing, but one which is largely restricted to the processing of sounds with temporal content in the milliseconds to tens-of-milliseconds time frame. The perceptual elaboration of sounds with temporal content outside that range, in either direction, may survive the disorder. Third, we present neurophysiological evidence that the primary auditory cortex has a special role in the representation of auditory events in that time frame, but not in the representation of auditory events with temporal grains outside that range.

  5. Perceptual decision making: drift-diffusion model is equivalent to a Bayesian model

    PubMed Central

    Bitzer, Sebastian; Park, Hame; Blankenburg, Felix; Kiebel, Stefan J.

    2014-01-01

    Behavioral data obtained with perceptual decision making experiments are typically analyzed with the drift-diffusion model. This parsimonious model accumulates noisy pieces of evidence toward a decision bound to explain the accuracy and reaction times of subjects. Recently, Bayesian models have been proposed to explain how the brain extracts information from noisy input as typically presented in perceptual decision making tasks. It has long been known that the drift-diffusion model is tightly linked with such functional Bayesian models but the precise relationship of the two mechanisms was never made explicit. Using a Bayesian model, we derived the equations which relate parameter values between these models. In practice we show that this equivalence is useful when fitting multi-subject data. We further show that the Bayesian model suggests different decision variables which all predict equal responses and discuss how these may be discriminated based on neural correlates of accumulated evidence. In addition, we discuss extensions to the Bayesian model which would be difficult to derive for the drift-diffusion model. We suggest that these and other extensions may be highly useful for deriving new experiments which test novel hypotheses. PMID:24616689
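The accumulate-to-bound mechanism the abstract describes can be simulated in a few lines. The following toy trial is a sketch only; the drift, noise, and bound values are illustrative assumptions, not parameters fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ddm_trial(drift=0.3, noise=1.0, bound=1.0, dt=0.001, max_t=5.0):
    """One drift-diffusion trial: accumulate noisy evidence to a bound.

    Returns (chose the drift-favoured bound?, reaction time in seconds).
    """
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x >= bound, t

choices, rts = zip(*(ddm_trial() for _ in range(500)))
print(f"accuracy={np.mean(choices):.2f}, mean RT={np.mean(rts):.2f}s")
```

With positive drift, the upper bound is the "correct" response, so accuracy and reaction time both fall out of the same accumulation process. This is the property that makes the model's equivalence to Bayesian evidence accumulation natural: the accumulated variable can be read as a (log-)posterior over the two hypotheses.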

  6. The brain dynamics of rapid perceptual adaptation to adverse listening conditions.

    PubMed

    Erb, Julia; Henry, Molly J; Eisner, Frank; Obleser, Jonas

    2013-06-26

    Listeners show a remarkable ability to quickly adjust to degraded speech input. Here, we aimed to identify the neural mechanisms of such short-term perceptual adaptation. In a sparse-sampling, cardiac-gated functional magnetic resonance imaging (fMRI) acquisition, human listeners heard and repeated back 4-band-vocoded sentences (in which the temporal envelope of the acoustic signal is preserved, while spectral information is highly degraded). Clear-speech trials were included as baseline. An additional fMRI experiment on amplitude modulation rate discrimination quantified the convergence of neural mechanisms that subserve coping with challenging listening conditions for speech and non-speech. First, the degraded speech task revealed an "executive" network (comprising the anterior insula and anterior cingulate cortex), parts of which were also activated in the non-speech discrimination task. Second, trial-by-trial fluctuations in successful comprehension of degraded speech drove hemodynamic signal change in classic "language" areas (bilateral temporal cortices). Third, as listeners perceptually adapted to degraded speech, downregulation in a cortico-striato-thalamo-cortical circuit was observable. The present data highlight differential upregulation and downregulation in auditory-language and executive networks, respectively, with important subcortical contributions when successfully adapting to a challenging listening situation.

  7. Seeing mathematics: perceptual experience and brain activity in acquired synesthesia.

    PubMed

    Brogaard, Berit; Vanni, Simo; Silvanto, Juha

    2013-01-01

    We studied the patient JP who has exceptional abilities to draw complex geometrical images by hand and a form of acquired synesthesia for mathematical formulas and objects, which he perceives as geometrical figures. JP sees all smooth curvatures as discrete lines, similarly regardless of scale. We carried out two preliminary investigations to establish the perceptual nature of synesthetic experience and to investigate the neural basis of this phenomenon. In a functional magnetic resonance imaging (fMRI) study, image-inducing formulas produced larger fMRI responses than non-image inducing formulas in the left temporal, parietal and frontal lobes. Thus our main finding is that the activation associated with his experience of complex geometrical images emerging from mathematical formulas is restricted to the left hemisphere.

  8. Perceptual Anomalies in Schizophrenia: Integrating Phenomenology and Cognitive Neuroscience

    PubMed Central

    Uhlhaas, Peter J.; Mishara, Aaron L.

    2007-01-01

    From phenomenological and experimental perspectives, research in schizophrenia has emphasized deficits in “higher” cognitive functions, including attention, executive function, as well as memory. In contrast, general consensus has viewed dysfunctions in basic perceptual processes to be relatively unimportant in the explanation of more complex aspects of the disorder, including changes in self-experience and the development of symptoms such as delusions. We present evidence from phenomenology and cognitive neuroscience that changes in the perceptual field in schizophrenia may represent a core impairment. After introducing the phenomenological approach to perception (Husserl, the Gestalt School), we discuss the views of Paul Matussek, Klaus Conrad, Ludwig Binswanger, and Wolfgang Blankenburg on perception in schizophrenia. These 4 psychiatrists describe changes in perception and automatic processes that are related to the altered experience of self. The altered self-experience, in turn, may be responsible for the emergence of delusions. The phenomenological data are compatible with current research that conceptualizes dysfunctions in perceptual processing as a deficit in the ability to combine stimulus elements into coherent object representations. Relationships of deficits in perceptual organization to cognitive and social dysfunction as well as the possible neurobiological mechanisms are discussed. PMID:17118973

  9. Limits on perceptual encoding can be predicted from known receptive field properties of human visual cortex.

    PubMed

    Cohen, Michael A; Rhee, Juliana Y; Alvarez, George A

    2016-01-01

    Human cognition has a limited capacity that is often attributed to the brain having finite cognitive resources, but the nature of these resources is usually not specified. Here, we show evidence that perceptual interference between items can be predicted by known receptive field properties of the visual cortex, suggesting that competition within representational maps is an important source of the capacity limitations of visual processing. Across the visual hierarchy, receptive fields get larger and represent more complex, high-level features. Thus, when presented simultaneously, high-level items (e.g., faces) will often land within the same receptive fields, while low-level items (e.g., color patches) will often not. Using a perceptual task, we found long-range interference between high-level items, but only short-range interference for low-level items, with both types of interference being weaker across hemifields. Finally, we show that long-range interference between items appears to occur primarily during perceptual encoding and not during working memory maintenance. These results are naturally explained by the distribution of receptive fields and establish a link between perceptual capacity limits and the underlying neural architecture.

  10. Drawing from Memory: Hand-Eye Coordination at Multiple Scales

    PubMed Central

    Spivey, Michael J.

    2013-01-01

    Eyes move to gather visual information for the purpose of guiding behavior. This guidance takes the form of perceptual-motor interactions on short timescales for behaviors like locomotion and hand-eye coordination. More complex behaviors require perceptual-motor interactions on longer timescales mediated by memory, such as navigation, or designing and building artifacts. In the present study, the task of sketching images of natural scenes from memory was used to examine and compare perceptual-motor interactions on shorter and longer timescales. Eye and pen trajectories were found to be coordinated in time on shorter timescales during drawing, and also on longer timescales spanning study and drawing periods. The latter type of coordination was found by developing a purely spatial analysis that yielded measures of similarity between images, eye trajectories, and pen trajectories. These results challenge the notion that coordination only unfolds on short timescales. Rather, the task of drawing from memory evokes perceptual-motor encodings of visual images that preserve coarse-grained spatial information over relatively long timescales as well. PMID:23554894

  11. Illusions of integration are subjectively impenetrable: Phenomenological experience of Lag 1 percepts during dual-target RSVP.

    PubMed

    Simione, Luca; Akyürek, Elkan G; Vastola, Valentina; Raffone, Antonino; Bowman, Howard

    2017-05-01

    We investigated the relationship between different kinds of target reports in a rapid serial visual presentation task, and their associated perceptual experience. Participants reported the identity of two targets embedded in a stream of stimuli and their associated subjective visibility. In our task, target stimuli could be combined together to form more complex ones, thus allowing participants to report temporally integrated percepts. We found that integrated percepts were associated with high subjective visibility scores, whereas reports in which the order of targets was reversed led to a poorer perceptual experience. We also found a reciprocal relationship between the chance of the second target not being reported correctly and the perceptual experience associated with the first one. Principally, our results indicate that integrated percepts are experienced as a unique, clear perceptual event, whereas order reversals are experienced as confused, similar to cases in which an entirely wrong response was given.

  12. Spatial Audio on the Web: Or Why Can't I hear Anything Over There?

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Schlickenmaier, Herbert (Technical Monitor); Johnson, Gerald (Technical Monitor); Frey, Mary Anne (Technical Monitor); Schneider, Victor S. (Technical Monitor); Ahunada, Albert J. (Technical Monitor)

    1997-01-01

    Auditory complexity, freedom of movement and interactivity are not always possible in a "true" virtual environment, much less in web-based audio. However, many of the perceptual and engineering constraints (and frustrations) that researchers, engineers and listeners have experienced in virtual audio are relevant to spatial audio on the web. My talk will discuss some of these engineering constraints and their perceptual consequences, and attempt to relate these issues to implementation on the web.

  13. Rehabilitation of Visual and Perceptual Dysfunction after Severe Traumatic Brain Injury

    DTIC Science & Technology

    2014-05-01

    Aguilar C, Hall-Haro C. Decay of prism aftereffects under passive and active conditions. Cogn Brain Res. 2004;20:92-97. 13. Kornheiser A. Adaptation...17. Huxlin KR, Martin T, Kelly K, et al. Perceptual relearning of complex visual motion after V1 damage in humans. J Neurosci. 2009;29:3981-3991...questionnaires. Restor Neurol Neurosci. 2004;22:399-420. 19. Peli E, Bowers AR, Mandel AJ, Higgins K, Goldstein RB, Bobrow L. Design of driving simulator

  14. Processing reafferent and exafferent visual information for action and perception.

    PubMed

    Reichenbach, Alexandra; Diedrichsen, Jörn

    2015-01-01

    A recent study suggests that reafferent hand-related visual information utilizes a privileged, attention-independent processing channel for motor control. This process was termed visuomotor binding to reflect its proposed function: linking visual reafferences to the corresponding motor control centers. Here, we ask whether the advantage of processing reafferent over exafferent visual information is a specific feature of the motor processing stream or whether the improved processing also benefits the perceptual processing stream. Human participants performed a bimanual reaching task in a cluttered visual display, and one of the visual hand cursors could be displaced laterally during the movement. We measured the rapid feedback responses of the motor system as well as matched perceptual judgments of which cursor was displaced. Perceptual judgments were made either by watching the visual scene without moving or simultaneously with the reaching task, such that in the latter case the perceptual processing stream could also profit from the specialized processing of reafferent information. Our results demonstrate that perceptual judgments in the heavily cluttered visual environment were improved when performed based on reafferent information. Even in this case, however, the filtering capability of the perceptual processing stream suffered more from the increasing complexity of the visual scene than that of the motor processing stream. These findings suggest partly shared and partly segregated processing of reafferent information for vision for motor control versus vision for perception.

  15. Complex dynamics of semantic memory access in reading

    PubMed Central

    Baggio, Giosué; Fonseca, André

    2012-01-01

    Understanding a word in context relies on a cascade of perceptual and conceptual processes, starting with modality-specific input decoding, and leading to the unification of the word's meaning into a discourse model. One critical cognitive event, turning a sensory stimulus into a meaningful linguistic sign, is the access of a semantic representation from memory. Little is known about the changes that activating a word's meaning brings about in cortical dynamics. We recorded the electroencephalogram (EEG) while participants read sentences that could contain a contextually unexpected word, such as ‘cold’ in ‘In July it is very cold outside’. We reconstructed trajectories in phase space from single-trial EEG time series, and we applied three nonlinear measures of predictability and complexity to each side of the semantic access boundary, estimated as the onset time of the N400 effect evoked by critical words. Relative to controls, unexpected words were associated with larger prediction errors preceding the onset of the N400. Accessing the meaning of such words produced a phase transition to lower entropy states, in which cortical processing becomes more predictable and more regular. Our study sheds new light on the dynamics of information flow through interfaces between sensory and memory systems during language processing. PMID:21715401
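
    The phase-space reconstruction from single-trial EEG described above is conventionally carried out by time-delay embedding; the sketch below uses an illustrative embedding dimension and delay, not the settings of the study:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct a phase-space trajectory from a scalar time series:
    each point is (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Synthetic 10 Hz alpha-like trace standing in for a single-trial EEG epoch
t = np.linspace(0.0, 2.0, 500)
x = np.sin(2.0 * np.pi * 10.0 * t)
traj = delay_embed(x, dim=3, tau=12)   # one 3-D point per retained sample
print(traj.shape)                      # (476, 3)
```

    Nonlinear measures of predictability and complexity are then computed on trajectories like `traj`; in practice the delay and dimension are chosen per time series, for example by mutual-information and false-nearest-neighbor criteria.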

  16. General recognition theory with individual differences: a new method for examining perceptual and decisional interactions with an application to face perception.

    PubMed

    Soto, Fabian A; Vucovich, Lauren; Musgrave, Robert; Ashby, F Gregory

    2015-02-01

    A common question in perceptual science is to what extent different stimulus dimensions are processed independently. General recognition theory (GRT) offers a formal framework via which different notions of independence can be defined and tested rigorously, while also dissociating perceptual from decisional factors. This article presents a new GRT model that overcomes several shortcomings of previous approaches, including a clearer separation between perceptual and decisional processes and a more complete description of such processes. The model assumes that different individuals share similar perceptual representations, but vary in their attention to dimensions and in the decisional strategies they use. We apply the model to the analysis of interactions between identity and emotional expression during face recognition, a problem for which the results of previous research have been disparate. Participants identified four faces, which resulted from the combination of two identities and two expressions. An analysis using the new GRT model showed a complex pattern of dimensional interactions. The perception of emotional expression was not affected by changes in identity, but the perception of identity was affected by changes in emotional expression. There were violations of decisional separability of expression from identity and of identity from expression, with the former being more consistent across participants than the latter. One explanation for the disparate results in the literature is that decisional strategies may have varied across studies and influenced the results of tests of perceptual interactions, as previous studies lacked the ability to dissociate between perceptual and decisional interactions.

  17. From sensation to perception: Using multivariate classification of visual illusions to identify neural correlates of conscious awareness in space and time.

    PubMed

    Hogendoorn, Hinze

    2015-01-01

    An important goal of cognitive neuroscience is understanding the neural underpinnings of conscious awareness. Although the low-level processing of sensory input is well understood in most modalities, it remains a challenge to understand how the brain translates such input into conscious awareness. Here, I argue that the application of multivariate pattern classification techniques to neuroimaging data acquired while observers experience perceptual illusions provides a unique way to dissociate sensory mechanisms from mechanisms underlying conscious awareness. Using this approach, it is possible to directly compare patterns of neural activity that correspond to the contents of awareness, independent from changes in sensory input, and to track these neural representations over time at high temporal resolution. I highlight five recent studies using this approach, and provide practical considerations and limitations for future implementations.

  18. Skilled Deaf Readers have an Enhanced Perceptual Span in Reading

    PubMed Central

    Bélanger, Nathalie N.; Slattery, Timothy J.; Mayberry, Rachel I.; Rayner, Keith

    2013-01-01

    Recent evidence suggests that deaf people have enhanced visual attention to simple stimuli in the parafovea in comparison to hearing people. Although a large part of reading involves processing the fixated words in foveal vision, readers also utilize information in parafoveal vision to pre-process upcoming words and decide where to look next. We investigated whether auditory deprivation affects low-level visual processing during reading, and compared the perceptual span of deaf signers who were skilled and less skilled readers to that of skilled hearing readers. Compared to hearing readers, deaf readers had a larger perceptual span than would be expected from their reading ability. These results provide the first evidence that deaf readers’ enhanced attentional allocation to the parafovea is used during a complex cognitive task such as reading. PMID:22683830

  19. Perceptual learning modules in mathematics: enhancing students' pattern recognition, structure extraction, and fluency.

    PubMed

    Kellman, Philip J; Massey, Christine M; Son, Ji Y

    2010-04-01

    Learning in educational settings emphasizes declarative and procedural knowledge. Studies of expertise, however, point to other crucial components of learning, especially improvements produced by experience in the extraction of information: perceptual learning (PL). We suggest that such improvements characterize both simple sensory and complex cognitive, even symbolic, tasks through common processes of discovery and selection. We apply these ideas in the form of perceptual learning modules (PLMs) to mathematics learning. We tested three PLMs, each emphasizing different aspects of complex task performance, in middle and high school mathematics. In the MultiRep PLM, practice in matching function information across multiple representations improved students' abilities to generate correct graphs and equations from word problems. In the Algebraic Transformations PLM, practice in seeing equation structure across transformations (but not solving equations) led to dramatic improvements in the speed of equation solving. In the Linear Measurement PLM, interactive trials involving extraction of information about units and lengths produced successful transfer to novel measurement problems and fraction problem solving. Taken together, these results suggest (a) that PL techniques have the potential to address crucial, neglected dimensions of learning, including discovery and fluent processing of relations; (b) PL effects apply even to complex tasks that involve symbolic processing; and (c) appropriately designed PL technology can produce rapid and enduring advances in learning. Copyright © 2009 Cognitive Science Society, Inc.

  20. Perceptual Fidelity vs. Engineering Compromises In Virtual Acoustic Displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Ahumada, Albert (Technical Monitor)

    1997-01-01

    Immersive, three-dimensional displays are increasingly becoming a goal of advanced human-machine interfaces. While the technology for achieving truly useful multisensory environments is still being developed, techniques for generating three-dimensional sound are now both sophisticated and practical enough to be applied to acoustic displays. The ultimate goal of virtual acoustics is to simulate the complex acoustic field experienced by a listener freely moving around within an environment. Of course, such complexity, freedom of movement and interactivity are not always possible in a "true" virtual environment, much less in lower-fidelity multimedia systems. However, many of the perceptual and engineering constraints (and frustrations) that researchers, engineers and listeners have experienced in virtual audio are relevant to multimedia. In fact, some of the problems that have been studied will be even more of an issue for lower fidelity systems that are attempting to address the requirements of a huge, diverse and ultimately unknown audience. Examples include individual differences in head-related transfer functions, a lack of real interactivity (head-tracking) in many multimedia displays, and perceptual degradation due to low sampling rates and/or low-bit compression. This paper discusses some of the engineering constraints faced during implementation of virtual acoustic environments and the perceptual consequences of these constraints. Specific examples are given for NASA applications such as telerobotic control, aeronautical displays, and shuttle launch communications. An attempt will also be made to relate these issues to low-fidelity implementations such as the internet.

  1. Perceptual Fidelity Versus Engineering Compromises in Virtual Acoustic Displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Ellis, Stephen R. (Technical Monitor); Frey, Mary Anne (Technical Monitor); Schneider, Victor S. (Technical Monitor)

    1997-01-01

    Immersive, three-dimensional displays are increasingly becoming a goal of advanced human-machine interfaces. While the technology for achieving truly useful multisensory environments is still being developed, techniques for generating three-dimensional sound are now both sophisticated and practical enough to be applied to acoustic displays. The ultimate goal of virtual acoustics is to simulate the complex acoustic field experienced by a listener freely moving around within an environment. Of course, such complexity, freedom of movement and interactivity are not always possible in a 'true' virtual environment, much less in lower-fidelity multimedia systems. However, many of the perceptual and engineering constraints (and frustrations) that researchers, engineers and listeners have experienced in virtual audio are relevant to multimedia. In fact, some of the problems that have been studied will be even more of an issue for lower fidelity systems that are attempting to address the requirements of a huge, diverse and ultimately unknown audience. Examples include individual differences in head-related transfer functions, a lack of real interactivity (head-tracking) in many multimedia displays, and perceptual degradation due to low sampling rates and/or low-bit compression. This paper discusses some of the engineering constraints faced during implementation of virtual acoustic environments and the perceptual consequences of these constraints. Specific examples are given for NASA applications such as telerobotic control, aeronautical displays, and shuttle launch communications. An attempt will also be made to relate these issues to low-fidelity implementations such as the internet.

  2. Understanding human perception by human-made illusions

    PubMed Central

    Carbon, Claus-Christian

    2014-01-01

    It may be fun to perceive illusions, but the understanding of how they work is even more stimulating and sustainable: They can tell us where the limits and capacity of our perceptual apparatus are found—they can specify how the constraints of perception are set. Furthermore, they let us analyze the cognitive sub-processes underlying our perception. Illusions in a scientific context are not mainly created to reveal the failures of our perception or the dysfunctions of our apparatus, but instead point to the specific power of human perception. The main task of human perception is to amplify and strengthen sensory inputs to be able to perceive, orientate and act very quickly, specifically and efficiently. The present paper strengthens this line of argument, strongly put forth by perceptual pioneer Richard L. Gregory (e.g., Gregory, 2009), by discussing specific visual illusions and how they can help us to understand the magic of perception. PMID:25132816

  3. Perceptual category learning and visual processing: An exercise in computational cognitive neuroscience.

    PubMed

    Cantwell, George; Riesenhuber, Maximilian; Roeder, Jessica L; Ashby, F Gregory

    2017-05-01

    The field of computational cognitive neuroscience (CCN) builds and tests neurobiologically detailed computational models that account for both behavioral and neuroscience data. This article leverages a key advantage of CCN, namely that it should be possible to interface different CCN models in a plug-and-play fashion, to produce a new and biologically detailed model of perceptual category learning. The new model was created from two existing CCN models: the HMAX model of visual object processing and the COVIS model of category learning. Using bitmap images as inputs and by adjusting only a couple of learning-rate parameters, the new HMAX/COVIS model provides impressively good fits to human category-learning data from two qualitatively different experiments that used different types of category structures and different types of visual stimuli. Overall, the model provides a comprehensive neural and behavioral account of basal ganglia-mediated learning. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Attention flexibly trades off across points in time.

    PubMed

    Denison, Rachel N; Heeger, David J; Carrasco, Marisa

    2017-08-01

    Sensory signals continuously enter the brain, raising the question of how perceptual systems handle this constant flow of input. Attention to an anticipated point in time can prioritize visual information at that time. However, how we voluntarily attend across time when there are successive task-relevant stimuli has been barely investigated. We developed a novel experimental protocol that allowed us to assess, for the first time, both the benefits and costs of voluntary temporal attention when perceiving a short sequence of two or three visual targets with predictable timing. We found that when humans directed attention to a cued point in time, their ability to perceive orientation was better at that time but also worse earlier and later. These perceptual tradeoffs across time are analogous to those found across space for spatial attention. We concluded that voluntary attention is limited, and selective, across time.

  5. What constitutes an efficient reference frame for vision?

    PubMed Central

    Tadin, Duje; Lappin, Joseph S.; Blake, Randolph; Grossman, Emily D.

    2015-01-01

    Vision requires a reference frame. To what extent does this reference frame depend on the structure of the visual input, rather than just on retinal landmarks? This question is particularly relevant to the perception of dynamic scenes, when keeping track of external motion relative to the retina is difficult. We tested human subjects’ ability to discriminate the motion and temporal coherence of changing elements that were embedded in global patterns and whose perceptual organization was manipulated in a way that caused only minor changes to the retinal image. Coherence discriminations were always better when local elements were perceived to be organized as a global moving form than when they were perceived to be unorganized, individually moving entities. Our results indicate that perceived form influences the neural representation of its component features, and from this, we propose a new method for studying perceptual organization. PMID:12219092

  6. Neural field theory of perceptual echo and implications for estimating brain connectivity

    NASA Astrophysics Data System (ADS)

    Robinson, P. A.; Pagès, J. C.; Gabay, N. C.; Babaie, T.; Mukta, K. N.

    2018-04-01

    Neural field theory is used to predict and analyze the phenomenon of perceptual echo in which random input stimuli at one location are correlated with electroencephalographic responses at other locations. It is shown that this echo correlation (EC) yields an estimate of the transfer function from the stimulated point to other locations. Modal analysis then explains the observed spatiotemporal structure of visually driven EC and the dominance of the alpha frequency; two eigenmodes of similar amplitude dominate the response, leading to temporal beating and a line of low correlation that runs from the crown of the head toward the ears. These effects result from mode splitting and symmetry breaking caused by interhemispheric coupling and cortical folding. It is shown how eigenmodes obtained from functional magnetic resonance imaging experiments can be combined with temporal dynamics from EC or other evoked responses to estimate the spatiotemporal transfer function between any two points and hence their effective connectivity.
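
    The core idea, that correlating a random input with responses elsewhere yields an estimate of the transfer function between the two points, can be checked on a toy linear system: for unit-variance white noise, the input-output cross-correlation at lag k recovers the impulse response h[k]. This is a standard system-identification identity, sketched here with an invented impulse response rather than the authors' neural field model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
stim = rng.standard_normal(n)            # white-noise "stimulus"
h = np.array([0.0, 0.5, 0.3, 0.1])       # hypothetical impulse response
resp = np.convolve(stim, h)[:n]          # linear response at the distant site

# E[resp[t] * stim[t - k]] = h[k] for unit-variance white noise, so the
# empirical cross-correlation estimates the transfer function lag by lag.
h_est = np.array([stim[: n - k] @ resp[k:] / (n - k) for k in range(len(h))])
print(np.round(h_est, 2))                # close to [0, 0.5, 0.3, 0.1]
```

    The echo-correlation approach described in the abstract applies the same logic with the visual stimulus as input and the EEG at each electrode as output.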

  7. A neural mechanism of dynamic gating of task-relevant information by top-down influence in primary visual cortex.

    PubMed

    Kamiyama, Akikazu; Fujita, Kazuhisa; Kashimori, Yoshiki

    2016-12-01

    Visual recognition involves bidirectional information flow, which consists of bottom-up information coding from retina and top-down information coding from higher visual areas. Recent studies have demonstrated the involvement of early visual areas such as primary visual area (V1) in recognition and memory formation. V1 neurons are not passive transformers of sensory inputs but work as adaptive processors, changing their function according to behavioral context. Top-down signals affect tuning property of V1 neurons and contribute to the gating of sensory information relevant to behavior. However, little is known about the neuronal mechanism underlying the gating of task-relevant information in V1. To address this issue, we focus on task-dependent tuning modulations of V1 neurons in two tasks of perceptual learning. We develop a model of the V1, which receives feedforward input from lateral geniculate nucleus and top-down input from a higher visual area. We show here that the change in a balance between excitation and inhibition in V1 connectivity is necessary for gating task-relevant information in V1. The balance change well accounts for the modulations of tuning characteristic and temporal properties of V1 neuronal responses. We also show that the balance change of V1 connectivity is shaped by top-down signals with temporal correlations reflecting the perceptual strategies of the two tasks. We propose a learning mechanism by which synaptic balance is modulated. To conclude, the top-down signal changes the synaptic balance between excitation and inhibition in V1 connectivity, enabling early visual areas such as V1 to gate context-dependent information across multiple tasks. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Top-Down Modulation on Perceptual Decision with Balanced Inhibition through Feedforward and Feedback Inhibitory Neurons

    PubMed Central

    Wang, Cheng-Te; Lee, Chung-Ting; Wang, Xiao-Jing; Lo, Chung-Chuan

    2013-01-01

    Recent physiological studies have shown that neurons in various regions of the central nervous systems continuously receive noisy excitatory and inhibitory synaptic inputs in a balanced and covaried fashion. While this balanced synaptic input (BSI) is typically described in terms of maintaining the stability of neural circuits, a number of experimental and theoretical studies have suggested that BSI plays a proactive role in brain functions such as top-down modulation for executive control. Two issues have remained unclear in this picture. First, given the noisy nature of neuronal activities in neural circuits, how do the modulatory effects change if the top-down control implements BSI with different ratios between inhibition and excitation? Second, how is a top-down BSI realized via only excitatory long-range projections in the neocortex? To address the first issue, we systematically tested how the inhibition/excitation ratio affects the accuracy and reaction times of a spiking neural circuit model of perceptual decision. We defined an energy function to characterize the network dynamics, and found that different ratios modulate the energy function of the circuit differently and form two distinct functional modes. To address the second issue, we tested BSI with long-distance projection to inhibitory neurons that are either feedforward or feedback, depending on whether these inhibitory neurons do or do not receive inputs from local excitatory cells, respectively. We found that BSI occurs in both cases. Furthermore, when relying on feedback inhibitory neurons, through the recurrent interactions inside the circuit, BSI dynamically and automatically speeds up the decision by gradually reducing its inhibitory component in the course of a trial when a decision process takes too long. PMID:23626812
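
    The first question, how the inhibition/excitation ratio of a balanced top-down input shapes decision speed, can be caricatured with a one-dimensional accumulator. This toy model and all of its parameter values are invented for illustration and are far simpler than the spiking circuit used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def decision_steps(top_down, ratio, drift=0.02, noise=0.1, bound=1.0):
    """Steps for a noisy decision variable to reach the bound. The top-down
    control delivers balanced input: excitation `top_down` together with
    inhibition `ratio * top_down`, so only the net bias
    `top_down * (1 - ratio)` shifts the accumulation."""
    bias = top_down * (1.0 - ratio)
    x, steps = 0.0, 0
    while x < bound and steps < 100_000:
        x += drift + bias + noise * rng.standard_normal()
        steps += 1
    return steps

# Raising inhibition relative to excitation slows decisions on average
fast = np.mean([decision_steps(0.01, ratio=1.0) for _ in range(200)])
slow = np.mean([decision_steps(0.01, ratio=2.0) for _ in range(200)])
print(fast < slow)
```

    Even this caricature shows the trade-off the abstract describes: a larger inhibitory component lengthens reaction times, which is why dynamically reducing inhibition late in a trial can rescue a slow decision.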

  9. Top-down modulation on perceptual decision with balanced inhibition through feedforward and feedback inhibitory neurons.

    PubMed

    Wang, Cheng-Te; Lee, Chung-Ting; Wang, Xiao-Jing; Lo, Chung-Chuan

    2013-01-01

    Recent physiological studies have shown that neurons in various regions of the central nervous systems continuously receive noisy excitatory and inhibitory synaptic inputs in a balanced and covaried fashion. While this balanced synaptic input (BSI) is typically described in terms of maintaining the stability of neural circuits, a number of experimental and theoretical studies have suggested that BSI plays a proactive role in brain functions such as top-down modulation for executive control. Two issues have remained unclear in this picture. First, given the noisy nature of neuronal activities in neural circuits, how do the modulatory effects change if the top-down control implements BSI with different ratios between inhibition and excitation? Second, how is a top-down BSI realized via only excitatory long-range projections in the neocortex? To address the first issue, we systematically tested how the inhibition/excitation ratio affects the accuracy and reaction times of a spiking neural circuit model of perceptual decision. We defined an energy function to characterize the network dynamics, and found that different ratios modulate the energy function of the circuit differently and form two distinct functional modes. To address the second issue, we tested BSI with long-distance projection to inhibitory neurons that are either feedforward or feedback, depending on whether these inhibitory neurons do or do not receive inputs from local excitatory cells, respectively. We found that BSI occurs in both cases. Furthermore, when relying on feedback inhibitory neurons, through the recurrent interactions inside the circuit, BSI dynamically and automatically speeds up the decision by gradually reducing its inhibitory component in the course of a trial when a decision process takes too long.

  10. Noisy Spiking in Visual Area V2 of Amblyopic Monkeys.

    PubMed

    Wang, Ye; Zhang, Bin; Tao, Xiaofeng; Wensveen, Janice M; Smith, Earl L; Chino, Yuzo M

    2017-01-25

    Interocular decorrelation of input signals in developing visual cortex can cause impaired binocular vision and amblyopia. Although increased intrinsic noise is thought to be responsible for a range of perceptual deficits in amblyopic humans, the neural basis for the elevated perceptual noise in amblyopic primates is not known. Here, we tested the idea that perceptual noise is linked to the neuronal spiking noise (variability) resulting from developmental alterations in cortical circuitry. To assess spiking noise, we analyzed the contrast-dependent dynamics of spike counts and spiking irregularity by calculating the square of the coefficient of variation in interspike intervals (CV²) and the trial-to-trial fluctuations in spiking, or mean-matched Fano factor (m-FF), in visual area V2 of monkeys reared with chronic monocular defocus. In amblyopic neurons, the contrast versus response functions and the spike count dynamics exhibited significant deviations from comparable data for normal monkeys. The CV² was pronounced in amblyopic neurons for high-contrast stimuli and the m-FF was abnormally high in amblyopic neurons for low-contrast gratings. The spike count, CV², and m-FF of spontaneous activity were also elevated in amblyopic neurons. These contrast-dependent spiking irregularities were correlated with the level of binocular suppression in these V2 neurons and with the severity of perceptual loss for individual monkeys. Our results suggest that the developmental alterations in normalization mechanisms resulting from early binocular suppression can explain much of these contrast-dependent spiking abnormalities in V2 neurons and the perceptual performance of our amblyopic monkeys. Amblyopia is a common developmental vision disorder in humans. Despite the extensive animal studies on how amblyopia emerges, we know surprisingly little about the neural basis of amblyopia in humans and nonhuman primates. Although the vision of amblyopic humans is often described as being noisy by perceptual and modeling studies, the exact nature or origin of this elevated perceptual noise is not known. We show that elevated and noisy spontaneous activity and contrast-dependent noisy spiking (spiking irregularity and trial-to-trial fluctuations in spiking) in neurons of visual area V2 could limit the visual performance of amblyopic primates. Moreover, we discovered that the noisy spiking is linked to a high level of binocular suppression in visual cortex during development. Copyright © 2017 the authors.
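
    Both spiking-noise statistics named in this abstract are straightforward to compute from raw spike data; a generic sketch with the plain Fano factor, without the mean-matching step the authors apply:

```python
import numpy as np

def cv2(spike_times):
    """Squared coefficient of variation of interspike intervals,
    var(ISI) / mean(ISI)^2: 0 for perfectly regular spiking, 1 for Poisson."""
    isi = np.diff(np.sort(spike_times))
    return np.var(isi) / np.mean(isi) ** 2

def fano_factor(counts_per_trial):
    """Trial-to-trial spike-count variability, var(count) / mean(count):
    0 for identical counts, 1 for Poisson."""
    counts = np.asarray(counts_per_trial, dtype=float)
    return np.var(counts) / np.mean(counts)

print(cv2(np.arange(10.0)))        # 0.0 (perfectly regular spike train)
print(fano_factor([10, 10, 10]))   # 0.0 (no trial-to-trial fluctuation)
```

    Values above 1 on either measure indicate spiking that is noisier than a Poisson process, which is the regime the study reports for amblyopic V2 neurons.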

  11. The objects of visuospatial short-term memory: Perceptual organization and change detection.

    PubMed

    Nikolova, Atanaska; Macken, Bill

    2016-01-01

    We used a colour change-detection paradigm where participants were required to remember colours of six equally spaced circles. Items were superimposed on a background so as to perceptually group them within (a) an intact ring-shaped object, (b) a physically segmented but perceptually completed ring-shaped object, or (c) a corresponding background segmented into three arc-shaped objects. A nonpredictive cue at the location of one of the circles was followed by the memory items, which in turn were followed by a test display containing a probe indicating the circle to be judged same/different. Reaction times for correct responses revealed a same-object advantage; correct responses were faster to probes on the same object as the cue than to equidistant probes on a segmented object. This same-object advantage was identical for physically and perceptually completed objects, but was only evident in reaction times, not in accuracy measures. It is therefore important to consider object-level perceptual organization of stimulus elements when assessing the influence of a range of factors (e.g., number and complexity of elements) in visuospatial short-term memory; moreover, measuring speed as well as accuracy may reveal a more detailed picture of the structure of information in memory.

  12. Efficient Coding and Statistically Optimal Weighting of Covariance among Acoustic Attributes in Novel Sounds

    PubMed Central

    Stilp, Christian E.; Kluender, Keith R.

    2012-01-01

    To the extent that sensorineural systems are efficient, redundancy should be extracted to optimize transmission of information, but perceptual evidence for this has been limited. Stilp and colleagues recently reported efficient coding of robust correlation (r = .97) among complex acoustic attributes (attack/decay, spectral shape) in novel sounds. Discrimination of sounds orthogonal to the correlation was initially inferior but later comparable to that of sounds obeying the correlation. These effects were attenuated for less-correlated stimuli (r = .54) for reasons that are unclear. Here, statistical properties of correlation among acoustic attributes essential for perceptual organization are investigated. Overall, simple strength of the principal correlation is inadequate to predict listener performance. Initial superiority of discrimination for statistically consistent sound pairs was relatively insensitive to decreased physical acoustic/psychoacoustic range of evidence supporting the correlation, and to more frequent presentations of the same orthogonal test pairs. However, increased range supporting an orthogonal dimension has substantial effects upon perceptual organization. Connectionist simulations and eigenvalues from closed-form calculations of principal components analysis (PCA) reveal that perceptual organization is near-optimally weighted to shared versus unshared covariance in experienced sound distributions. Implications of reduced perceptual dimensionality for speech perception and plausible neural substrates are discussed. PMID:22292057
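
    The eigenvalue analysis mentioned here can be made concrete: for two unit-variance attributes correlated at r, the covariance eigenvalues are 1 + r along the shared dimension and 1 - r along the orthogonal one, so r = .97 leaves almost no orthogonal variance while r = .54 leaves a substantial amount. A minimal sketch, separate from the paper's own computations:

```python
import numpy as np

def attribute_eigenvalues(r):
    """Eigenvalues of the covariance matrix [[1, r], [r, 1]] for two
    unit-variance acoustic attributes (e.g., attack/decay and spectral
    shape) correlated at r, largest first."""
    cov = np.array([[1.0, r], [r, 1.0]])
    return np.linalg.eigvalsh(cov)[::-1]   # eigvalsh returns ascending order

for r in (0.97, 0.54):
    shared, orthogonal = attribute_eigenvalues(r)
    print(f"r={r}: shared={shared:.2f}, orthogonal={orthogonal:.2f}")
```

    For r = .97 this gives eigenvalues 1.97 and 0.03; for r = .54, 1.54 and 0.46, matching the observation that the weaker correlation leaves far more variance on the orthogonal dimension.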

  13. Pre-stimulus EEG oscillations correlate with perceptual alternation of speech forms.

    PubMed

    Barraza, Paulo; Jaume-Guazzini, Francisco; Rodríguez, Eugenio

    2016-05-27

    Speech perception is often seen as a passive process guided by physical stimulus properties. However, ongoing brain dynamics could influence the subsequent perceptual organization of speech, to an as yet unknown extent. To elucidate this issue, we analyzed EEG oscillatory activity before and immediately after the repetitive auditory presentation of words inducing the so-called verbal transformation effect (VTE), the spontaneous alternation of a word's perceived form induced by its rapid repetition. Subjects indicated whether the meaning of the bistable word changed or not. For the Reversal more than for the Stable condition, results show a pre-stimulus local alpha desynchronization (300-50 ms), followed by an early post-stimulus increase of local beta synchrony (0-80 ms), and then a late increase of local alpha synchrony (200-340 ms) and decrease of local beta synchrony (360-440 ms). Additionally, the ERPs showed that the reversal positivity (RP) and reversal negativity (RN) components, along with a late positivity complex (LPC), correlate with switching between verbal forms. Our results show how ongoing brain dynamics are actively involved in the perceptual organization of speech, destabilizing verbal perceptual states and facilitating the perceptual regrouping of the elements composing the linguistic auditory stimulus. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  14. Do you see what we see? The complex effects of perceptual distance between leaders and teams.

    PubMed

    Gibson, Cristina B; Cooper, Cecily D; Conger, Jay A

    2009-01-01

    Previous distance-related theories and concepts (e.g., social distance) have failed to address the sometimes wide disparity in perceptions between leaders and the teams they lead. Drawing from the extensive literature on teams, leadership, and cognitive models of social information processing, the authors develop the concept of leader-team perceptual distance, defined as differences between a leader and a team in perceptions of the same social stimulus. The authors investigate the effects of perceptual distance on team performance, operationalizing the construct with 3 distinct foci: goal accomplishment, constructive conflict, and decision-making autonomy. Analyzing leader, member, and customer survey responses for a large sample of teams, the authors demonstrate that perceptual distance between a leader and a team regarding goal accomplishment and constructive conflict has a nonlinear relationship with team performance. Greater perceptual differences are associated with decreases in team performance. Moreover, this effect is strongest when a team's perceptions are more positive than the leader's are (as opposed to the reverse). This pattern illustrates the pervasive effects that perceptions can have on team performance, highlighting the importance of developing awareness of perceptions in order to increase effectiveness. Implications for theory and practice are delineated. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  15. The objects of visuospatial short-term memory: Perceptual organization and change detection

    PubMed Central

    Nikolova, Atanaska; Macken, Bill

    2016-01-01

    We used a colour change-detection paradigm where participants were required to remember colours of six equally spaced circles. Items were superimposed on a background so as to perceptually group them within (a) an intact ring-shaped object, (b) a physically segmented but perceptually completed ring-shaped object, or (c) a corresponding background segmented into three arc-shaped objects. A nonpredictive cue at the location of one of the circles was followed by the memory items, which in turn were followed by a test display containing a probe indicating the circle to be judged same/different. Reaction times for correct responses revealed a same-object advantage; correct responses were faster to probes on the same object as the cue than to equidistant probes on a segmented object. This same-object advantage was identical for physically and perceptually completed objects, but was only evident in reaction times, and not in accuracy measures. Not only, therefore, is it important to consider object-level perceptual organization of stimulus elements when assessing the influence of a range of factors (e.g., number and complexity of elements) in visuospatial short-term memory, but a more detailed picture of the structure of information in memory may be revealed by measuring speed as well as accuracy. PMID:26286369

  16. Perceptual quality prediction on authentically distorted images using a bag of features approach

    PubMed Central

    Ghadiyaram, Deepti; Bovik, Alan C.

    2017-01-01

    Current top-performing blind perceptual image quality prediction models are generally trained on legacy databases of human quality opinion scores on synthetically distorted images. Therefore, they learn image features that effectively predict human visual quality judgments of inauthentic and usually isolated (single) distortions. However, real-world images usually contain complex composite mixtures of multiple distortions. We study the perceptually relevant natural scene statistics of such authentically distorted images in different color spaces and transform domains. We propose a “bag of feature maps” approach that avoids assumptions about the type of distortion(s) contained in an image and instead focuses on capturing consistencies—or departures therefrom—of the statistics of real-world images. Using a large database of authentically distorted images, human opinions of them, and bags of features computed on them, we train a regressor to conduct image quality prediction. We demonstrate the competence of the features toward improving automatic perceptual quality prediction by testing a learned algorithm using them on a benchmark legacy database as well as on a newly introduced distortion-realistic resource called the LIVE In the Wild Image Quality Challenge Database. We extensively evaluate the perceptual quality prediction model and algorithm and show that it is able to achieve good quality-prediction power that is better than other leading models. PMID:28129417
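The pipeline described above (per-image feature statistics fed to a trained regressor) can be sketched in miniature. This is not the paper's actual feature maps, database, or learner; the luminance statistics, toy noisy "images", and quality scores below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def feature_vector(img):
    """Hypothetical stand-in for perceptual feature maps: a few simple
    luminance statistics at two scales, plus a bias term."""
    coarse = img[::2, ::2]  # crude coarser scale
    return np.array([img.std(),
                     np.abs(img - img.mean()).mean(),
                     coarse.std(),
                     np.abs(coarse - coarse.mean()).mean(),
                     1.0])

def train_ridge(F, q, lam=1e-3):
    """Ridge regression mapping feature matrix F (n x d) to quality scores q."""
    d = F.shape[1]
    return np.linalg.solve(F.T @ F + lam * np.eye(d), F.T @ q)

# Toy "database": images corrupted by increasing noise, with opinion
# scores that fall as the distortion grows (both invented for the sketch).
imgs, scores = [], []
for noise in np.linspace(0.0, 1.0, 40):
    imgs.append(rng.normal(0.0, 0.1, (32, 32)) + noise * rng.normal(0.0, 1.0, (32, 32)))
    scores.append(1.0 - noise)
scores = np.array(scores)
F = np.stack([feature_vector(im) for im in imgs])
w = train_ridge(F, scores)
pred = F @ w  # predicted quality tracks the true scores on this toy set
```

The design point the sketch preserves is distortion-agnosticism: nothing in the features assumes a particular distortion type; the regressor simply learns how statistical regularities covary with opinion scores.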

  17. The effect of perceptual load on tactile spatial attention: Evidence from event-related potentials.

    PubMed

    Gherri, Elena; Berreby, Fiona

    2017-10-15

    To investigate whether tactile spatial attention is modulated by perceptual load, behavioural and electrophysiological measures were recorded during two spatial cuing tasks in which the difficulty of the target/non-target discrimination was varied (High and Low load tasks). Moreover, to study whether attentional modulations by load are sensitive to the availability of visual information, the High and Low load tasks were carried out under both illuminated and darkness conditions. ERPs to cued and uncued non-targets were compared as a function of task (High vs. Low load) and illumination condition (Light vs. Darkness). Results revealed that the locus of tactile spatial attention was determined by a complex interaction between perceptual load and illumination conditions during sensory-specific stages of processing. In the Darkness, earlier effects of attention were present in the High load than in the Low load task, while no difference between tasks emerged in the Light. By contrast, increased load was associated with stronger attention effects during later post-perceptual processing stages regardless of illumination conditions. These findings demonstrate that ERP correlates of tactile spatial attention are strongly affected by the perceptual load of the target/non-target discrimination. However, differences between illumination conditions show that the impact of load on tactile attention depends on the presence of visual information. Perceptual load is one of the many factors that contribute to determining the effects of spatial selectivity in touch. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Subjective visual perception: from local processing to emergent phenomena of brain activity.

    PubMed

    Panagiotaropoulos, Theofanis I; Kapoor, Vishal; Logothetis, Nikos K

    2014-05-05

    The combination of electrophysiological recordings with ambiguous visual stimulation made possible the detection of neurons that represent the content of subjective visual perception and perceptual suppression in multiple cortical and subcortical brain regions. These neuronal populations, commonly referred to as the neural correlates of consciousness, are more likely to be found in the temporal and prefrontal cortices as well as the pulvinar, indicating that the content of perceptual awareness is represented with higher fidelity in higher-order association areas of the cortical and thalamic hierarchy, reflecting the outcome of competitive interactions between conflicting sensory information resolved in earlier stages. However, despite the significant insights into conscious perception gained through monitoring the activities of single neurons and small, local populations, the immense functional complexity of the brain arising from correlations in the activity of its constituent parts suggests that local, microscopic activity could only partially reveal the mechanisms involved in perceptual awareness. Rather, the dynamics of functional connectivity patterns on a mesoscopic and macroscopic level could be critical for conscious perception. Understanding these emergent spatio-temporal patterns could be informative not only for the stability of subjective perception but also for spontaneous perceptual transitions suggested to depend either on the dynamics of antagonistic ensembles or on global intrinsic activity fluctuations that may act upon explicit neural representations of sensory stimuli and induce perceptual reorganization. Here, we review the most recent results from local activity recordings and discuss the potential role of effective, correlated interactions during perceptual awareness.

  19. A unified account of perceptual layering and surface appearance in terms of gamut relativity.

    PubMed

    Vladusich, Tony; McDonnell, Mark D

    2014-01-01

    When we look at the world--or a graphical depiction of the world--we perceive surface materials (e.g. a ceramic black and white checkerboard) independently of variations in illumination (e.g. shading or shadow) and atmospheric media (e.g. clouds or smoke). Such percepts are partly based on the way physical surfaces and media reflect and transmit light and partly on the way the human visual system processes the complex patterns of light reaching the eye. One way to understand how these percepts arise is to assume that the visual system parses patterns of light into layered perceptual representations of surfaces, illumination and atmospheric media, one seen through another. Despite a great deal of previous experimental and modelling work on layered representation, however, a unified computational model of key perceptual demonstrations is still lacking. Here we present the first general computational model of perceptual layering and surface appearance--based on a broader theoretical framework called gamut relativity--that is consistent with these demonstrations. The model (a) qualitatively explains striking effects of perceptual transparency, figure-ground separation and lightness, (b) quantitatively accounts for the role of stimulus- and task-driven constraints on perceptual matching performance, and (c) unifies two prominent theoretical frameworks for understanding surface appearance. The model thereby provides novel insights into the remarkable capacity of the human visual system to represent and identify surface materials, illumination and atmospheric media, which can be exploited in computer graphics applications.

  20. A Unified Account of Perceptual Layering and Surface Appearance in Terms of Gamut Relativity

    PubMed Central

    Vladusich, Tony; McDonnell, Mark D.

    2014-01-01

    When we look at the world—or a graphical depiction of the world—we perceive surface materials (e.g. a ceramic black and white checkerboard) independently of variations in illumination (e.g. shading or shadow) and atmospheric media (e.g. clouds or smoke). Such percepts are partly based on the way physical surfaces and media reflect and transmit light and partly on the way the human visual system processes the complex patterns of light reaching the eye. One way to understand how these percepts arise is to assume that the visual system parses patterns of light into layered perceptual representations of surfaces, illumination and atmospheric media, one seen through another. Despite a great deal of previous experimental and modelling work on layered representation, however, a unified computational model of key perceptual demonstrations is still lacking. Here we present the first general computational model of perceptual layering and surface appearance—based on a broader theoretical framework called gamut relativity—that is consistent with these demonstrations. The model (a) qualitatively explains striking effects of perceptual transparency, figure-ground separation and lightness, (b) quantitatively accounts for the role of stimulus- and task-driven constraints on perceptual matching performance, and (c) unifies two prominent theoretical frameworks for understanding surface appearance. The model thereby provides novel insights into the remarkable capacity of the human visual system to represent and identify surface materials, illumination and atmospheric media, which can be exploited in computer graphics applications. PMID:25402466

  1. Subjective visual perception: from local processing to emergent phenomena of brain activity

    PubMed Central

    Panagiotaropoulos, Theofanis I.; Kapoor, Vishal; Logothetis, Nikos K.

    2014-01-01

    The combination of electrophysiological recordings with ambiguous visual stimulation made possible the detection of neurons that represent the content of subjective visual perception and perceptual suppression in multiple cortical and subcortical brain regions. These neuronal populations, commonly referred to as the neural correlates of consciousness, are more likely to be found in the temporal and prefrontal cortices as well as the pulvinar, indicating that the content of perceptual awareness is represented with higher fidelity in higher-order association areas of the cortical and thalamic hierarchy, reflecting the outcome of competitive interactions between conflicting sensory information resolved in earlier stages. However, despite the significant insights into conscious perception gained through monitoring the activities of single neurons and small, local populations, the immense functional complexity of the brain arising from correlations in the activity of its constituent parts suggests that local, microscopic activity could only partially reveal the mechanisms involved in perceptual awareness. Rather, the dynamics of functional connectivity patterns on a mesoscopic and macroscopic level could be critical for conscious perception. Understanding these emergent spatio-temporal patterns could be informative not only for the stability of subjective perception but also for spontaneous perceptual transitions suggested to depend either on the dynamics of antagonistic ensembles or on global intrinsic activity fluctuations that may act upon explicit neural representations of sensory stimuli and induce perceptual reorganization. Here, we review the most recent results from local activity recordings and discuss the potential role of effective, correlated interactions during perceptual awareness. PMID:24639588

  2. ON THE PERCEPTION OF PROBABLE THINGS

    PubMed Central

    Albright, Thomas D.

    2012-01-01

    Perception is influenced both by the immediate pattern of sensory inputs and by memories acquired through prior experiences with the world. Throughout much of its illustrious history, however, study of the cellular basis of perception has focused on neuronal structures and events that underlie the detection and discrimination of sensory stimuli. Relatively little attention has been paid to the means by which memories interact with incoming sensory signals. Building upon recent neurophysiological/behavioral studies of the cortical substrates of visual associative memory, I propose a specific functional process by which stored information about the world supplements sensory inputs to yield neuronal signals that can account for visual perceptual experience. This perspective represents a significant shift in the way we think about the cellular bases of perception. PMID:22542178

  3. Adaptive History Biases Result from Confidence-Weighted Accumulation of past Choices

    PubMed Central

    2018-01-01

    Perceptual decision-making is biased by previous events, including the history of preceding choices: observers tend to repeat (or alternate) their judgments of the sensory environment more often than expected by chance. Computational models postulate that these so-called choice history biases result from the accumulation of internal decision signals across trials. Here, we provide psychophysical evidence for such a mechanism and its adaptive utility. Male and female human observers performed different variants of a challenging visual motion discrimination task near psychophysical threshold. In a first experiment, we decoupled categorical perceptual choices and motor responses on a trial-by-trial basis. Choice history bias was explained by previous perceptual choices, not motor responses, highlighting the importance of internal decision signals in action-independent formats. In a second experiment, observers performed the task in stimulus environments containing different levels of autocorrelation and providing no external feedback about choice correctness. Despite performing under overall high levels of uncertainty, observers adjusted both the strength and the sign of their choice history biases to these environments. When stimulus sequences were dominated by either repetitions or alternations, the individual degree of this adjustment of history bias was about as good a predictor of individual performance as individual perceptual sensitivity. The history bias adjustment scaled with two proxies for observers' confidence about their previous choices (accuracy and reaction time). Together, our results are consistent with the idea that action-independent, confidence-modulated decision variables are accumulated across choices in a flexible manner that depends on decision-makers' model of their environment. 
SIGNIFICANCE STATEMENT Decisions based on sensory input are often influenced by the history of one's preceding choices, manifesting as a bias to systematically repeat (or alternate) choices. We here provide support for the idea that such choice history biases arise from the context-dependent accumulation of a quantity referred to as the decision variable: the variable's sign dictates the choice and its magnitude the confidence about choice correctness. We show that choices are accumulated in an action-independent format and a context-dependent manner, weighted by the confidence about their correctness. This confidence-weighted accumulation of choices enables decision-makers to flexibly adjust their behavior to different sensory environments. The bias adjustment can be as important for optimizing performance as one's sensitivity to the momentary sensory input. PMID:29371318
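The core mechanism in the record above, leaky accumulation of signed past choices weighted by confidence, can be sketched in a toy simulation. Everything below is an illustrative assumption rather than the study's fitted model: the parameter values, the use of |decision variable| as a confidence proxy, and the two-state stimulus sequences. The sketch shows only the accumulation mechanism and how repeat rates differ across autocorrelated environments, not the observers' adaptive adjustment of bias sign:

```python
import numpy as np

rng = np.random.default_rng(2)

def repeat_rate(rep_prob, n_trials=20000, leak=0.5, gain=0.1):
    """Observer whose choice bias is a leaky accumulation of past signed
    choices, each weighted by a crude confidence proxy (|decision variable|)."""
    bias, stim, prev_choice = 0.0, 1, 1
    repeats = 0
    for _ in range(n_trials):
        if rng.random() > rep_prob:              # autocorrelated stimulus sequence
            stim = -stim
        dv = 0.5 * stim + rng.normal() + bias    # momentary evidence + history bias
        choice = 1 if dv > 0 else -1
        confidence = abs(dv)                     # confidence proxy (assumption)
        bias = leak * bias + gain * choice * confidence
        repeats += (choice == prev_choice)
        prev_choice = choice
    return repeats / n_trials

rep_env = repeat_rate(0.8)   # environment dominated by repetitions
alt_env = repeat_rate(0.2)   # environment dominated by alternations
```

Because choices track both the stimulus and the accumulated history signal, the simulated observer repeats choices well above chance in the repetitive environment and far less in the alternating one.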

  4. Adaptive History Biases Result from Confidence-weighted Accumulation of Past Choices.

    PubMed

    Braun, Anke; Urai, Anne E; Donner, Tobias H

    2018-01-25

    Perceptual decision-making is biased by previous events, including the history of preceding choices: Observers tend to repeat (or alternate) their judgments of the sensory environment more often than expected by chance. Computational models postulate that these so-called choice history biases result from the accumulation of internal decision signals across trials. Here, we provide psychophysical evidence for such a mechanism and its adaptive utility. Male and female human observers performed different variants of a challenging visual motion discrimination task near psychophysical threshold. In a first experiment, we decoupled categorical perceptual choices and motor responses on a trial-by-trial basis. Choice history bias was explained by previous perceptual choices, not motor responses, highlighting the importance of internal decision signals in action-independent formats. In a second experiment, observers performed the task in stimulus environments containing different levels of auto-correlation and providing no external feedback about choice correctness. Despite performing under overall high levels of uncertainty, observers adjusted both the strength and the sign of their choice history biases to these environments. When stimulus sequences were dominated by either repetitions or alternations, the individual degree of this adjustment of history bias was about as good a predictor of individual performance as individual perceptual sensitivity. The history bias adjustment scaled with two proxies for observers' confidence about their previous choices (accuracy and reaction time). Taken together, our results are consistent with the idea that action-independent, confidence-modulated decision variables are accumulated across choices in a flexible manner that depends on decision-makers' model of their environment. 
Significance statement: Decisions based on sensory input are often influenced by the history of one's preceding choices, manifesting as a bias to systematically repeat (or alternate) choices. We here provide support for the idea that such choice history biases arise from the context-dependent accumulation of a quantity referred to as the decision variable: the variable's sign dictates the choice and its magnitude the confidence about choice correctness. We show that choices are accumulated in an action-independent format and a context-dependent manner, weighted by the confidence about their correctness. This confidence-weighted accumulation of choices enables decision-makers to flexibly adjust their behavior to different sensory environments. The bias adjustment can be as important for optimizing performance as one's sensitivity to the momentary sensory input. Copyright © 2018 Braun et al.

  5. Enhanced Pure-Tone Pitch Discrimination among Persons with Autism but not Asperger Syndrome

    ERIC Educational Resources Information Center

    Bonnel, Anna; McAdams, Stephen; Smith, Bennett; Berthiaume, Claude; Bertone, Armando; Ciocca, Valter; Burack, Jacob A.; Mottron, Laurent

    2010-01-01

    Persons with Autism spectrum disorders (ASD) display atypical perceptual processing in visual and auditory tasks. In vision, Bertone, Mottron, Jelenic, and Faubert (2005) found that enhanced and diminished visual processing is linked to the level of neural complexity required to process stimuli, as proposed in the neural complexity hypothesis.…

  6. Influence of Stimulus Symmetry and Complexity upon Haptic Scanning Strategies During Detection, Learning and Recognition Tasks.

    ERIC Educational Resources Information Center

    Locher, Paul J.; Simmons, Roger W.

    Two experiments were conducted to investigate the perceptual processes involved in haptic exploration of randomly generated shapes. Experiment one required subjects to detect symmetrical or asymmetrical characteristics of individually presented plastic shapes, also varying in complexity. Scanning time for both symmetrical and asymmetrical shapes…

  7. Event Boundaries in Perception Affect Memory Encoding and Updating

    PubMed Central

    Swallow, Khena M.; Zacks, Jeffrey M.; Abrams, Richard A.

    2010-01-01

    Memory for naturalistic events over short delays is important for visual scene processing, reading comprehension, and social interaction. The research presented here examined relations between how an ongoing activity is perceptually segmented into events and how those events are remembered a few seconds later. In several studies participants watched movie clips that presented objects in the context of goal-directed activities. Five seconds after an object was presented, the clip paused for a recognition test. Performance on the recognition test depended on the occurrence of perceptual event boundaries. Objects that were present when an event boundary occurred were better recognized than other objects, suggesting that event boundaries structure the contents of memory. This effect was strongest when an object’s type was tested, but was also observed for objects’ perceptual features. Memory also depended on whether an event boundary occurred between presentation and test; this variable produced complex interactive effects that suggested that the contents of memory are updated at event boundaries. These data indicate that perceptual event boundaries have immediate consequences for what, when, and how easily information can be remembered. PMID:19397382

  8. Temporal factors affecting somatosensory–auditory interactions in speech processing

    PubMed Central

    Ito, Takayuki; Gracco, Vincent L.; Ostry, David J.

    2014-01-01

    Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has been shown also to influence speech perceptual processing (Ito et al., 2009). In the present study, we addressed further the relationship between somatosensory information and speech perceptual processing by addressing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory–auditory interaction in speech perception. We examined the changes in event-related potentials (ERPs) in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the ERP was reliably different from the two unisensory potentials. More importantly, the magnitude of the ERP difference varied as a function of the relative timing of the somatosensory–auditory stimulation. Event-related activity change due to stimulus timing was seen between 160 and 220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory–auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal order of sensory inputs in speech production. PMID:25452733

  9. Neural mechanisms of human perceptual choice under focused and divided attention.

    PubMed

    Wyart, Valentin; Myers, Nicholas E; Summerfield, Christopher

    2015-02-25

    Perceptual decisions occur after the evaluation and integration of momentary sensory inputs, and dividing attention between spatially disparate sources of information impairs decision performance. However, it remains unknown whether dividing attention degrades the precision of sensory signals, precludes their conversion into decision signals, or dampens the integration of decision information toward an appropriate response. Here we recorded human electroencephalographic (EEG) activity while participants categorized one of two simultaneous and independent streams of visual gratings according to their average tilt. By analyzing trial-by-trial correlations between EEG activity and the information offered by each sample, we obtained converging behavioral and neural evidence that dividing attention between left and right visual fields does not dampen the encoding of sensory or decision information. Under divided attention, momentary decision information from both visual streams was encoded in slow parietal signals without interference but was lost downstream during their integration as reflected in motor mu- and beta-band (10-30 Hz) signals, resulting in a "leaky" accumulation process that conferred greater behavioral influence to more recent samples. By contrast, sensory inputs that were explicitly cued as irrelevant were not converted into decision signals. These findings reveal that a late cognitive bottleneck on information integration limits decision performance under divided attention, and place new capacity constraints on decision-theoretic models of information integration under cognitive load. Copyright © 2015 the authors 0270-6474/15/353485-14$15.00/0.
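The "leaky" accumulation described above implies a recency-weighted psychophysical kernel: with no leak, every evidence sample influences the final choice about equally, while leak discounts early samples in favor of recent ones. A minimal simulation (the sample count, leak values, and Gaussian evidence are illustrative assumptions, not the study's stimuli or fitted parameters):

```python
import numpy as np

rng = np.random.default_rng(3)

def accumulate(samples, leak):
    """Leaky integration of a stream of evidence samples."""
    dv = 0.0
    for s in samples:
        dv = (1.0 - leak) * dv + s
    return dv

def kernel(leak, n_samples=8, n_trials=50000):
    """Psychophysical kernel: regression-style weight of each sample
    position on the final binary choice, estimated over many trials."""
    X = rng.normal(0.0, 1.0, (n_trials, n_samples))
    choices = np.where([accumulate(x, leak) > 0 for x in X], 1.0, -1.0)
    return X.T @ choices / n_trials

perfect = kernel(leak=0.0)   # lossless integration: roughly flat weights
leaky = kernel(leak=0.4)     # leaky integration: recent samples dominate
```

Plotting the two kernels side by side reproduces the qualitative signature reported behaviorally: a flat profile for lossless integration and a profile rising steeply toward the last samples under leak.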

  10. Neural mechanisms of human perceptual choice under focused and divided attention

    PubMed Central

    Wyart, Valentin; Myers, Nicholas E.; Summerfield, Christopher

    2015-01-01

    Perceptual decisions occur after evaluation and integration of momentary sensory inputs, and dividing attention between spatially disparate sources of information impairs decision performance. However, it remains unknown whether dividing attention degrades the precision of sensory signals, precludes their conversion into decision signals, or dampens the integration of decision information towards an appropriate response. Here we recorded human electroencephalographic (EEG) activity whilst participants categorised one of two simultaneous and independent streams of visual gratings according to their average tilt. By analyzing trial-by-trial correlations between EEG activity and the information offered by each sample, we obtained converging behavioural and neural evidence that dividing attention between left and right visual fields does not dampen the encoding of sensory or decision information. Under divided attention, momentary decision information from both visual streams was encoded in slow parietal signals without interference but was lost downstream during their integration as reflected in motor mu- and beta-band (10–30 Hz) signals, resulting in a ‘leaky’ accumulation process which conferred greater behavioural influence to more recent samples. By contrast, sensory inputs that were explicitly cued as irrelevant were not converted into decision signals. These findings reveal that a late cognitive bottleneck on information integration limits decision performance under divided attention, and place new capacity constraints on decision-theoretic models of information integration under cognitive load. PMID:25716848

  11. Objective instrumental memory and performance tests for evaluation of patients with brain damage: a search for a behavioral diagnostic tool.

    PubMed

    Harness, B Z; Bental, E; Carmon, A

    1976-03-01

    Cognition and performance of patients with localized and diffuse brain damage were evaluated through the application of objective perceptual testing. A series of visual perceptual and verbal tests, memory tests, as well as reaction time tasks were administered to the patients by logic programming equipment. In order to avoid a bias due to communicative disorders, all responses were motor, and achievement was scored in terms of correct identification and latencies of response. Previously established norms based on a large sample of non-brain-damaged hospitalized patients served to standardize the performance of the brain-damaged patient, since preliminary results showed that age and educational level constitute important variables affecting performance of the control group. The achievement of brain-damaged patients, corrected for these factors, was impaired significantly in all tests with respect to both recognition and speed of performance. Lateralized effects of brain damage were not significantly demonstrated. However, when the performance was analyzed with respect to the locus of visual input, it was found that patients with right hemispheric lesions showed impairment mainly on perception of figurative material, and that this deficit was more apparent in the left visual field. Conversely, patients with left hemispheric lesions tended to show impairment on perception of visually presented verbal material when the input was delivered to the right visual field.

  12. Perception of the Body in Space: Mechanisms

    NASA Technical Reports Server (NTRS)

    Young, Laurence R.

    1991-01-01

The principal topic is the perception of body orientation and motion in space and the extent to which these perceptual abstractions can be related directly to the knowledge of sensory mechanisms, particularly for the vestibular apparatus. Spatial orientation is firmly based on the underlying sensory mechanisms and their central integration. For some of the simplest situations, like rotation about a vertical axis in darkness, the dynamic response of the semicircular canals furnishes almost enough information to explain the sensations of turning and stopping. For more complex conditions involving multiple sensory systems and possible conflicts among their messages, a mechanistic explanation requires significant speculative assumptions. The models that exist for multisensory spatial orientation are still largely of the non-rational parameter variety. They are capable of predicting relationships among input motions and output perceptions of motion, but they involve computational functions that do not now, and perhaps never will, have their counterpart in central nervous system machinery. The challenge continues to be in the iterative process of testing models by experiment, correcting them where necessary, and testing them again.

  13. The effect of saccade metrics on the corollary discharge contribution to perceived eye location

    PubMed Central

    Bansal, Sonia; Jayet Bray, Laurence C.; Peterson, Matthew S.

    2015-01-01

Corollary discharge (CD) is hypothesized to provide the movement information (direction and amplitude) required to compensate for the saccade-induced disruptions to visual input. Here, we investigated to what extent these conveyed metrics influence perceptual stability in human subjects with a target-displacement detection task. Subjects made saccades to targets located at different amplitudes (4°, 6°, or 8°) and directions (horizontal or vertical). During the saccade, the target disappeared and then reappeared at a shifted location either in the same direction as or opposite to the movement vector. Subjects reported the target displacement direction, and from these reports we determined the perceptual threshold for shift detection and the estimated target location. Our results indicate that the thresholds for all amplitudes and directions generally scaled with saccade amplitude. Additionally, subjects on average produced hypometric saccades with an estimated CD gain <1. Finally, we examined the contribution of two different error signals to perceptual performance: the saccade error (movement-to-movement variability in saccade amplitude) and the visual error (distance between the fovea and the shifted target location). Perceptual judgment was not influenced by fluctuations in movement amplitude, and performance was largely the same across movement directions for different magnitudes of visual error. Importantly, subjects reported the correct direction of target displacement above chance level for very small visual errors (<0.75°), even when these errors were opposite to the target-shift direction. Collectively, these results suggest that the CD-based compensatory mechanisms for visual disruptions are highly accurate and comparable for saccades with different metrics. PMID:25761955

  14. Shared sensory estimates for human motion perception and pursuit eye movements.

    PubMed

    Mukherjee, Trishna; Battifarano, Matthew; Simoncini, Claudio; Osborne, Leslie C

    2015-06-03

Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways or is centrally represented sensory activity decoded independently to drive awareness and action? Questions about the brain's information flow pose a challenge because systems-level estimates of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual motion and provides the visual inputs for behaviors such as smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well studied, sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion. Here, we analyzed variability in visually driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulated the signal-to-noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise sources in the perception and action pathways arising from a common sensory estimate. We found that conditions that create poor, low-gain pursuit create a discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile much of the controversy on this topic. Copyright © 2015 the authors.

  16. Processing Mechanisms in Hearing-Impaired Listeners: Evidence from Reaction Times and Sentence Interpretation.

    PubMed

    Carroll, Rebecca; Uslar, Verena; Brand, Thomas; Ruigendijk, Esther

    The authors aimed to determine whether hearing impairment affects sentence comprehension beyond phoneme or word recognition (i.e., on the sentence level), and to distinguish grammatically induced processing difficulties in structurally complex sentences from perceptual difficulties associated with listening to degraded speech. Effects of hearing impairment or speech in noise were expected to reflect hearer-specific speech recognition difficulties. Any additional processing time caused by the sustained perceptual challenges across the sentence may either be independent of or interact with top-down processing mechanisms associated with grammatical sentence structure. Forty-nine participants listened to canonical subject-initial or noncanonical object-initial sentences that were presented either in quiet or in noise. Twenty-four participants had mild-to-moderate hearing impairment and received hearing-loss-specific amplification. Twenty-five participants were age-matched peers with normal hearing status. Reaction times were measured on-line at syntactically critical processing points as well as two control points to capture differences in processing mechanisms. An off-line comprehension task served as an additional indicator of sentence (mis)interpretation, and enforced syntactic processing. The authors found general effects of hearing impairment and speech in noise that negatively affected perceptual processing, and an effect of word order, where complex grammar locally caused processing difficulties for the noncanonical sentence structure. Listeners with hearing impairment were hardly affected by noise at the beginning of the sentence, but were affected markedly toward the end of the sentence, indicating a sustained perceptual effect of speech recognition. Comprehension of sentences with noncanonical word order was negatively affected by degraded signals even after sentence presentation. 
Hearing impairment adds perceptual processing load during sentence processing, but affects grammatical processing beyond the word level to the same degree as in normal hearing, with minor differences in processing mechanisms. The data contribute to our understanding of individual differences in speech perception and language understanding. The authors interpret their results within the ease of language understanding model.

  17. Perceptual advertisement by the prey of stalking or ambushing predators.

    PubMed

    Broom, Mark; Ruxton, Graeme D

    2012-12-21

There have been previous theoretical explorations of the stability of signals by prey indicating that they have detected a stalking or ambush predator, where such perceptual advertisement dissuades the predator from attacking. Here we use a game-theoretical model to extend the theory to consider some empirically motivated complexities: (i) many perceptual advertisement signals appear to have the potential to vary in intensity, (ii) higher-intensity signals are likely to be more costly to produce, and (iii) some high-cost signals (such as staring directly at the predator) can only be utilised if the prey is very confident of the existence of a nearby predator (that is, there are reserved or unfakable signals). We demonstrate that these complexities still allow for stable signalling. However, we do not find solutions where prey use a range of signal intensities to signal different degrees of confidence in the proximity of a predator; instead, prey simply adopt a binary response of not signalling or always signalling at the same fixed level. However, this fixed level will not always be the cheapest possible signal, and we predict that prey that require more certainty about the proximity of a predator will use higher-cost signals. The availability of reserved signals does not prohibit the stability of signalling based on lower-cost signals, but we also find circumstances where only the reserved signal is used. We discuss the potential to empirically test our model predictions, and to develop the theory further to allow perceptual advertisement to be combined with other signalling functions. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Low-Frequency Cortical Oscillations Entrain to Subthreshold Rhythmic Auditory Stimuli

    PubMed Central

    Schroeder, Charles E.; Poeppel, David; van Atteveldt, Nienke

    2017-01-01

    Many environmental stimuli contain temporal regularities, a feature that can help predict forthcoming input. Phase locking (entrainment) of ongoing low-frequency neuronal oscillations to rhythmic stimuli is proposed as a potential mechanism for enhancing neuronal responses and perceptual sensitivity, by aligning high-excitability phases to events within a stimulus stream. Previous experiments show that rhythmic structure has a behavioral benefit even when the rhythm itself is below perceptual detection thresholds (ten Oever et al., 2014). It is not known whether this “inaudible” rhythmic sound stream also induces entrainment. Here we tested this hypothesis using magnetoencephalography and electrocorticography in humans to record changes in neuronal activity as subthreshold rhythmic stimuli gradually became audible. We found that significant phase locking to the rhythmic sounds preceded participants' detection of them. Moreover, no significant auditory-evoked responses accompanied this prethreshold entrainment. These auditory-evoked responses, distinguished by robust, broad-band increases in intertrial coherence, only appeared after sounds were reported as audible. Taken together with the reduced perceptual thresholds observed for rhythmic sequences, these findings support the proposition that entrainment of low-frequency oscillations serves a mechanistic role in enhancing perceptual sensitivity for temporally predictive sounds. This framework has broad implications for understanding the neural mechanisms involved in generating temporal predictions and their relevance for perception, attention, and awareness. SIGNIFICANCE STATEMENT The environment is full of rhythmically structured signals that the nervous system can exploit for information processing. Thus, it is important to understand how the brain processes such temporally structured, regular features of external stimuli. 
Here we report the alignment of slowly fluctuating oscillatory brain activity to external rhythmic structure before its behavioral detection. These results indicate that phase alignment is a general mechanism of the brain to process rhythmic structure and can occur without the perceptual detection of this temporal structure. PMID:28411273
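The phase-locking and intertrial-coherence measures referred to above have a standard form: the length of the mean resultant vector of per-trial phases at a given time and frequency. A minimal sketch (the function name is ours, not from the study; in practice the per-trial phases would be extracted with a wavelet or Hilbert transform at the stimulus rhythm's frequency):

```python
import numpy as np

def intertrial_coherence(phases):
    """Length of the mean resultant vector of per-trial phases:
    1.0 when every trial has the same phase (perfect entrainment),
    near 0.0 when phases are uniformly scattered."""
    phases = np.asarray(phases, dtype=float)
    return float(np.abs(np.mean(np.exp(1j * phases))))

# Perfectly aligned trials vs. phases spread evenly around the circle:
locked = intertrial_coherence(np.zeros(50))
scattered = intertrial_coherence(np.linspace(0, 2 * np.pi, 50, endpoint=False))
```

The "broad-band increases in intertrial coherence" reported for audible sounds correspond to this quantity rising across many frequencies at once, whereas entrainment raises it only near the stimulus rhythm.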

  19. The Interaction of Ambient Frequency and Feature Complexity in the Diphthong Errors of Children with Phonological Disorders.

    ERIC Educational Resources Information Center

    Stokes, Stephanie F.; Lau, Jessica Tse-Kay; Ciocca, Valter

    2002-01-01

    This study examined the interaction of ambient frequency and feature complexity in the diphthong errors produced by 13 Cantonese-speaking children with phonological disorders. Perceptual analysis of 611 diphthongs identified those most frequently and least frequently in error. Suggested treatment guidelines include consideration of three factors:…

  20. The Development of Mental Models for Auditory Events: Relational Complexity and Discrimination of Pitch and Duration

    ERIC Educational Resources Information Center

    Stevens, Catherine; Gallagher, Melinda

    2004-01-01

    This experiment investigated relational complexity and relational shift in judgments of auditory patterns. Pitch and duration values were used to construct two-note perceptually similar sequences (unary relations) and four-note relationally similar sequences (binary relations). It was hypothesized that 5-, 8- and 11-year-old children would perform…

  1. The stream of experience when watching artistic movies. Dynamic aesthetic effects revealed by the Continuous Evaluation Procedure (CEP).

    PubMed

    Muth, Claudia; Raab, Marius H; Carbon, Claus-Christian

    2015-01-01

Research in perception and appreciation is often focused on snapshots, stills of experience. Static approaches allow for multidimensional assessment, but are unable to capture the crucial dynamics of affective and perceptual processes; for instance, aesthetic phenomena such as the "Aesthetic Aha" (the increase in liking after the sudden detection of Gestalt), effects of expectation, or Berlyne's idea that "disorientation" with a "promise of success" elicits interest. We conducted empirical studies on indeterminate artistic movies depicting the evolution and metamorphosis of Gestalt and investigated (i) the effects of sudden perceptual insights on liking, that is, "Aesthetic Aha" effects, (ii) the dynamics of interest before moments of insight, and (iii) the dynamics of complexity before and after moments of insight. Via the so-called Continuous Evaluation Procedure (CEP), which enables analogue evaluation in a continuous way, participants assessed the material on two aesthetic dimensions blockwise, either in a gallery or a laboratory. The material's inherent dynamics were described via assessments of liking, interest, determinacy, and surprise, along with a computational analysis of the variable complexity. We identified moments of insight as peaks in determinacy and surprise. Statistically significant changes in liking and interest demonstrated that: (i) insights increase liking, (ii) interest already increases 1500 ms before such moments of insight, supporting the idea that it is evoked by an expectation of understanding, and (iii) insights occur during increasing complexity. We propose a preliminary model of the dynamics of liking and interest with regard to complexity and perceptual insight, and discuss descriptions of participants' experiences of insight. Our results point to the importance of systematic analyses of dynamics in art perception and appreciation.

  2. Let's Use Cognitive Science to Create Collaborative Workstations.

    PubMed

    Reicher, Murray A; Wolfe, Jeremy M

    2016-05-01

    When informed by an understanding of cognitive science, radiologists' workstations could become collaborative to improve radiologists' performance and job satisfaction. The authors review relevant literature and present several promising areas of research, including image toggling, eye tracking, cognitive computing, intelligently restricted messaging, work habit tracking, and innovative input devices. The authors call for more research in "perceptual design," a promising field that can complement advances in computer-aided detection. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  3. Predicting perceptual quality of images in realistic scenario using deep filter banks

    NASA Astrophysics Data System (ADS)

    Zhang, Weixia; Yan, Jia; Hu, Shiyong; Ma, Yang; Deng, Dexiang

    2018-03-01

Classical image perceptual quality assessment models usually resort to natural scene statistics methods, which are based on the assumption that certain reliable statistical regularities hold for undistorted images and are corrupted by introduced distortions. However, these models often fail to accurately predict the degradation severity of images in realistic scenarios, since complex, multiple, and interacting authentic distortions typically appear in them. We propose a quality prediction model based on a convolutional neural network. Quality-aware features extracted from the filter banks of multiple convolutional layers are aggregated into the image representation. Furthermore, an easy-to-implement and effective feature selection strategy is used to further refine the image representation, and finally a linear support vector regression model is trained to map the image representation onto images' subjective perceptual quality scores. The experimental results on benchmark databases demonstrate the effectiveness and generalizability of the proposed model.
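The pipeline this abstract describes (aggregate multi-layer features, select a subset, fit a linear regressor to subjective scores) can be sketched with placeholder data. Everything below is illustrative: the features are random stand-ins for pooled filter-bank activations, correlation-based selection is just one simple strategy (the paper's exact method is not specified here), and ordinary least squares substitutes, as a dependency-free stand-in, for the paper's linear support vector regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "deep" features: one pooled activation vector per image.
n_images, n_features = 200, 32
X = rng.normal(size=(n_images, n_features))

# Placeholder subjective quality scores (linear in the features + noise).
w_true = rng.normal(size=n_features)
y = X @ w_true + 0.1 * rng.normal(size=n_images)

# Feature selection: keep the half of the features most correlated with y.
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)])
keep = np.argsort(corr)[-n_features // 2:]

# Final linear regressor (least squares here; the paper trains a linear SVR).
w, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
pred = X[:, keep] @ w
```

Predicted scores track the subjective scores only as far as the selected features retain the quality-relevant signal, which is why the refinement step matters.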

  4. Perceptual Considerations in Icon Design for Instructional Communication.

    ERIC Educational Resources Information Center

    Lee, Shih-Chung

    1996-01-01

    Discusses the use of icons in computer interface design. Highlights include picture processing time, complexity, recognition memory, differences between picture icons and picture/text icons, the use of color, size, placement, and touch design. (LRW)

  5. Tone Language Speakers and Musicians Share Enhanced Perceptual and Cognitive Abilities for Musical Pitch: Evidence for Bidirectionality between the Domains of Language and Music

    PubMed Central

    Bidelman, Gavin M.; Hutka, Stefanie; Moreno, Sylvain

    2013-01-01

Psychophysiological evidence suggests that music and language are intimately coupled such that experience/training in one domain can influence processing required in the other domain. While the influence of music on language processing is now well-documented, evidence of language-to-music effects has yet to be firmly established. Here, using a cross-sectional design, we compared the performance of musicians to that of tone-language (Cantonese) speakers on tasks of auditory pitch acuity, music perception, and general cognitive ability (e.g., fluid intelligence, working memory). While musicians demonstrated superior performance on all auditory measures, comparable perceptual enhancements were observed for Cantonese participants, relative to English-speaking nonmusicians. These results provide evidence that tone-language background is associated with higher auditory perceptual performance for music listening. Musicians and Cantonese speakers also showed superior working memory capacity relative to nonmusician controls, suggesting that in addition to basic perceptual enhancements, tone-language background and music training might also be associated with enhanced general cognitive abilities. Our findings support the notion that tone language speakers and musically trained individuals have higher performance than English-speaking listeners for the perceptual-cognitive processing necessary for basic auditory as well as complex music perception. These results illustrate bidirectional influences between the domains of music and language. PMID:23565267

  6. Word recognition and phonetic structure acquisition: Possible relations

    NASA Astrophysics Data System (ADS)

    Morgan, James

    2002-05-01

    Several accounts of possible relations between the emergence of the mental lexicon and acquisition of native language phonological structure have been propounded. In one view, acquisition of word meanings guides infants' attention toward those contrasts that are linguistically significant in their language. In the opposing view, native language phonological categories may be acquired from statistical patterns of input speech, prior to and independent of learning at the lexical level. Here, a more interactive account will be presented, in which phonological structure is modeled as emerging consequentially from the self-organization of perceptual space underlying word recognition. A key prediction of this model is that early native language phonological categories will be highly context specific. Data bearing on this prediction will be presented which provide clues to the nature of infants' statistical analysis of input.

  7. Optimal nonlinear codes for the perception of natural colours.

    PubMed

    von der Twer, T; MacLeod, D I

    2001-08-01

We discuss how visual nonlinearity can be optimized for the precise representation of environmental inputs. Such optimization leads to neural signals with a compressively nonlinear input-output function, the gradient of which is matched to the cube root of the probability density function (PDF) of the environmental input values (and not to the PDF directly, as in histogram equalization). Comparisons between theory and psychophysical and electrophysiological data are roughly consistent with the idea that parvocellular (P) cells are optimized for the precise representation of colour: their contrast-response functions span a range appropriately matched to the environmental distribution of natural colours along each dimension of colour space. Thus P cell codes for colour may have been selected to minimize error in the perceptual estimation of stimulus parameters for natural colours. But magnocellular (M) cells have a much stronger than expected saturating nonlinearity; this supports the view that the function of M cells is mainly to detect boundaries rather than to specify contrast or lightness.
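The optimization result quoted above has a simple constructive form: the response function's slope is set proportional to p(x)^(1/3), so the response itself is the normalized integral of the cube root of the input PDF (whereas histogram equalization integrates the PDF itself). A sketch for a Gaussian input distribution (the Gaussian is our illustrative choice, not the paper's measured colour statistics):

```python
import numpy as np

# Illustrative Gaussian distribution of an environmental input variable.
x = np.linspace(-4.0, 4.0, 2001)
pdf = np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)

def response_from_slope(slope):
    """Integrate a slope profile into a normalized (0..1) response function."""
    f = np.cumsum(slope)  # cumulative sum approximates the integral
    f -= f[0]
    return f / f[-1]

# Histogram equalization: slope proportional to the PDF itself.
f_histeq = response_from_slope(pdf)
# Error-minimizing code: slope proportional to the cube root of the PDF.
f_cuberoot = response_from_slope(pdf ** (1.0 / 3.0))
```

The cube-root code is less steeply compressive than histogram equalization: it reserves more response range for less probable but still plausible inputs, which is what minimizes the average error of perceptual estimates.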

  8. Suppressive and enhancing effects in early visual cortex during illusory shape perception: A comment on.

    PubMed

    Moors, Pieter

    2015-01-01

    In a recent functional magnetic resonance imaging study, Kok and de Lange (2014) observed that BOLD activity for a Kanizsa illusory shape stimulus, in which pacmen-like inducers elicit an illusory shape percept, was either enhanced or suppressed relative to a nonillusory control configuration depending on whether the spatial profile of BOLD activity in early visual cortex was related to the illusory shape or the inducers, respectively. The authors argued that these findings fit well with the predictive coding framework, because top-down predictions related to the illusory shape are not met with bottom-up sensory input and hence the feedforward error signal is enhanced. Conversely, for the inducing elements, there is a match between top-down predictions and input, leading to a decrease in error. Rather than invoking predictive coding as the explanatory framework, the suppressive effect related to the inducers might be caused by neural adaptation to perceptually stable input due to the trial sequence used in the experiment.

  9. Multisensory temporal function and EEG complexity in patients with epilepsy and psychogenic nonepileptic events.

    PubMed

    Noel, Jean-Paul; Kurela, LeAnne; Baum, Sarah H; Yu, Hong; Neimat, Joseph S; Gallagher, Martin J; Wallace, Mark

    2017-05-01

Cognitive and perceptual comorbidities frequently accompany epilepsy and psychogenic nonepileptic events (PNEE). However, despite the fact that perceptual function is built upon a multisensory foundation, little is known about multisensory function in these populations. Here, we characterized facets of multisensory processing abilities in patients with epilepsy and PNEE, and probed the relationship between individual resting-state EEG complexity and these psychophysical measures in each patient. We prospectively studied a cohort of patients with epilepsy (N=18) and patients with PNEE (N=20) who were admitted to Vanderbilt's Epilepsy Monitoring Unit (EMU) and weaned off anticonvulsant drugs. Unaffected age-matched persons staying with the patients in the EMU (N=15) were also recruited as controls. All participants performed two tests of multisensory function: an audio-visual simultaneity judgment and an audio-visual redundant target task. Further, in the cohorts of patients with epilepsy and PNEE we quantified resting-state EEG gamma power and complexity. Compared with both patients with epilepsy and control subjects, patients with PNEE exhibited significantly poorer acuity in audiovisual temporal function, as evidenced by significantly larger temporal binding windows (i.e., they perceived larger stimulus asynchronies as being presented simultaneously). These differences appeared to be specific to temporal function, as there was no difference among the three groups on a non-temporally based measure of multisensory function, the redundant target task. Further, patients with PNEE exhibited more complex resting-state EEG patterns as compared to patients with epilepsy, and EEG complexity correlated with multisensory temporal performance on a subject-by-subject basis. 
Taken together, findings seem to indicate that patients with PNEE bind information from audition and vision over larger temporal intervals when compared with control subjects as well as patients with epilepsy. This difference in multisensory function appears to be specific to the temporal domain, and may be a contributing factor to the behavioral and perceptual alterations seen in this population. Published by Elsevier Inc.

  10. Task-dependent recurrent dynamics in visual cortex

    PubMed Central

    Tajima, Satohiro; Koida, Kowa; Tajima, Chihiro I; Suzuki, Hideyuki; Aihara, Kazuyuki; Komatsu, Hidehiko

    2017-01-01

    The capacity for flexible sensory-action association in animals has been related to context-dependent attractor dynamics outside the sensory cortices. Here, we report a line of evidence that flexibly modulated attractor dynamics during task switching are already present in the higher visual cortex in macaque monkeys. With a nonlinear decoding approach, we can extract the particular aspect of the neural population response that reflects the task-induced emergence of bistable attractor dynamics in a neural population, which could be obscured by standard unsupervised dimensionality reductions such as PCA. The dynamical modulation selectively increases the information relevant to task demands, indicating that such modulation is beneficial for perceptual decisions. A computational model that features nonlinear recurrent interaction among neurons with a task-dependent background input replicates the key properties observed in the experimental data. These results suggest that the context-dependent attractor dynamics involving the sensory cortex can underlie flexible perceptual abilities. DOI: http://dx.doi.org/10.7554/eLife.26868.001 PMID:28737487

  11. Enhanced attention amplifies face adaptation.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Evangelista, Emma; Ewing, Louise; Peters, Marianne; Taylor, Libby

    2011-08-15

    Perceptual adaptation not only produces striking perceptual aftereffects, but also enhances coding efficiency and discrimination by calibrating coding mechanisms to prevailing inputs. Attention to simple stimuli increases adaptation, potentially enhancing its functional benefits. Here we show that attention also increases adaptation to faces. In Experiment 1, face identity aftereffects increased when attention to adapting faces was increased using a change detection task. In Experiment 2, figural (distortion) face aftereffects increased when attention was increased using a snap game (detecting immediate repeats) during adaptation. Both were large effects. Contributions of low-level adaptation were reduced using free viewing (both experiments) and a size change between adapt and test faces (Experiment 2). We suggest that attention may enhance adaptation throughout the entire cortical visual pathway, with functional benefits well beyond the immediate advantages of selective processing of potentially important stimuli. These results highlight the potential to facilitate adaptive updating of face-coding mechanisms by strategic deployment of attentional resources. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Unexpected arousal modulates the influence of sensory noise on confidence

    PubMed Central

    Allen, Micah; Frank, Darya; Schwarzkopf, D Samuel; Fardo, Francesca; Winston, Joel S; Hauser, Tobias U; Rees, Geraint

    2016-01-01

    Human perception is invariably accompanied by a graded feeling of confidence that guides metacognitive awareness and decision-making. It is often assumed that this arises solely from the feed-forward encoding of the strength or precision of sensory inputs. In contrast, interoceptive inference models suggest that confidence reflects a weighted integration of sensory precision and expectations about internal states, such as arousal. Here we test this hypothesis using a novel psychophysical paradigm, in which unseen disgust-cues induced unexpected, unconscious arousal just before participants discriminated motion signals of variable precision. Across measures of perceptual bias, uncertainty, and physiological arousal we found that arousing disgust cues modulated the encoding of sensory noise. Furthermore, the degree to which trial-by-trial pupil fluctuations encoded this nonlinear interaction correlated with trial level confidence. Our results suggest that unexpected arousal regulates perceptual precision, such that subjective confidence reflects the integration of both external sensory and internal, embodied states. DOI: http://dx.doi.org/10.7554/eLife.18103.001 PMID:27776633

  13. Modeling the Effects of Perceptual Load: Saliency, Competitive Interactions, and Top-Down Biases.

    PubMed

    Neokleous, Kleanthis; Shimi, Andria; Avraamides, Marios N

    2016-01-01

    A computational model of visual selective attention has been implemented to account for experimental findings on the Perceptual Load Theory (PLT) of attention. The model was designed based on existing neurophysiological findings on attentional processes with the objective to offer an explicit and biologically plausible formulation of PLT. Simulation results verified that the proposed model is capable of capturing the basic pattern of results that support the PLT as well as findings that are considered contradictory to the theory. Importantly, the model is able to reproduce the behavioral results from a dilution experiment, providing thus a way to reconcile PLT with the competing Dilution account. Overall, the model presents a novel account for explaining PLT effects on the basis of the low-level competitive interactions among neurons that represent visual input and the top-down signals that modulate neural activity. The implications of the model concerning the debate on the locus of selective attention as well as the origins of distractor interference in visual displays of varying load are discussed.

  14. Amplitude-modulation detection by gerbils in reverberant sound fields.

    PubMed

    Lingner, Andrea; Kugler, Kathrin; Grothe, Benedikt; Wiegrebe, Lutz

    2013-08-01

    Reverberation can dramatically reduce the depth of amplitude modulations which are critical for speech intelligibility. Psychophysical experiments indicate that humans' sensitivity to amplitude modulation in reverberation is better than predicted from the acoustic modulation depth at the receiver position. Electrophysiological studies on reverberation in rabbits highlight the contribution of neurons sensitive to interaural correlation. Here, we use a prepulse-inhibition paradigm to quantify the gerbils' amplitude modulation threshold in both anechoic and reverberant virtual environments. Data show that prepulse inhibition provides a reliable method for determining the gerbils' AM sensitivity. However, we find no evidence for perceptual restoration of amplitude modulation in reverberation. Instead, the deterioration of AM sensitivity in reverberant conditions can be quantitatively explained by the reduced modulation depth at the receiver position. We suggest that the lack of perceptual restoration is related to physical properties of the gerbil's ear input signals and inner-ear processing as opposed to shortcomings of their binaural neural processing. Copyright © 2013 Elsevier B.V. All rights reserved.
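The acoustic effect the authors appeal to can be illustrated numerically: smearing an amplitude-modulated envelope with a room's energy decay reduces its modulation depth m = (max - min)/(max + min) at the receiver. The sketch below is a deliberately crude simulation; all parameter values (the 20-Hz modulation rate, the RT60 of 0.5 s, the exponential-decay room model) are illustrative assumptions, not values from the study:

```python
import numpy as np

fs = 1000.0                      # envelope sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)

def mod_depth(env):
    """Modulation depth m = (max - min) / (max + min) of an envelope."""
    return (env.max() - env.min()) / (env.max() + env.min())

# Dry envelope: 20-Hz sinusoidal amplitude modulation at full depth (m close to 1).
fm = 20.0
dry = 1.0 + np.sin(2 * np.pi * fm * t)

# Crude room model: the reverberant envelope is the dry envelope smeared by
# an exponential energy decay (RT60 = 0.5 s, truncated at 0.3 s).
rt60 = 0.5
t_d = np.arange(0, 0.3, 1 / fs)
decay = np.exp(-6.9 * t_d / rt60)
decay /= decay.sum()             # normalize so overall level is preserved
wet = np.convolve(dry, decay, mode="full")[:len(dry)]

# Compare depths on a segment where the reverberant tail has fully built up.
m_dry = mod_depth(dry[300:900])
m_wet = mod_depth(wet[300:900])
```

With these parameters the 20-Hz modulation survives at only a fraction of its original depth; a receiver-side reduction of this kind is what the authors argue fully accounts for the gerbils' elevated AM thresholds in reverberation.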

  15. A social Bouba/Kiki effect: A bias for people whose names match their faces.

    PubMed

    Barton, David N; Halberstadt, Jamin

    2018-06-01

    The "bouba/kiki effect" is the robust tendency to associate rounded objects (vs. angular objects) with names that require rounding of the mouth to pronounce, and may reflect synesthesia-like mapping across perceptual modalities. Here we show for the first time a "social" bouba/kiki effect, such that experimental participants associate round names ("Bob," "Lou") with round-faced (vs. angular-faced) individuals. Moreover, consistent with a bias for expectancy-consistent information, we find that participants like targets with "matching" names, both when name-face fit is measured and when it is experimentally manipulated. Finally, we show that such bias could have important practical consequences: An analysis of voting data reveals that Senatorial candidates earn 10% more votes when their names fit their faces very well, versus very poorly. These and similar cross-modal congruencies suggest that social judgment involves not only amodal application of stored information (e.g., stereotypes) to new stimuli, but also integration of perceptual and bodily input.

  16. Adult Visual Cortical Plasticity

    PubMed Central

    Gilbert, Charles D.; Li, Wu

    2012-01-01

    The visual cortex retains a capacity for experience-dependent change, or cortical plasticity, throughout life. Plasticity is invoked for encoding information during perceptual learning, by internally representing the regularities of the visual environment, which is useful for facilitating intermediate-level vision: contour integration and surface segmentation. The same mechanisms have adaptive value for functional recovery after CNS damage, such as that associated with stroke or neurodegenerative disease. A common feature of plasticity in primary visual cortex (V1) is an association field that links contour elements across the visual field. The circuitry underlying the association field includes a plexus of long-range horizontal connections formed by cortical pyramidal cells. These connections undergo rapid and exuberant sprouting and pruning in response to removal of sensory input, which can account for the topographic reorganization following retinal lesions. Similar alterations in cortical circuitry may be involved in perceptual learning, and the changes observed in V1 may be representative of how learned information is encoded throughout the cerebral cortex. PMID:22841310

  17. Motion coherence affects human perception and pursuit similarly.

    PubMed

    Beutter, B R; Stone, L S

    2000-01-01

    Pursuit and perception both require accurate information about the motion of objects. Recovering the motion of objects by integrating the motion of their components is a difficult visual task. Successful integration produces coherent global object motion, while a failure to integrate leaves the incoherent local motions of the components unlinked. We compared the ability of perception and pursuit to perform motion integration by measuring direction judgments and the concomitant eye-movement responses to line-figure parallelograms moving behind stationary rectangular apertures. The apertures were constructed such that only the line segments corresponding to the parallelogram's sides were visible; thus, recovering global motion required the integration of the local segment motion. We investigated several potential motion-integration rules by using stimuli with different object, vector-average, and line-segment terminator-motion directions. We used an oculometric decision rule to directly compare direction discrimination for pursuit and perception. For visible apertures, the percept was a coherent object, and both the pursuit and perceptual performance were close to the object-motion prediction. For invisible apertures, the percept was incoherently moving segments, and both the pursuit and perceptual performance were close to the terminator-motion prediction. Furthermore, both psychometric and oculometric direction thresholds were much higher for invisible apertures than for visible apertures. We constructed a model in which both perception and pursuit are driven by a shared motion-processing stage, with perception having an additional input from an independent static-processing stage. Model simulations were consistent with our perceptual and oculomotor data. Based on these results, we propose the use of pursuit as an objective and continuous measure of perceptual coherence. Our results support the view that pursuit and perception share a common motion-integration stage, perhaps within areas MT or MST.

  18. Tracking neural coding of perceptual and semantic features of concrete nouns

    PubMed Central

    Sudre, Gustavo; Pomerleau, Dean; Palatucci, Mark; Wehbe, Leila; Fyshe, Alona; Salmelin, Riitta; Mitchell, Tom

    2015-01-01

    We present a methodological approach employing magnetoencephalography (MEG) and machine learning techniques to investigate the flow of perceptual and semantic information decodable from neural activity in the half second during which the brain comprehends the meaning of a concrete noun. Important information about the cortical location of neural activity related to the representation of nouns in the human brain has been revealed by past studies using fMRI. However, the temporal sequence of processing from sensory input to concept comprehension remains unclear, in part because of the poor time resolution provided by fMRI. In this study, subjects answered 20 questions (e.g. is it alive?) about the properties of 60 different nouns prompted by simultaneous presentation of a pictured item and its written name. Our results show that the neural activity observed with MEG encodes a variety of perceptual and semantic features of stimuli at different times relative to stimulus onset, and in different cortical locations. By decoding these features, our MEG-based classifier was able to reliably distinguish between two different concrete nouns that it had never seen before. The results demonstrate that there are clear differences between the time course of the magnitude of MEG activity and that of decodable semantic information. Perceptual features were decoded from MEG activity earlier in time than semantic features, and features related to animacy, size, and manipulability were decoded consistently across subjects. We also observed that regions commonly associated with semantic processing in the fMRI literature may not show high decoding results in MEG. We believe that this type of approach and the accompanying machine learning methods can form the basis for further modeling of the flow of neural information during language processing and a variety of other cognitive processes. PMID:22565201

  19. Motion coherence affects human perception and pursuit similarly

    NASA Technical Reports Server (NTRS)

    Beutter, B. R.; Stone, L. S.

    2000-01-01

    Pursuit and perception both require accurate information about the motion of objects. Recovering the motion of objects by integrating the motion of their components is a difficult visual task. Successful integration produces coherent global object motion, while a failure to integrate leaves the incoherent local motions of the components unlinked. We compared the ability of perception and pursuit to perform motion integration by measuring direction judgments and the concomitant eye-movement responses to line-figure parallelograms moving behind stationary rectangular apertures. The apertures were constructed such that only the line segments corresponding to the parallelogram's sides were visible; thus, recovering global motion required the integration of the local segment motion. We investigated several potential motion-integration rules by using stimuli with different object, vector-average, and line-segment terminator-motion directions. We used an oculometric decision rule to directly compare direction discrimination for pursuit and perception. For visible apertures, the percept was a coherent object, and both the pursuit and perceptual performance were close to the object-motion prediction. For invisible apertures, the percept was incoherently moving segments, and both the pursuit and perceptual performance were close to the terminator-motion prediction. Furthermore, both psychometric and oculometric direction thresholds were much higher for invisible apertures than for visible apertures. We constructed a model in which both perception and pursuit are driven by a shared motion-processing stage, with perception having an additional input from an independent static-processing stage. Model simulations were consistent with our perceptual and oculomotor data. Based on these results, we propose the use of pursuit as an objective and continuous measure of perceptual coherence. Our results support the view that pursuit and perception share a common motion-integration stage, perhaps within areas MT or MST.

  20. Is the Web as good as the lab? Comparable performance from Web and lab in cognitive/perceptual experiments.

    PubMed

    Germine, Laura; Nakayama, Ken; Duchaine, Bradley C; Chabris, Christopher F; Chatterjee, Garga; Wilmer, Jeremy B

    2012-10-01

    With the increasing sophistication and ubiquity of the Internet, behavioral research is on the cusp of a revolution that will do for population sampling what the computer did for stimulus control and measurement. It remains a common assumption, however, that data from self-selected Web samples must involve a trade-off between participant numbers and data quality. Concerns about data quality are heightened for performance-based cognitive and perceptual measures, particularly those that are timed or that involve complex stimuli. In experiments run with uncompensated, anonymous participants whose motivation for participation is unknown, reduced conscientiousness or lack of focus could produce results that would be difficult to interpret due to decreased overall performance, increased variability of performance, or increased measurement noise. Here, we addressed the question of data quality across a range of cognitive and perceptual tests. For three key performance metrics-mean performance, performance variance, and internal reliability-the results from self-selected Web samples did not differ systematically from those obtained from traditionally recruited and/or lab-tested samples. These findings demonstrate that collecting data from uncompensated, anonymous, unsupervised, self-selected participants need not reduce data quality, even for demanding cognitive and perceptual experiments.

  1. Development of a vocabulary of object shapes in a child with a very-early-acquired visual agnosia: a unique case.

    PubMed

    Funnell, Elaine; Wilding, John

    2011-02-01

    We report a longitudinal study of an exceptional child (S.R.) whose early-acquired visual agnosia, following encephalitis at 8 weeks of age, did not prevent her from learning to construct an increasing vocabulary of visual object forms (drawn from different categories), albeit slowly. S.R. had problems perceiving subtle differences in shape; she was unable to segment local letters within global displays; and she would bring complex scenes close to her eyes: a symptom suggestive of an attempt to reduce visual crowding. Investigations revealed a robust ability to use the gestalt grouping factors of proximity and collinearity to detect fragmented forms in noisy backgrounds, compared with a very weak ability to segment fragmented forms on the basis of contrasts of shape. When contrasts in spatial grouping and shape were pitted against each other, shape made little contribution, consistent with problems in perceiving complex scenes, but when shape contrast was varied, and spatial grouping was held constant, S.R. showed the same hierarchy of difficulty as the controls, although her responses were slowed. This is the first report of a child's visual-perceptual development following very early neurological impairments to the visual cortex. Her ability to learn to perceive visual shape following damage at a rudimentary stage of perceptual development contrasts starkly with the loss of such ability in childhood cases of acquired visual agnosia that follow damage to the established perceptual system. Clearly, there is a critical period during which neurological damage to the highly active, early developing visual-perceptual system does not prevent but only impairs further learning.

  2. Domain-specific perceptual causality in children depends on the spatio-temporal configuration, not motion onset

    PubMed Central

    Schlottmann, Anne; Cole, Katy; Watts, Rhianna; White, Marina

    2013-01-01

    Humans, even babies, perceive causality when one shape moves briefly and linearly after another. Motion timing is crucial in this and causal impressions disappear with short delays between motions. However, the role of temporal information is more complex: it is both a cue to causality and a factor that constrains processing. It affects ability to distinguish causality from non-causality, and social from mechanical causality. Here we study both issues with 3- to 7-year-olds and adults who saw two computer-animated squares and chose whether a picture of mechanical, social, or non-causality best fit each event. Prior work was consistent with the standard view that early in development, the distinction between the social and physical domains depends mainly on whether or not the agents make contact, and that this reflects concern with domain-specific motion onset, in particular, whether the motion is self-initiated or not. The present experiments challenge both parts of this position. In Experiments 1 and 2, we showed that not just spatial, but also animacy and temporal information affect how children distinguish between physical and social causality. In Experiments 3 and 4 we showed that children do not seem to use spatio-temporal information in perceptual causality to make inferences about self- or other-initiated motion onset. Overall, spatial contact may be developmentally primary in domain-specific perceptual causality in that it is processed easily and is dominant over competing cues, but it is not the only cue used early on and it is not used to infer motion onset. Instead, domain-specific causal impressions may be automatic reactions to specific perceptual configurations, with a complex role for temporal information. PMID:23874308

  3. A Trainable Hearing Aid Algorithm Reflecting Individual Preferences for Degree of Noise-Suppression, Input Sound Level, and Listening Situation.

    PubMed

    Yoon, Sung Hoon; Nam, Kyoung Won; Yook, Sunhyun; Cho, Baek Hwan; Jang, Dong Pyo; Hong, Sung Hwa; Kim, In Young

    2017-03-01

    In an effort to improve hearing aid users' satisfaction, recent studies on trainable hearing aids have attempted to incorporate one or two environmental factors into training. However, it would be more beneficial to train the device on the owner's personal preferences across a wider range of environmental acoustic conditions. Our study aimed to develop a trainable hearing aid algorithm that can reflect the user's individual preferences across a more extensive set of environmental acoustic conditions (ambient sound level, listening situation, and degree of noise suppression), and we evaluated the perceptual benefit of the proposed algorithm. Ten normal-hearing subjects participated in this study. Each subject trained the algorithm to their personal preference, and the trained data were used to record test sounds in three different settings, which were then used to evaluate the perceptual benefit of the proposed algorithm with the Comparison Mean Opinion Score test. Statistical analysis revealed that of the 10 subjects, four showed significant differences in amplification constant settings between the noise-only and speech-in-noise situations (P < 0.05), and one subject also showed a significant difference between the speech-only and speech-in-noise situations (P < 0.05). Additionally, every subject preferred different β settings for beamforming across the different input sound levels. These positive findings suggest that the proposed algorithm has the potential to improve hearing aid users' personal satisfaction under various ambient situations.

  4. Postural Control Disturbances Produced by Exposure to HMD and Dome VR Systems

    NASA Technical Reports Server (NTRS)

    Harm, D. L.; Taylor, L. C.

    2005-01-01

    Two critical and unresolved human factors issues in VR systems are: 1) potential "cybersickness", a form of motion sickness experienced in virtual worlds, and 2) maladaptive sensorimotor performance following exposure to VR systems. Interestingly, these aftereffects are often quite similar to adaptive sensorimotor responses observed in astronauts during and/or following space flight. Most astronauts and cosmonauts experience perceptual and sensorimotor disturbances during and following space flight. All astronauts exhibit decrements in postural control following space flight. It has been suggested that training in virtual reality (VR) may be an effective countermeasure for minimizing perceptual and/or sensorimotor disturbances. People adapt to consistent, sustained alterations of sensory input such as those produced by microgravity and by experimentally produced stimulus rearrangements (e.g., reversing prisms, magnifying lenses, flight simulators, and VR systems). Adaptation is revealed by aftereffects, including perceptual disturbances and sensorimotor control disturbances. The purpose of the current study was to compare disturbances in postural control produced by dome and head-mounted virtual environment displays. Individuals recovered from motion sickness and the detrimental effects of exposure to virtual reality on postural control within one hour. Sickness severity and initial decrements in postural equilibrium decreased over days, which suggests that subjects become dual-adapted over time. These findings provide some direction for developing training schedules for VR users that facilitate adaptation, and they address safety concerns about aftereffects.

  5. Cognitive architecture of perceptual organization: from neurons to gnosons.

    PubMed

    van der Helm, Peter A

    2012-02-01

    What, if anything, is cognitive architecture and how is it implemented in neural architecture? Focusing on perceptual organization, this question is addressed by way of a pluralist approach which, supported by metatheoretical considerations, combines complementary insights from representational, connectionist, and dynamic systems approaches to cognition. This pluralist approach starts from a representationally inspired model which implements the intertwined but functionally distinguishable subprocesses of feedforward feature encoding, horizontal feature binding, and recurrent feature selection. As sustained by a review of neuroscientific evidence, these are the subprocesses that are believed to take place in the visual hierarchy in the brain. Furthermore, the model employs a special form of processing, called transparallel processing, whose neural signature is proposed to be gamma-band synchronization in transient horizontal neural assemblies. In neuroscience, such assemblies are believed to mediate binding of similar features. Their formal counterparts in the model are special input-dependent distributed representations, called hyperstrings, which allow many similar features to be processed in a transparallel fashion, that is, simultaneously as if only one feature were concerned. This form of processing does justice to both the high combinatorial capacity and the high speed of the perceptual organization process. A naturally following proposal is that those temporarily synchronized neural assemblies are "gnosons", that is, constituents of flexible self-organizing cognitive architecture in between the relatively rigid level of neurons and the still elusive level of consciousness.

  6. Very low-frequency signals support perceptual organization of implant-simulated speech for adults and children

    PubMed Central

    Nittrouer, Susan; Tarr, Eric; Bolster, Virginia; Caldwell-Tarr, Amanda; Moberly, Aaron C.; Lowenstein, Joanna H.

    2014-01-01

    Objective: Using signals processed to simulate speech received through cochlear implants and low-frequency extended hearing aids, this study examined the proposal that low-frequency signals facilitate the perceptual organization of broader, spectrally degraded signals. Design: In two experiments, words and sentences were presented in diotic and dichotic configurations as four-channel noise-vocoded signals (VOC-only), and as those signals combined with the acoustic signal below 250 Hz (LOW-plus). Dependent measures were percent correct recognition scores, and the difference between scores for the two processing conditions given as proportions of recognition scores for VOC-only. The influence of linguistic context was also examined. Study Sample: Participants had normal hearing. In all, 40 adults, 40 7-year-olds, and 20 5-year-olds participated. Results: Participants of all ages showed benefits of adding the low-frequency signal. The effect was greater for sentences than words, but no effect of configuration was found. The influence of linguistic context was similar across age groups, and did not contribute to the low-frequency effect. Listeners who scored more poorly with VOC-only stimuli showed greater low-frequency effects. Conclusion: The benefit of adding a very low-frequency signal to a broader, spectrally degraded signal seems to derive from its facilitative influence on perceptual organization of the sensory input. PMID:24456179
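The four-channel noise vocoding used here as a cochlear-implant simulation follows a standard recipe: split the signal into frequency bands, extract each band's slow amplitude envelope, and reimpose those envelopes on bandlimited noise. The sketch below assumes illustrative band edges, filter orders, and an envelope cutoff; the study's exact processing parameters are not given in the abstract:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(x, fs, edges=(100, 562, 1768, 4294, 8000), env_cut=50.0):
    """Four-channel noise vocoder: filter x into bands, extract each band's
    envelope (rectify + low-pass), and modulate bandlimited noise with it."""
    rng = np.random.default_rng(0)
    out = np.zeros_like(x, dtype=float)
    env_sos = butter(2, env_cut / (fs / 2), btype="low", output="sos")
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo / (fs / 2), hi / (fs / 2)],
                          btype="band", output="sos")
        band = sosfiltfilt(band_sos, x)                  # analysis band
        env = sosfiltfilt(env_sos, np.abs(band))         # slow envelope
        env = np.clip(env, 0.0, None)
        noise = sosfiltfilt(band_sos, rng.standard_normal(len(x)))
        out += env * noise                               # carrier replacement
    return out
```

Summing the modulated noise bands yields a signal that preserves temporal envelopes but discards spectral fine structure, which is what lets any benefit of the added low-frequency signal be attributed to perceptual organization rather than to extra spectral detail.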

  7. How attention influences perceptual decision making: Single-trial EEG correlates of drift-diffusion model parameters

    PubMed Central

    Nunez, Michael D.; Vandekerckhove, Joachim; Srinivasan, Ramesh

    2016-01-01

    Perceptual decision making can be accounted for by drift-diffusion models, a class of decision-making models that assume a stochastic accumulation of evidence on each trial. Fitting response time and accuracy to a drift-diffusion model produces evidence accumulation rate and non-decision time parameter estimates that reflect cognitive processes. Our goal is to elucidate the effect of attention on visual decision making. In this study, we show that measures of attention obtained from simultaneous EEG recordings can explain per-trial evidence accumulation rates and perceptual preprocessing times during a visual decision making task. Models assuming linear relationships between diffusion model parameters and EEG measures as external inputs were fit in a single step in a hierarchical Bayesian framework. The EEG measures were features of the evoked potential (EP) to the onset of a masking noise and the onset of a task-relevant signal stimulus. Single-trial evoked EEG responses, P200s to the onsets of visual noise and N200s to the onsets of visual signal, explain single-trial evidence accumulation and preprocessing times. Within-trial evidence accumulation variance was not found to be influenced by attention to the signal or noise. Single-trial measures of attention lead to better out-of-sample predictions of accuracy and correct reaction time distributions for individual subjects. PMID:28435173

  8. How attention influences perceptual decision making: Single-trial EEG correlates of drift-diffusion model parameters.

    PubMed

    Nunez, Michael D; Vandekerckhove, Joachim; Srinivasan, Ramesh

    2017-02-01

    Perceptual decision making can be accounted for by drift-diffusion models, a class of decision-making models that assume a stochastic accumulation of evidence on each trial. Fitting response time and accuracy to a drift-diffusion model produces evidence accumulation rate and non-decision time parameter estimates that reflect cognitive processes. Our goal is to elucidate the effect of attention on visual decision making. In this study, we show that measures of attention obtained from simultaneous EEG recordings can explain per-trial evidence accumulation rates and perceptual preprocessing times during a visual decision making task. Models assuming linear relationships between diffusion model parameters and EEG measures as external inputs were fit in a single step in a hierarchical Bayesian framework. The EEG measures were features of the evoked potential (EP) to the onset of a masking noise and the onset of a task-relevant signal stimulus. Single-trial evoked EEG responses, P200s to the onsets of visual noise and N200s to the onsets of visual signal, explain single-trial evidence accumulation and preprocessing times. Within-trial evidence accumulation variance was not found to be influenced by attention to the signal or noise. Single-trial measures of attention lead to better out-of-sample predictions of accuracy and correct reaction time distributions for individual subjects.
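The modeling idea, a linear link between single-trial EEG measures and drift-diffusion parameters, can be sketched with a forward simulation. The coefficients, boundary separation, and non-decision time below are arbitrary illustrative values; the study fits the reverse, hierarchical-Bayesian regression from observed data:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ddm(drift, a=1.0, ndt=0.3, dt=0.001, s=1.0, max_t=3.0):
    """One drift-diffusion trial: evidence starts at a/2 and diffuses until it
    hits 0 (error) or a (correct). Returns (response time, correct)."""
    x, t = a / 2, 0.0
    while 0 < x < a and t < max_t:
        x += drift * dt + s * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ndt + t, x >= a

# Hypothetical linear link: per-trial drift depends on an EEG attention index.
b0, b1 = 0.5, 1.5
eeg = rng.normal(0.0, 1.0, size=200)     # standardized single-trial EEG measure
trials = [simulate_ddm(b0 + b1 * z) for z in eeg]
rts = np.array([rt for rt, _ in trials])
correct = np.array([c for _, c in trials])

# Split trials by the attention index to compare accuracy.
hi, lo = eeg > 0, eeg <= 0
acc_hi, acc_lo = correct[hi].mean(), correct[lo].mean()
```

Sorting trials by the simulated attention index reproduces the qualitative prediction: trials with larger drift rates end at the correct boundary more often.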

  9. Patterns of physiological activity accompanying performance on a perceptual-motor task.

    DOT National Transportation Integrated Search

    1969-04-01

    Air traffic controllers are required to spend considerable periods of time observing radar displays. Yet, information regarding physiological measures which best reflect the attentional process in complex vigilance tasks is generally lacking. As an i...

  10. Human performance measuring device

    NASA Technical Reports Server (NTRS)

    Michael, J.; Scow, J.

    1970-01-01

    Complex coordinator, consisting of operator control console, recorder, subject display panel, and limb controls, measures human performance by testing perceptual and motor skills. Device measures psychophysiological functions in drug and environmental studies, and is applicable to early detection of psychophysiological body changes.

  11. Stable individual characteristics in the perception of multiple embedded patterns in multistable auditory stimuli

    PubMed Central

    Denham, Susan; Bőhm, Tamás M.; Bendixen, Alexandra; Szalárdy, Orsolya; Kocsis, Zsuzsanna; Mill, Robert; Winkler, István

    2014-01-01

    The ability of the auditory system to parse complex scenes into component objects in order to extract information from the environment is very robust, yet the processing principles underlying this ability are still not well understood. This study was designed to investigate the proposal that the auditory system constructs multiple interpretations of the acoustic scene in parallel, based on the finding that when listening to a long repetitive sequence listeners report switching between different perceptual organizations. Using the “ABA-” auditory streaming paradigm we trained listeners until they could reliably recognize all possible embedded patterns of length four which could in principle be extracted from the sequence, and in a series of test sessions investigated their spontaneous reports of those patterns. With the training allowing them to identify and mark a wider variety of possible patterns, participants spontaneously reported many more patterns than the ones traditionally assumed (Integrated vs. Segregated). Despite receiving consistent training and despite the apparent randomness of perceptual switching, we found individual switching patterns were idiosyncratic; i.e., the perceptual switching patterns of each participant were more similar to their own switching patterns in different sessions than to those of other participants. These individual differences were found to be preserved even between test sessions held a year after the initial experiment. Our results support the idea that the auditory system attempts to extract an exhaustive set of embedded patterns which can be used to generate expectations of future events and which by competing for dominance give rise to (changing) perceptual awareness, with the characteristics of pattern discovery and perceptual competition having a strong idiosyncratic component. Perceptual multistability thus provides a means for characterizing both general mechanisms and individual differences in human perception. PMID:24616656

  12. Stable individual characteristics in the perception of multiple embedded patterns in multistable auditory stimuli.

    PubMed

    Denham, Susan; Bőhm, Tamás M; Bendixen, Alexandra; Szalárdy, Orsolya; Kocsis, Zsuzsanna; Mill, Robert; Winkler, István

    2014-01-01

    The ability of the auditory system to parse complex scenes into component objects in order to extract information from the environment is very robust, yet the processing principles underlying this ability are still not well understood. This study was designed to investigate the proposal that the auditory system constructs multiple interpretations of the acoustic scene in parallel, based on the finding that when listening to a long repetitive sequence listeners report switching between different perceptual organizations. Using the "ABA-" auditory streaming paradigm we trained listeners until they could reliably recognize all possible embedded patterns of length four which could in principle be extracted from the sequence, and in a series of test sessions investigated their spontaneous reports of those patterns. With the training allowing them to identify and mark a wider variety of possible patterns, participants spontaneously reported many more patterns than the ones traditionally assumed (Integrated vs. Segregated). Despite receiving consistent training and despite the apparent randomness of perceptual switching, we found individual switching patterns were idiosyncratic; i.e., the perceptual switching patterns of each participant were more similar to their own switching patterns in different sessions than to those of other participants. These individual differences were found to be preserved even between test sessions held a year after the initial experiment. Our results support the idea that the auditory system attempts to extract an exhaustive set of embedded patterns which can be used to generate expectations of future events and which by competing for dominance give rise to (changing) perceptual awareness, with the characteristics of pattern discovery and perceptual competition having a strong idiosyncratic component. Perceptual multistability thus provides a means for characterizing both general mechanisms and individual differences in human perception.
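As a toy illustration of what "embedded patterns of length four" means for the repeating ABA- cycle, a sliding window over the tone sequence recovers the four cyclic rotations. This is a simplification: the study's full pattern set is richer, since it also depends on how tones are perceptually grouped into streams.

```python
# "A" and "B" are the two tones; "-" is the silent gap in each ABA- cycle.
cycle = "ABA-"
seq = cycle * 3                                    # a stretch of the sequence
patterns = {seq[i:i + 4] for i in range(len(seq) - 3)}
# patterns now holds the four cyclic rotations of "ABA-"
```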

  13. The effect of perceptual reasoning abilities on confrontation naming performance: An examination of three naming tests.

    PubMed

    Soble, Jason R; Marceaux, Janice C; Galindo, Juliette; Sordahl, Jeffrey A; Highsmith, Jonathan M; O'Rourke, Justin J F; González, David Andrés; Critchfield, Edan A; McCoy, Karin J M

    2016-01-01

    Confrontation naming tests are a common neuropsychological method of assessing language and a critical diagnostic tool in identifying certain neurodegenerative diseases; however, there is limited literature examining the visual-perceptual demands of these tasks. This study investigated the effect of perceptual reasoning abilities on three confrontation naming tests, the Boston Naming Test (BNT), Neuropsychological Assessment Battery (NAB) Naming Test, and Visual Naming Test (VNT), to elucidate the diverse cognitive functions underlying these tasks, assist with test selection procedures, and increase diagnostic accuracy. A mixed clinical sample of 121 veterans was administered the BNT, NAB, VNT, and Wechsler Adult Intelligence Scale-4th Edition (WAIS-IV) Verbal Comprehension Index (VCI) and Perceptual Reasoning Index (PRI) as part of a comprehensive neuropsychological evaluation. Multiple regression indicated that PRI accounted for 23%, 13%, and 15% of the variance in BNT, VNT, and NAB scores, respectively, but dropped out as a significant predictor once VCI was added. Follow-up bootstrap mediation analyses revealed that PRI had a significant indirect effect on naming performance after controlling for education, primary language, and severity of cognitive impairment, as well as the mediating effect of general verbal abilities, for the BNT (B = 0.13; 95% confidence interval, CI [.07, .20]), VNT (B = 0.01; 95% CI [.002, .03]), and NAB (B = 0.03; 95% CI [.01, .06]). Findings revealed a complex relationship between perceptual reasoning abilities and confrontation naming that is mediated by general verbal abilities. However, when verbal abilities were statistically controlled, perceptual reasoning abilities were found to have a significant indirect effect on performance across all three confrontation naming measures, with the largest effect noted for the BNT relative to the VNT and NAB Naming Test.
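The bootstrap mediation analysis described above (an indirect effect estimated as the product of an a-path and a b-path, with a percentile confidence interval) can be sketched generically. The simulated data, variable names, and simple two-regression estimator below are illustrative assumptions, not the study's actual pipeline or data:

```python
import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=2000, seed=0):
    """Percentile-bootstrap estimate of the indirect effect a*b in a simple
    mediation model: x -> m (a-path), then m -> y controlling for x (b-path)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cases with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]         # a-path: regress mediator on predictor
        X = np.column_stack([np.ones(n), mb, xb])
        b = np.linalg.lstsq(X, yb, rcond=None)[0][1]  # b-path: mediator slope, controlling x
        estimates.append(a * b)
    estimates = np.array(estimates)
    return estimates.mean(), np.percentile(estimates, [2.5, 97.5])

# Simulated illustration: PRI -> VCI -> naming score (true indirect effect = 0.6 * 0.5 = 0.3)
rng = np.random.default_rng(1)
pri = rng.normal(100, 15, 200)
vci = 0.6 * pri + rng.normal(0, 10, 200)
naming = 0.5 * vci + 0.1 * pri + rng.normal(0, 5, 200)
point, (lo, hi) = bootstrap_indirect_effect(pri, vci, naming)
print(round(point, 2), lo > 0)  # a CI excluding zero indicates a significant indirect effect
```

As in the study, the key quantity is the confidence interval around the indirect effect: if it excludes zero, the mediated path is treated as significant.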

  14. Neural correlates of context-dependent feature conjunction learning in visual search tasks.

    PubMed

    Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U

    2016-06-01

    Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016. © 2016 Wiley Periodicals, Inc.

  15. Vibrotactile stimulation of fast-adapting cutaneous afferents from the foot modulates proprioception at the ankle joint

    PubMed Central

    Bent, Leah R.

    2016-01-01

    It has previously been shown that cutaneous sensory input from across a broad region of skin can influence proprioception at joints of the hand. The present experiment tested whether cutaneous input from different skin regions across the foot can influence proprioception at the ankle joint. The ability to passively match ankle joint position (17° and 7° plantar flexion and 7° dorsiflexion) was measured while cutaneous vibration was applied to the sole (heel, distal metatarsals) or dorsum of the target foot. Vibration was applied at two different frequencies to preferentially activate Meissner's corpuscles (45 Hz, 80 μm) or Pacinian corpuscles (255 Hz, 10 μm) at amplitudes ∼3 dB above mean perceptual thresholds. Results indicated that cutaneous input from all skin regions across the foot could influence joint-matching error and variability, although the strongest effects were observed with heel vibration. Furthermore, the influence of cutaneous input from each region was modulated by joint angle; in general, vibration had a limited effect on matching in dorsiflexion compared with matching in plantar flexion. Unlike previous results in the upper limb, we found no evidence that Pacinian input exerted a stronger influence on proprioception compared with Meissner input. Findings from this study suggest that fast-adapting cutaneous input from the foot modulates proprioception at the ankle joint in a passive joint-matching task. These results indicate that there is interplay between tactile and proprioceptive signals originating from the foot and ankle. PMID:26823342

  16. Data-Driven Haptic Modeling and Rendering of Viscoelastic and Frictional Responses of Deformable Objects.

    PubMed

    Yim, Sunghoon; Jeon, Seokhee; Choi, Seungmoon

    2016-01-01

    In this paper, we present an extended data-driven haptic rendering method capable of reproducing force responses during pushing and sliding interaction on a large surface area. The main part of the approach is a novel input variable set for the training of an interpolation model, which incorporates the position of a proxy - an imaginary contact point on the undeformed surface. This allows us to estimate friction in both sliding and sticking states in a unified framework. Estimating the proxy position is done in real-time based on simulation using a sliding yield surface - a surface defining a border between the sliding and sticking regions in the external force space. During modeling, the sliding yield surface is first identified via an automated palpation procedure. Then, through manual palpation on a target surface, input data and resultant force data are acquired. The data are used to build a radial basis interpolation model. During rendering, this input-output mapping interpolation model is used to estimate force responses in real-time in accordance with the interaction input. Physical performance evaluation demonstrates that our approach achieves reasonably high estimation accuracy. A user study also shows plausible perceptual realism under diverse and extensive exploration.
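The core modeling step described above, an interpolation model mapping an interaction input (e.g., proxy position plus deformation) to a force response, can be sketched with a minimal radial basis interpolator. The Gaussian kernel, grid of training inputs, and toy force function below are illustrative assumptions, not the authors' implementation or measured data:

```python
import numpy as np

class GaussianRBF:
    """Minimal Gaussian radial-basis interpolator: solve for one weight per
    training sample so the model reproduces the training outputs exactly."""
    def __init__(self, eps=10.0):
        self.eps = eps  # shape parameter: larger -> more localized basis functions

    def _kernel(self, A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
        return np.exp(-self.eps * d2)

    def fit(self, X, y):
        self.X = X
        self.w = np.linalg.solve(self._kernel(X, X), y)  # exact-interpolation weights
        return self

    def predict(self, Xq):
        return self._kernel(Xq, self.X) @ self.w

# Hypothetical training grid: (proxy x, proxy y, normal displacement) -> force
gx, gy, gz = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4), np.linspace(0, 1, 3))
X = np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()])
force = 2.0 * X[:, 2] + 0.5 * np.sin(3 * X[:, 0])   # toy force response, not recorded data
model = GaussianRBF(eps=10.0).fit(X, force)
print(model.predict(np.array([[0.5, 0.5, 0.5]])))    # interpolated force at a novel input
```

In a data-driven renderer this predict step would run once per haptic frame, fed by the estimated proxy state rather than a fixed query point.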

  17. High-power graphic computers for visual simulation: a real-time rendering revolution

    NASA Technical Reports Server (NTRS)

    Kaiser, M. K.

    1996-01-01

    Advances in high-end graphics computers in the past decade have made it possible to render visual scenes of incredible complexity and realism in real time. These new capabilities make it possible to manipulate and investigate the interactions of observers with their visual world in ways once only dreamed of. This paper reviews how these developments have affected two preexisting domains of behavioral research (flight simulation and motion perception) and have created a new domain (virtual environment research) which provides tools and challenges for the perceptual psychologist. Finally, the current limitations of these technologies are considered, with an eye toward how perceptual psychologists might shape future developments.

  18. Perceptual and Acoustic Analyses of Good Voice Quality in Male Radio Performers.

    PubMed

    Warhurst, Samantha; Madill, Catherine; McCabe, Patricia; Ternström, Sten; Yiu, Edwin; Heard, Robert

    2017-03-01

    Good voice quality is an asset to professional voice users, including radio performers. We examined whether (1) voices could be reliably categorized as good for the radio and (2) these categories could be predicted using acoustic measures. Male radio performers (n = 24) and age-matched male controls performed "The Rainbow Passage" as if presenting on the radio. Voice samples were rated using a three-stage paired-comparison paradigm by 51 naive listeners and perceptual categories were identified (Study 1), and then analyzed for fundamental frequency, long-term average spectrum, cepstral peak prominence, and pause or spoken-phrase duration (Study 2). Study 1: Good inter-judge reliability was found for perceptual judgments of the best 15 voices (good for radio category, 14/15 = radio performers), but agreement on the remaining 33 voices (unranked category) was poor. Study 2: Discriminant function analyses showed that the standard deviation of sounded-portion duration, equivalent sound level, and smoothed cepstral peak prominence predicted category membership with moderate accuracy (R² = 0.328). Radio performers are heterogeneous for voice quality; good voice quality was judged reliably in only 14 out of 24 radio performers. Current acoustic analyses detected some of the relevant signal properties that were salient in these judgments. More refined perceptual analysis and the use of other perceptual methods might provide more information on the complex nature of judging good voices. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
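A discriminant function analysis of the kind reported, predicting category membership ("good for radio" vs. unranked) from a handful of acoustic measures, can be sketched with a two-class Fisher discriminant. The feature values, class means, and sample sizes below are hypothetical stand-ins, not the study's data:

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher discriminant: weight vector w = Sw^-1 (mu1 - mu0),
    with the decision threshold at the midpoint of the projected class means."""
    mu0, mu1 = X0.mean(0), X1.mean(0)
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)  # pooled within-class scatter
    w = np.linalg.solve(Sw, mu1 - mu0)
    c = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
    return w, c

# Hypothetical per-speaker features: [SD of sounded-portion duration, equivalent level (dB), smoothed CPP]
rng = np.random.default_rng(2)
unranked = rng.normal([0.8, 70.0, 12.0], [0.2, 3.0, 2.0], (33, 3))
good = rng.normal([0.5, 74.0, 16.0], [0.2, 3.0, 2.0], (15, 3))
w, c = fisher_lda(unranked, good)
balanced_acc = ((good @ w > c).mean() + (unranked @ w < c).mean()) / 2
print(round(balanced_acc, 2))  # balanced classification accuracy on the training data
```

The single discriminant score per voice (the projection onto w) plays the same role as the discriminant function in the study: a weighted combination of acoustic measures used to predict the perceptual category.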

  19. Socio-cultural Input Facilitates Children’s Developing Understanding of Extraordinary Minds

    PubMed Central

    Lane, Jonathan D.; Wellman, Henry M.; Evans, E. Margaret

    2012-01-01

    Three- to 5-year-old (N=61) religiously-schooled preschoolers received theory-of-mind tasks about the mental states of ordinary humans and agents with exceptional perceptual or mental capacities. Consistent with an anthropomorphism hypothesis, children beginning to appreciate limitations of human minds (e.g., ignorance) attributed those limits to God. Only 5-year-olds differentiated between humans’ fallible minds and God’s less fallible mind. Unlike secularly-schooled children, religiously-schooled 4-year-olds did appreciate another agent’s less fallible mental abilities when instructed and reminded about those abilities. Among children who understood ordinary humans’ mental fallibilities, knowledge of God predicted attributions of correct epistemic states to extraordinary agents. Results suggest that, at a certain point in theory-of-mind development, socio-cultural input can facilitate an appreciation for extraordinary minds. PMID:22372590

  20. Virtual reality: past, present and future.

    PubMed

    Gobbetti, E; Scateni, R

    1998-01-01

    This report provides a short survey of the field of virtual reality, highlighting application domains, technological requirements, and currently available solutions. The report is organized as follows: section 1 presents the background and motivation of virtual environment research and identifies typical application domains; section 2 discusses the characteristics a virtual reality system must have in order to exploit the perceptual and spatial skills of users; section 3 surveys current input/output devices for virtual reality; section 4 surveys current software approaches to support the creation of virtual reality systems; and section 5 summarizes the report.

  1. Selective entrainment of brain oscillations drives auditory perceptual organization.

    PubMed

    Costa-Faidella, Jordi; Sussman, Elyse S; Escera, Carles

    2017-10-01

    Perceptual sound organization supports our ability to make sense of the complex acoustic environment, to understand speech and to enjoy music. However, the neuronal mechanisms underlying the subjective experience of perceiving univocal auditory patterns that can be listened to, despite hearing all sounds in a scene, are poorly understood. Here, we investigated the manner in which competing sound organizations are simultaneously represented by specific brain activity patterns and the way attention and task demands prime the internal model generating the current percept. Using a selective attention task on ambiguous auditory stimulation coupled with EEG recordings, we found that the phase of low-frequency oscillatory activity dynamically tracks multiple sound organizations concurrently. However, whereas the representation of ignored sound patterns is circumscribed to auditory regions, large-scale oscillatory entrainment in auditory, sensory-motor and executive-control network areas reflects the active perceptual organization, thereby giving rise to the subjective experience of a unitary percept. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. A new perspective on the perceptual selectivity of attention under load.

    PubMed

    Giesbrecht, Barry; Sy, Jocelyn; Bundesen, Claus; Kyllingsbaek, Søren

    2014-05-01

    The human attention system helps us cope with a complex environment by supporting the selective processing of information relevant to our current goals. Understanding the perceptual, cognitive, and neural mechanisms that mediate selective attention is a core issue in cognitive neuroscience. One prominent model of selective attention, known as load theory, offers an account of how task demands determine when information is selected and an account of the efficiency of the selection process. However, load theory has several critical weaknesses that suggest that it is time for a new perspective. Here we review the strengths and weaknesses of load theory and offer an alternative biologically plausible computational account that is based on the neural theory of visual attention. We argue that this new perspective provides a detailed computational account of how bottom-up and top-down information is integrated to provide efficient attentional selection and allocation of perceptual processing resources. © 2014 New York Academy of Sciences.

  3. Learning what to expect (in visual perception)

    PubMed Central

    Seriès, Peggy; Seitz, Aaron R.

    2013-01-01

    Expectations are known to greatly affect our experience of the world. A growing theory in computational neuroscience is that perception can be successfully described using Bayesian inference models and that the brain is “Bayes-optimal” under some constraints. In this context, expectations are particularly interesting, because they can be viewed as prior beliefs in the statistical inference process. A number of questions remain unresolved, however, for example: How fast do priors change over time? Are there limits in the complexity of the priors that can be learned? How do an individual’s priors compare to the true scene statistics? Can we unlearn priors that are thought to correspond to natural scene statistics? Where and what are the neural substrates of priors? Focusing on the perception of visual motion, we here review recent studies from our laboratories and others addressing these issues. We discuss how these data on motion perception fit within the broader literature on perceptual Bayesian priors, perceptual expectations, and statistical and perceptual learning and review the possible neural basis of priors. PMID:24187536
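The idea that expectations act as Bayesian priors has a compact formal core: for a Gaussian prior and Gaussian likelihood, the posterior mean is a precision-weighted average of the two, which is how a prior for slow speeds biases perceived motion toward slower estimates. A minimal sketch with hypothetical numbers:

```python
def gaussian_posterior(mu_prior, var_prior, mu_like, var_like):
    """Conjugate Gaussian update: the posterior mean is a precision-weighted
    average of the prior mean and the measurement."""
    w = var_like / (var_prior + var_like)          # weight given to the prior
    mu_post = w * mu_prior + (1 - w) * mu_like
    var_post = 1.0 / (1.0 / var_prior + 1.0 / var_like)
    return mu_post, var_post

# Hypothetical numbers: a prior centered on slow speeds (0 deg/s) pulls a noisy
# measurement of 8 deg/s halfway back when prior and likelihood are equally reliable.
mu, var = gaussian_posterior(0.0, 4.0, 8.0, 4.0)
print(mu, var)  # → 4.0 2.0
```

Noisier stimuli (larger likelihood variance) shift more weight onto the prior, matching the reviewed finding that expectation effects are strongest for ambiguous or low-contrast motion.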

  4. Modality Switching in a Property Verification Task: An ERP Study of What Happens When Candles Flicker after High Heels Click

    PubMed Central

    Collins, Jennifer; Pecher, Diane; Zeelenberg, René; Coulson, Seana

    2011-01-01

    The perceptual modalities associated with property words, such as flicker or click, have previously been demonstrated to affect subsequent property verification judgments (Pecher et al., 2003). Known as the conceptual modality switch effect, this finding supports the claim that brain systems for perception and action help subserve the representation of concepts. The present study addressed the cognitive and neural substrate of this effect by recording event-related potentials (ERPs) as participants performed a property verification task with visual or auditory properties in key trials. We found that for visual property verifications, modality switching was associated with an increased amplitude N400. For auditory verifications, switching led to a larger late positive complex. Observed ERP effects of modality switching suggest property words access perceptual brain systems. Moreover, the timing and pattern of the effects suggest perceptual systems impact the decision-making stage in the verification of auditory properties, and the semantic stage in the verification of visual properties. PMID:21713128

  5. Modality Switching in a Property Verification Task: An ERP Study of What Happens When Candles Flicker after High Heels Click.

    PubMed

    Collins, Jennifer; Pecher, Diane; Zeelenberg, René; Coulson, Seana

    2011-01-01

    The perceptual modalities associated with property words, such as flicker or click, have previously been demonstrated to affect subsequent property verification judgments (Pecher et al., 2003). Known as the conceptual modality switch effect, this finding supports the claim that brain systems for perception and action help subserve the representation of concepts. The present study addressed the cognitive and neural substrate of this effect by recording event-related potentials (ERPs) as participants performed a property verification task with visual or auditory properties in key trials. We found that for visual property verifications, modality switching was associated with an increased amplitude N400. For auditory verifications, switching led to a larger late positive complex. Observed ERP effects of modality switching suggest property words access perceptual brain systems. Moreover, the timing and pattern of the effects suggest perceptual systems impact the decision-making stage in the verification of auditory properties, and the semantic stage in the verification of visual properties.

  6. The Folded Paper Size Illusion: Evidence of Inability to Perceptually Integrate More Than One Geometrical Dimension

    PubMed Central

    2016-01-01

    The folded paper-size illusion is as easy to demonstrate as it is powerful in generating insights into perceptual processing: First take two A4 sheets of paper, one original sized, another halved by folding, then compare them in terms of area size by centering the halved sheet on the center of the original one! We perceive the larger sheet as far less than double (i.e., 100%) the size of the small one, typically only being about two thirds larger—this illusion is preserved by rotating the inner sheet and even by aligning it to one or two sides, but is dissolved by aligning both sheets to three sides, here documented by 88 participants’ data. A potential explanation might be the general incapability of accurately comparing more than one geometrical dimension at once—in everyday life, we solve this perceptual-cognitive bottleneck by reducing the complexity of such a task via aligning parts with same lengths. PMID:27698977

  7. The Processing of Biologically Plausible and Implausible forms in American Sign Language: Evidence for Perceptual Tuning.

    PubMed

    Almeida, Diogo; Poeppel, David; Corina, David

    The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language (ASL) signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate. These neural data are accompanied with a significantly increased perceptual sensitivity to the anatomical incongruity. The differential effects in the early visual evoked potentials arguably reflect an expectation-driven assessment of somatic representational integrity, suggesting that language experience and/or auditory deprivation may shape the neuronal mechanisms underlying the analysis of complex human form. The data demonstrate that the perceptual tuning that underlies the discrimination of language and non-language information is not limited to spoken languages but extends to languages expressed in the visual modality.

  8. Perceptually-oriented hypnosis: removing a socially learned pathology and developing adequacy: the case of invisible girl.

    PubMed

    Woodard, Fredrick James

    2014-10-01

    This is the first case review to explicate perceptual hypnotic principles, such as differentiation, the characteristics of an adequate personality, and the need for adequacy, as utilized in clinical hypnosis in a complex case that altered the distorted perceptions and personal meanings of an eleven-year-old girl who believed that she had Bipolar Disorder and that her body and mind were damaged. This qualitative case study examines aspects of hypnosis during therapy from a perceptual point of view to illustrate frustrations in difficult cases and to identify some of the causes and origins of alleged clinical pathology in adverse environments. Some moments of effective self-healing, achieved by supporting internally controlled changes in perception during hypnotic experiencing, are highlighted, rather than an external focus on observed thoughts and behavior. Factors relevant to social psychological research, such as family dynamics, poverty, and interactions with social service agencies and institutions that create learned pathology, are pointed out for future research.

  9. Perceptual organization of speech signals by children with and without dyslexia

    PubMed Central

    Nittrouer, Susan; Lowenstein, Joanna H.

    2013-01-01

    Developmental dyslexia is a condition in which children encounter difficulty learning to read in spite of adequate instruction. Although considerable effort has been expended trying to identify the source of the problem, no single solution has been agreed upon. The current study explored a new hypothesis, that developmental dyslexia may be due to faulty perceptual organization of linguistically relevant sensory input. To test that idea, sentence-length speech signals were processed to create either sine-wave or noise-vocoded analogs. Seventy children between 8 and 11 years of age, with and without dyslexia participated. Children with dyslexia were selected to have phonological awareness deficits, although those without such deficits were retained in the study. The processed sentences were presented for recognition, and measures of reading, phonological awareness, and expressive vocabulary were collected. Results showed that children with dyslexia, regardless of phonological subtype, had poorer recognition scores than children without dyslexia for both kinds of degraded sentences. Older children with dyslexia recognized the sine-wave sentences better than younger children with dyslexia, but no such effect of age was found for the vocoded materials. Recognition scores were used as predictor variables in regression analyses with reading, phonological awareness, and vocabulary measures used as dependent variables. Scores for both sorts of sentence materials were strong predictors of performance on all three dependent measures when all children were included, but only performance for the sine-wave materials explained significant proportions of variance when only children with dyslexia were included. Finally, matching young, typical readers with older children with dyslexia on reading abilities did not mitigate the group difference in recognition of vocoded sentences. 
Conclusions were that children with dyslexia have difficulty organizing linguistically relevant sensory input, but learn to do so for the structure preserved by sine-wave signals before they do so for other sorts of signal structure. These perceptual organization deficits could account for difficulties acquiring refined linguistic representations, including those of a phonological nature, although ramifications are different across affected children. PMID:23702597

  10. Perceptual organization of speech signals by children with and without dyslexia.

    PubMed

    Nittrouer, Susan; Lowenstein, Joanna H

    2013-08-01

    Developmental dyslexia is a condition in which children encounter difficulty learning to read in spite of adequate instruction. Although considerable effort has been expended trying to identify the source of the problem, no single solution has been agreed upon. The current study explored a new hypothesis, that developmental dyslexia may be due to faulty perceptual organization of linguistically relevant sensory input. To test that idea, sentence-length speech signals were processed to create either sine-wave or noise-vocoded analogs. Seventy children between 8 and 11 years of age, with and without dyslexia participated. Children with dyslexia were selected to have phonological awareness deficits, although those without such deficits were retained in the study. The processed sentences were presented for recognition, and measures of reading, phonological awareness, and expressive vocabulary were collected. Results showed that children with dyslexia, regardless of phonological subtype, had poorer recognition scores than children without dyslexia for both kinds of degraded sentences. Older children with dyslexia recognized the sine-wave sentences better than younger children with dyslexia, but no such effect of age was found for the vocoded materials. Recognition scores were used as predictor variables in regression analyses with reading, phonological awareness, and vocabulary measures used as dependent variables. Scores for both sorts of sentence materials were strong predictors of performance on all three dependent measures when all children were included, but only performance for the sine-wave materials explained significant proportions of variance when only children with dyslexia were included. Finally, matching young, typical readers with older children with dyslexia on reading abilities did not mitigate the group difference in recognition of vocoded sentences. 
Conclusions were that children with dyslexia have difficulty organizing linguistically relevant sensory input, but learn to do so for the structure preserved by sine-wave signals before they do so for other sorts of signal structure. These perceptual organization deficits could account for difficulties acquiring refined linguistic representations, including those of a phonological nature, although ramifications are different across affected children. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Cholinergic, But Not Dopaminergic or Noradrenergic, Enhancement Sharpens Visual Spatial Perception in Humans

    PubMed Central

    Wallace, Deanna L.

    2017-01-01

    The neuromodulator acetylcholine modulates spatial integration in visual cortex by altering the balance of inputs that generate neuronal receptive fields. These cholinergic effects may provide a neurobiological mechanism underlying the modulation of visual representations by visual spatial attention. However, the consequences of cholinergic enhancement on visuospatial perception in humans are unknown. We conducted two experiments to test whether enhancing cholinergic signaling selectively alters perceptual measures of visuospatial interactions in human subjects. In Experiment 1, a double-blind placebo-controlled pharmacology study, we measured how flanking distractors influenced detection of a small contrast decrement of a peripheral target, as a function of target-flanker distance. We found that cholinergic enhancement with the cholinesterase inhibitor donepezil improved target detection, and modeling suggested that this was mainly due to a narrowing of the extent of facilitatory perceptual spatial interactions. In Experiment 2, we tested whether these effects were selective to the cholinergic system or would also be observed following enhancements of related neuromodulators dopamine or norepinephrine. Unlike cholinergic enhancement, dopamine (bromocriptine) and norepinephrine (guanfacine) manipulations did not improve performance or systematically alter the spatial profile of perceptual interactions between targets and distractors. These findings reveal mechanisms by which cholinergic signaling influences visual spatial interactions in perception and improves processing of a visual target among distractors, effects that are notably similar to those of spatial selective attention.

    SIGNIFICANCE STATEMENT: Acetylcholine influences how visual cortical neurons integrate signals across space, perhaps providing a neurobiological mechanism for the effects of visual selective attention.
However, the influence of cholinergic enhancement on visuospatial perception remains unknown. Here we demonstrate that cholinergic enhancement improves detection of a target flanked by distractors, consistent with sharpened visuospatial perceptual representations. Furthermore, whereas most pharmacological studies focus on a single neurotransmitter, many neuromodulators can have related effects on cognition and perception. Thus, we also demonstrate that enhancing noradrenergic and dopaminergic systems does not systematically improve visuospatial perception or alter its tuning. Our results link visuospatial tuning effects of acetylcholine at the neuronal and perceptual levels and provide insights into the connection between cholinergic signaling and visual attention. PMID:28336568

  12. Perceptual Plasticity for Auditory Object Recognition

    PubMed Central

    Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.

    2017-01-01

    In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. 
To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples of perceptual categories that are thought to be highly stable. This framework suggests that the process of auditory recognition cannot be divorced from the short-term context in which an auditory object is presented. Implications for auditory category acquisition and extant models of auditory perception, both cognitive and neural, are discussed. PMID:28588524

  13. Vernier perceptual learning transfers to completely untrained retinal locations after double training: A “piggybacking” effect

    PubMed Central

    Wang, Rui; Zhang, Jun-Yun; Klein, Stanley A.; Levi, Dennis M.; Yu, Cong

    2014-01-01

    Perceptual learning, a process in which training improves visual discrimination, is often specific to the trained retinal location, and this location specificity is frequently regarded as an indication of neural plasticity in the retinotopic visual cortex. However, our previous studies have shown that “double training” enables location-specific perceptual learning, such as Vernier learning, to completely transfer to a new location where an irrelevant task is practiced. Here we show that Vernier learning can be actuated by less location-specific orientation or motion-direction learning to transfer to completely untrained retinal locations. This “piggybacking” effect occurs even if both tasks are trained at the same retinal location. However, piggybacking does not occur when the Vernier task is paired with a more location-specific contrast-discrimination task. This previously unknown complexity challenges the current understanding of perceptual learning and its specificity/transfer. Orientation and motion-direction learning, but not contrast and Vernier learning, appears to activate a global process that allows learning transfer to untrained locations. Moreover, when paired with orientation or motion-direction learning, Vernier learning may be “piggybacked” by the activated global process to transfer to other untrained retinal locations. How this task-specific global activation process is achieved is as yet unknown. PMID:25398974

  14. Automatic detection of articulation disorders in children with cleft lip and palate.

    PubMed

    Maier, Andreas; Hönig, Florian; Bocklet, Tobias; Nöth, Elmar; Stelzle, Florian; Nkenke, Emeka; Schuster, Maria

    2009-11-01

    Speech of children with cleft lip and palate (CLP) is sometimes still disordered even after adequate surgical and nonsurgical therapies. Such speech shows complex articulation disorders, which are usually assessed perceptually, consuming time and manpower. Hence, there is a need for an easy-to-apply and reliable automatic method. To create a reference for an automatic system, speech data of 58 children with CLP were assessed perceptually by experienced speech therapists for characteristic phonetic disorders at the phoneme level. The first part of the article aims to detect such characteristics by a semiautomatic procedure and the second to evaluate a fully automatic, thus simple, procedure. The methods are based on a combination of speech processing algorithms. The semiautomatic method achieves moderate to good agreement (kappa approximately 0.6) for the detection of all phonetic disorders. On a speaker level, significant correlations between the perceptual evaluation and the automatic system of 0.89 are obtained. The fully automatic system yields a correlation on the speaker level of 0.81 to the perceptual evaluation. This correlation is in the range of the inter-rater correlation of the listeners. The automatic speech evaluation is able to detect phonetic disorders at an expert's level without any additional human postprocessing.
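
    The kappa statistic reported above measures chance-corrected agreement between the perceptual reference ratings and the automatic detections. As a reference point only, here is a minimal sketch of unweighted Cohen's kappa for two raters; the function name and toy labels are illustrative, and the paper's actual statistic may have been computed differently (e.g., weighted or multi-rater):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Raw proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Expected chance agreement from the two raters' marginal label rates.
    expected = sum(counts_a[label] * counts_b[label]
                   for label in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

    For instance, two raters agreeing on 3 of 4 binary judgments with marginals of 2/2 and 3/1 yield kappa = 0.5, inside the 0.4-0.6 "moderate" band the abstract cites.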

  15. Perceptual Bias and Loudness Change: An Investigation of Memory, Masking, and Psychophysiology

    NASA Astrophysics Data System (ADS)

    Olsen, Kirk N.

    Loudness is a fundamental aspect of human auditory perception that is closely associated with a sound's physical acoustic intensity. The dynamic quality of intensity change is an inherent acoustic feature in real-world listening domains such as speech and music. However, perception of loudness change in response to continuous intensity increases (up-ramps) and decreases (down-ramps) has received relatively little empirical investigation. Overestimation of loudness change in response to up-ramps is said to be linked to an adaptive survival response associated with looming (or approaching) motion in the environment. The hypothesised 'perceptual bias' to looming auditory motion suggests why perceptual overestimation of up-ramps may occur; however, it does not offer a causal explanation. It is concluded that post-stimulus judgements of perceived loudness change are significantly affected by a cognitive recency response bias that, until now, has been an artefact of experimental procedure. Perceptual end-level differences caused by duration-specific sensory adaptation at peripheral and/or central stages of auditory processing may explain differences in post-stimulus judgements of loudness change. Experiments that investigate human responses to acoustic intensity dynamics, encompassing topics from basic auditory psychophysics (e.g., sensory adaptation) to cognitive-emotional appraisal of increasingly complex stimulus events such as music and auditory warnings, are proposed for future research.

  16. Using game theory for perceptual tuned rate control algorithm in video coding

    NASA Astrophysics Data System (ADS)

    Luo, Jiancong; Ahmad, Ishfaq

    2005-03-01

    This paper proposes a game-theoretic rate control technique for video compression. Using a cooperative gaming approach, which has been utilized in several branches of natural and social sciences because of its enormous potential for solving constrained optimization problems, we propose a dual-level scheme to optimize the perceptual quality while guaranteeing "fairness" in bit allocation among macroblocks. At the frame level, the algorithm allocates target bits to frames based on their coding complexity. At the macroblock level, the algorithm distributes bits to macroblocks by defining a bargaining game. Macroblocks play cooperatively to compete for shares of resources (bits) to optimize their quantization scales while considering the Human Visual System's perceptual property. Since the whole frame is an entity perceived by viewers, macroblocks compete cooperatively under a global objective of achieving the best quality with the given bit constraint. The major advantage of the proposed approach is that the cooperative game leads to an optimal and fair bit allocation strategy based on the Nash Bargaining Solution. Another advantage is that it allows multi-objective optimization with multiple decision makers (macroblocks). The simulation results attest to the algorithm's ability to achieve accurate bit rate with good perceptual quality, and to maintain a stable buffer level.
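
    The weighted Nash Bargaining Solution has a simple closed form once each macroblock's utility is taken as its surplus over a minimum (disagreement) allocation. The sketch below is a simplification of mine, not the paper's actual game: it reduces the quantization-scale bargaining to a divisible bit budget with log-linear utilities, and the function name and weights are illustrative (e.g., coding complexity or HVS importance):

```python
def nash_bargain_bits(budget, min_bits, weights):
    """Closed-form weighted Nash Bargaining Solution for a divisible budget.

    Maximizes prod_i (b_i - d_i)**w_i subject to sum(b_i) == budget,
    where d_i = min_bits[i] is macroblock i's disagreement point and
    w_i = weights[i] encodes its relative importance.
    """
    surplus = budget - sum(min_bits)
    if surplus < 0:
        raise ValueError("budget cannot cover the minimum allocations")
    total_w = sum(weights)
    # Each player receives its disagreement point plus a weight-proportional
    # share of the remaining bits.
    return [d + w * surplus / total_w for d, w in zip(min_bits, weights)]
```

    Splitting the surplus in proportion to the weights is what makes the allocation both Pareto-optimal and "fair" in the NBS sense: no macroblock can gain without another losing relative to its disagreement point.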

  17. Examining Chemistry Students' Visual-Perceptual Skills Using the VSCS tool and Interview Data

    NASA Astrophysics Data System (ADS)

    Christian, Caroline

    The Visual-Spatial Chemistry Specific (VSCS) assessment tool was developed to test students' visual-perceptual skills, which are required to form a mental image of an object. The VSCS was designed around the theoretical framework of Rochford and Archer, which provides eight distinct and well-defined visual-perceptual skills with identified problems students might have with each skill set. Factor analysis was used to analyze the results during the validation process of the VSCS. Results showed that the eight factors could not be separated from each other, but instead two factors emerged as significant to the data. These two factors have been defined and described as a general visual-perceptual skill (factor 1) and a skill that adds on a second level of complexity by involving multiple viewpoints such as changing frames of reference. The questions included in the factor analysis were bolstered by the addition of an item response theory (IRT) analysis. Interviews were also conducted with twenty novice students to test face validity of the tool, and to document student approaches to solving visualization problems of this type. Students used five main physical resources or processes to solve the questions, but the resource that was the most successful was handling or building a physical representation of an object.

  18. Enhanced and diminished visuo-spatial information processing in autism depends on stimulus complexity.

    PubMed

    Bertone, Armando; Mottron, Laurent; Jelenic, Patricia; Faubert, Jocelyn

    2005-10-01

    Visuo-perceptual processing in autism is characterized by intact or enhanced performance on static spatial tasks and inferior performance on dynamic tasks, suggesting a deficit of dorsal visual stream processing in autism. However, previous findings by Bertone et al. indicate that neuro-integrative mechanisms used to detect complex motion, rather than motion perception per se, may be impaired in autism. We present here the first demonstration of concurrent enhanced and decreased performance in autism on the same visuo-spatial static task, wherein the only factor dichotomizing performance was the neural complexity required to discriminate grating orientation. The ability of persons with autism was found to be superior for identifying the orientation of simple, luminance-defined (or first-order) gratings but inferior for complex, texture-defined (or second-order) gratings. Using a flicker contrast sensitivity task, we demonstrated that this finding is probably not due to abnormal information processing at a sub-cortical level (magnocellular and parvocellular functioning). Together, these findings are interpreted as a clear indication of altered low-level perceptual information processing in autism, and confirm that the deficits and assets observed in autistic visual perception are contingent on the complexity of the neural network required to process a given type of visual stimulus. We suggest that atypical neural connectivity, resulting in enhanced lateral inhibition, may account for both enhanced and decreased low-level information processing in autism.

  19. Costs of storing colour and complex shape in visual working memory: Insights from pupil size and slow waves.

    PubMed

    Kursawe, Michael A; Zimmer, Hubert D

    2015-06-01

    We investigated the impact of perceptual processing demands on visual working memory of coloured complex random polygons during change detection. Processing load was assessed by pupil size (Exp. 1) and additionally slow wave potentials (Exp. 2). Task difficulty was manipulated by presenting different set sizes (1, 2, 4 items) and by making different features (colour, shape, or both) task-relevant. Memory performance in the colour condition was better than in the shape and both conditions, which did not differ. Pupil dilation and the posterior N1 increased with set size independent of the type of feature. In contrast, slow waves and a posterior P2 component showed set size effects but only if shape was task-relevant. In the colour condition slow waves did not vary with set size. We suggest that pupil size and N1 indicate different states of attentional effort corresponding to the number of presented items. In contrast, slow waves reflect processes related to encoding and maintenance strategies. The observation that their potentials vary with the type of feature (simple colour versus complex shape) indicates that perceptual complexity already influences encoding and storage and not only comparison of targets with memory entries at the moment of testing.

  20. Representation of Perceptual Color Space in Macaque Posterior Inferior Temporal Cortex (the V4 Complex)

    PubMed Central

    Bohon, Kaitlin S.; Hermann, Katherine L.; Hansen, Thorsten

    2016-01-01

    Abstract The lateral geniculate nucleus is thought to represent color using two populations of cone-opponent neurons [L vs M; S vs (L + M)], which establish the cardinal directions in color space (reddish vs cyan; lavender vs lime). How is this representation transformed to bring about color perception? Prior work implicates populations of glob cells in posterior inferior temporal cortex (PIT; the V4 complex), but the correspondence between the neural representation of color in PIT/V4 complex and the organization of perceptual color space is unclear. We compared color-tuning data for populations of glob cells and interglob cells to predictions obtained using models that varied in the color-tuning narrowness of the cells, and the color preference distribution across the populations. Glob cells were best accounted for by simulated neurons that have nonlinear (narrow) tuning and, as a population, represent a color space designed to be perceptually uniform (CIELUV). Multidimensional scaling and representational similarity analyses showed that the color space representations in both glob and interglob populations were correlated with the organization of CIELUV space, but glob cells showed a stronger correlation. Hue could be classified invariant to luminance with high accuracy given glob responses and above-chance accuracy given interglob responses. Luminance could be read out invariant to changes in hue in both populations, but interglob cells tended to prefer stimuli having luminance contrast, regardless of hue, whereas glob cells typically retained hue tuning as luminance contrast was modulated. The combined luminance/hue sensitivity of glob cells is predicted for neurons that can distinguish two colors of the same hue at different luminance levels (orange/brown). PMID:27595132

  1. Learning to see, seeing to learn: visual aspects of sensemaking

    NASA Astrophysics Data System (ADS)

    Russell, Daniel M.

    2003-06-01

    When one says "I see," what is usually meant is "I understand." But what does it mean to create a sense of understanding a large, complex problem, one with many interlocking pieces, sometimes ill-fitting data and the occasional bit of contradictory information? The traditional computer science perspective on helping people towards understanding is to provide an armamentarium of tools and techniques - databases, query tools and a variety of graphing methods. As a field, we have an overly simple perspective on what it means to grapple with real information. In practice, people who try to make sense of something (say, the life sciences, the Middle East, the large scale structure of the universe, their taxes) are faced with a complex collection of information, some in easy-to-digest structured forms, but with many relevant parts scattered hither and yon, in forms and shapes too difficult to manage. To create an understanding, we find that people create representations of complex information. Yet using representations relies on fairly sophisticated perceptual practices. These practices are in no way preordained, but subject to the kinds of perceptual and cognitive phenomena we see in everyday life. In order to understand our information environments, we need to learn to perceive these perceptual elements, and understand when they do, and do not, work to our advantage. A more powerful approach to the problem of supporting realistic sensemaking practice is to design information environments that accommodate both the world's information realities and people's cognitive characteristics. This paper argues that visual aspects of representation use often dominate sensemaking behavior, and illustrates this by showing three sensemaking tools we have built that take advantage of this property.

  2. Using Apex To Construct CPM-GOMS Models

    NASA Technical Reports Server (NTRS)

    John, Bonnie; Vera, Alonso; Matessa, Michael; Freed, Michael; Remington, Roger

    2006-01-01

    A process for automatically generating computational models of human/computer interactions as well as graphical and textual representations of the models has been built on the conceptual foundation of a method known in the art as CPM-GOMS. This method is so named because it combines (1) the task decomposition of analysis according to an underlying method known in the art as the goals, operators, methods, and selection (GOMS) method with (2) a model of human resource usage at the level of cognitive, perceptual, and motor (CPM) operations. CPM-GOMS models have made accurate predictions about behaviors of skilled computer users in routine tasks, but heretofore, such models have been generated in a tedious, error-prone manual process. In the present process, CPM-GOMS models are generated automatically from a hierarchical task decomposition expressed by use of a computer program, known as Apex, designed previously to be used to model human behavior in complex, dynamic tasks. An inherent capability of Apex for scheduling of resources automates the difficult task of interleaving the cognitive, perceptual, and motor resources that underlie common task operators (e.g., move and click mouse). The user interface of Apex automatically generates Program Evaluation Review Technique (PERT) charts, which enable modelers to visualize the complex parallel behavior represented by a model. Because interleaving and the generation of displays to aid visualization are automated, it is now feasible to construct arbitrarily long sequences of behaviors. The process was tested by using Apex to create a CPM-GOMS model of a relatively simple human/computer-interaction task and comparing the time predictions of the model and measurements of the times taken by human users in performing the various steps of the task. The task was to withdraw $80 in cash from an automated teller machine (ATM). 
For the test, a Visual Basic mockup of an ATM was created, with a provision for input from (and measurement of the performance of) the user via a mouse. The times predicted by the automatically generated model turned out to approximate the measured times fairly well (see figure). While these results are promising, there is a need for further development of the process. Moreover, it will be necessary to test other, more complex models: The actions required of the user in the ATM task are too sequential to involve substantial parallelism and interleaving and, hence, do not serve as an adequate test of the unique strength of CPM-GOMS models to accommodate parallelism and interleaving.

  3. The Comparison of Visual Working Memory Representations with Perceptual Inputs

    PubMed Central

    Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew

    2008-01-01

    The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. This study tests the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory-percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments that examined manual reaction times, saccadic reaction times, and event-related potential latencies. However, these experiments also demonstrated that a slow, limited-capacity process must occur before the observer can make a manual change-detection response. PMID:19653755

  4. Impaired integration of object knowledge and visual input in a case of ventral simultanagnosia with bilateral damage to area V4.

    PubMed

    Leek, E Charles; d'Avossa, Giovanni; Tainturier, Marie-Josèphe; Roberts, Daniel J; Yuen, Sung Lai; Hu, Mo; Rafal, Robert

    2012-01-01

    This study examines how brain damage can affect the cognitive processes that support the integration of sensory input and prior knowledge during shape perception. It is based on the first detailed study of acquired ventral simultanagnosia, which was found in a patient (M.T.) with posterior occipitotemporal lesions encompassing V4 bilaterally. Despite showing normal object recognition for single items in both accuracy and response times (RTs), and intact low-level vision assessed across an extensive battery of tests, M.T. was impaired in object identification with overlapping figures displays. Task performance was modulated by familiarity: Unlike controls, M.T. was faster with overlapping displays of abstract shapes than with overlapping displays of common objects. His performance with overlapping common object displays was also influenced by both the semantic relatedness and visual similarity of the display items. These findings challenge claims that visual perception is driven solely by feedforward mechanisms and show how brain damage can selectively impair high-level perceptual processes supporting the integration of stored knowledge and visual sensory input.

  5. Software-hardware complex for the input of telemetric information obtained from rocket studies of the radiation of the earth's upper atmosphere

    NASA Astrophysics Data System (ADS)

    Bazdrov, I. I.; Bortkevich, V. S.; Khokhlov, V. N.

    2004-10-01

    This paper describes a software-hardware complex for the input into a personal computer of telemetric information obtained by means of telemetry stations TRAL KR28, RTS-8, and TRAL K2N. Structural and functional diagrams are given of the input device and the hardware complex. Results that characterize the features of the input process and selective data of optical measurements of atmospheric radiation are given.

  6. Handwriting Fluency and Visuospatial Generativity at Primary School

    ERIC Educational Resources Information Center

    Stievano, Paolo; Michetti, Silvia; McClintock, Shawn M.; Levi, Gabriel; Scalisi, Teresa Gloria

    2016-01-01

    Handwriting is a complex activity that involves continuous interaction between lower-level perceptual-motor and higher-level cognitive processes. All handwriting models describe involvement of executive functions (EF) in handwriting development. Particular EF domains associated with handwriting include maintenance of information in working memory,…

  7. Neurocognitive Dimensions of Lexical Complexity in Polish

    ERIC Educational Resources Information Center

    Szlachta, Zanna; Bozic, Mirjana; Jelowicka, Aleksandra; Marslen-Wilson, William D.

    2012-01-01

    Neuroimaging studies of English suggest that speech comprehension engages two interdependent systems: a bilateral fronto-temporal network responsible for general perceptual and cognitive processing, and a specialised left-lateralised network supporting specifically linguistic processing. Using fMRI we test this hypothesis in Polish, a Slavic…

  8. Visual Screening: A Procedure.

    ERIC Educational Resources Information Center

    Williams, Robert T.

    Vision is a complex process involving three phases: physical (acuity), physiological (integrative), and psychological (perceptual). Although these phases cannot be considered discrete, they provide the basis for the visual screening procedure used by the Reading Services of Colorado State University and described in this document. Ten tests are…

  9. Role of perceptual and organizational factors in amnesics' recall of the Rey-Osterrieth complex figure: a comparison of three amnesic groups.

    PubMed

    Kixmiller, J S; Verfaellie, M M; Mather, M M; Cermak, L S

    2000-04-01

    To examine the contribution of visual-perceptual and visual-organizational factors to visual memory in amnesia, Korsakoff, medial temporal, and anterior communicating artery (ACoA) aneurysm amnesics' copy, organization, and recall performance on the Rey-Osterrieth Complex Figure was assessed. Korsakoff patients were matched to medial temporal patients in terms of severity of amnesia, while the ACoA group, which was less severely amnesic, was matched to the Korsakoff patients on performance on executive tasks. Results indicated that while both the ACoA and Korsakoff groups had poorer copy accuracy and organization than controls, only the Korsakoff patients' copy accuracy was worse than that of the other two amnesic groups. While the Korsakoff patients' visuoperceptual deficits could partially explain this group's poor performance at immediate recall, the Korsakoff group's comparatively worse performance at delayed recall could not be accounted for by poor copy accuracy, reduced visual organization, or even the combined influence of these two factors.

  10. Dynamics of fingertip contact during the onset of tangential slip

    PubMed Central

    Delhaye, Benoit; Lefèvre, Philippe; Thonnard, Jean-Louis

    2014-01-01

    Through highly precise perceptual and sensorimotor activities, the human tactile system continuously acquires information about the environment. Mechanical interactions between the skin at the point of contact and a touched surface serve as the source of this tactile information. Using a dedicated custom robotic platform, we imaged skin deformation at the contact area between the finger and a flat surface during the onset of tangential sliding movements in four different directions (proximal, distal, radial and ulnar) and with varying normal force and tangential speeds. This simple tactile event revealed complex mechanics. We observed a reduction of the contact area as tangential force increased and proposed to explain this phenomenon by nonlinear stiffening of the skin. The deformation's shape and amplitude were highly dependent on stimulation direction. We conclude that the complex, but highly patterned and reproducible, deformations measured in this study are a potential source of information for the central nervous system and that further mechanical measurements are needed to better understand tactile perceptual and motor performances. PMID:25253033

  11. Vibrotactile stimulation of fast-adapting cutaneous afferents from the foot modulates proprioception at the ankle joint.

    PubMed

    Mildren, Robyn L; Bent, Leah R

    2016-04-15

    It has previously been shown that cutaneous sensory input from across a broad region of skin can influence proprioception at joints of the hand. The present experiment tested whether cutaneous input from different skin regions across the foot can influence proprioception at the ankle joint. The ability to passively match ankle joint position (17° and 7° plantar flexion and 7° dorsiflexion) was measured while cutaneous vibration was applied to the sole (heel, distal metatarsals) or dorsum of the target foot. Vibration was applied at two different frequencies to preferentially activate Meissner's corpuscles (45 Hz, 80 μm) or Pacinian corpuscles (255 Hz, 10 μm) at amplitudes ∼3 dB above mean perceptual thresholds. Results indicated that cutaneous input from all skin regions across the foot could influence joint-matching error and variability, although the strongest effects were observed with heel vibration. Furthermore, the influence of cutaneous input from each region was modulated by joint angle; in general, vibration had a limited effect on matching in dorsiflexion compared with matching in plantar flexion. Unlike previous results in the upper limb, we found no evidence that Pacinian input exerted a stronger influence on proprioception compared with Meissner input. Findings from this study suggest that fast-adapting cutaneous input from the foot modulates proprioception at the ankle joint in a passive joint-matching task. These results indicate that there is interplay between tactile and proprioceptive signals originating from the foot and ankle.
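
    Setting stimulation ~3 dB above perceptual threshold, as above, amounts to a fixed multiplicative scaling of the threshold amplitude. Assuming the conventional amplitude definition dB = 20·log10(a/a_ref) (the paper may use a different reference convention), +3 dB multiplies the threshold amplitude by about 1.41. A minimal sketch with an illustrative function name:

```python
def amplitude_above_threshold(threshold_amp, db_above):
    """Scale an amplitude threshold upward by db_above decibels.

    Uses the amplitude convention dB = 20 * log10(a / a_ref), so
    +3 dB corresponds to a factor of 10**(3/20), roughly 1.41.
    """
    return threshold_amp * 10 ** (db_above / 20)
```

    Under this convention, a 10 μm Pacinian-range threshold driven at +3 dB would be delivered at roughly 14 μm.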

  12. Perceptual and response-dependent profiles of attention in children with ADHD.

    PubMed

    Caspersen, Ida Dyhr; Petersen, Anders; Vangkilde, Signe; Plessen, Kerstin Jessica; Habekost, Thomas

    2017-05-01

    Attention-deficit hyperactivity disorder (ADHD) is a complex developmental neuropsychiatric disorder, characterized by inattentiveness, impulsivity, and hyperactivity. Recent literature suggests that a potential core deficit underlying these behaviors may involve inefficient processing when contextual stimulation is low. In order to specify this inefficiency, the aim of the present study was to disentangle perceptual and response-based deficits of attention by supplementing classic reaction time (RT) measures with an accuracy-only test. Moreover, it was explored whether ADHD symptom severity was systematically related to perceptual and response-based processes. We applied an RT-independent paradigm (Bundesen, 1990) and a sustained attention task (Dockree et al., 2006) to test visual attention in 24 recently diagnosed, medication-naïve children with ADHD, 14 clinical controls with pervasive developmental disorder, and 57 healthy controls. Outcome measures included perceptual processing speed, capacity of visual short-term memory, and errors of commission and omission. Children with ADHD processed information abnormally slowly (d = 0.92), and performed poorly on RT variability and response stability (d's ranging from 0.60 to 1.08). In the ADHD group only, slowed visual processing speed was significantly related to response lapses (omission errors). This correlation was not explained by behavioral ratings of ADHD severity. Based on combined assessment of perceptual and response-dependent variables of attention, the present study demonstrates a specific cognitive profile in children with ADHD. This profile distinguishes the disorder at a basic level of attentional functioning, and may define subgroups of children with ADHD in a way that is more sensitive than clinical rating scales.

  13. Semantic knowledge fractionations: verbal propositions vs. perceptual input? Evidence from a child with Klinefelter syndrome.

    PubMed

    Robinson, Sally J; Temple, Christine M

    2013-04-01

    This paper addresses the relative independence of different types of lexical- and factually-based semantic knowledge in JM, a 9-year-old boy with Klinefelter syndrome (KS). JM was matched to typically developing (TD) controls on the basis of chronological age. Lexical-semantic knowledge was investigated for common noun (CN) and mathematical vocabulary items (MV). Factually-based semantic knowledge was investigated for general and number facts. For CN items, JM's lexical stores were of a normal size but the volume of correct 'sensory feature' semantic knowledge he generated within verbal item descriptions was significantly reduced. He was also significantly impaired at naming item descriptions and pictures, particularly for fruit and vegetables. There was also weak object decision for fruit and vegetables. In contrast, for MV items, JM's lexical stores were elevated, with no significant difference in the amount and type of correct semantic knowledge generated within verbal item descriptions and normal naming. JM's fact retrieval accuracy was normal for all types of factual knowledge. JM's performance indicated a dissociation between the representation of CN and MV vocabulary items during development. JM's preserved semantic knowledge of facts in the face of impaired semantic knowledge of vocabulary also suggests that factually-based semantic knowledge representation is not dependent on normal lexical-semantic knowledge during development. These findings are discussed in relation to the emergence of distinct semantic knowledge representations during development, due to differing degrees of dependency upon the acquisition and representation of semantic knowledge from verbal propositions and perceptual input.

  14. Visual motion modulates pattern sensitivity ahead, behind, and beside motion

    PubMed Central

    Arnold, Derek H.; Marinovic, Welber; Whitney, David

    2014-01-01

    Retinal motion can modulate visual sensitivity. For instance, low contrast drifting waveforms (targets) can be easier to detect when abutting the leading edges of movement in adjacent high contrast waveforms (inducers), rather than the trailing edges. This target-inducer interaction is contingent on the adjacent waveforms being consistent with one another – in-phase as opposed to out-of-phase. It has been suggested that this happens because there is a perceptually explicit predictive signal at leading edges of motion that summates with low contrast physical input – a ‘predictive summation’. Another possible explanation is a phase sensitive ‘spatial summation’, a summation of physical inputs spread across the retina (not predictive signals). This should be non-selective in terms of position – it should be evident at leading, adjacent, and at trailing edges of motion. To tease these possibilities apart, we examined target sensitivity at leading, adjacent, and trailing edges of motion. We also examined target sensitivity adjacent to flicker, and for a stimulus that is less susceptible to spatial summation, as it sums to grey across a small retinal expanse. We found evidence for spatial summation in all but the last condition. Finally, we examined sensitivity to an absence of signal at leading and trailing edges of motion, finding greater sensitivity at leading edges. These results are inconsistent with the existence of a perceptually explicit predictive signal in advance of drifting waveforms. Instead, we suggest that phase-contingent target-inducer modulations of sensitivity are explicable in terms of a directionally modulated spatial summation. PMID:24699250

  15. Wide-dynamic-range forward suppression in marmoset inferior colliculus neurons is generated centrally and accounts for perceptual masking.

    PubMed

    Nelson, Paul C; Smith, Zachary M; Young, Eric D

    2009-02-25

    An organism's ability to detect and discriminate sensory inputs depends on the recent stimulus history. For example, perceptual detection thresholds for a brief tone can be elevated by as much as 50 dB when following a masking stimulus. Previous work suggests that such forward masking is not a direct result of peripheral neural adaptation; the central pathway apparently modifies the representation in a way that further attenuates the input's response to short probe signals. Here, we show that much of this transformation is complete by the level of the inferior colliculus (IC). Single-neuron extracellular responses were recorded in the central nucleus of the awake marmoset IC. The threshold for a 20 ms probe tone presented at best frequency was determined for various masker-probe delays, over a range of masker sound pressure levels (SPLs) and frequencies. The most striking aspect of the data was the increased potency of forward maskers as their SPL was increased, despite the fact that the excitatory response to the masker was often saturating or nonmonotonic over the same range of levels. This led to probe thresholds at high masker levels that were almost always higher than those observed in the auditory nerve. Probe threshold shifts were not usually caused by a persistent excitatory response to the masker; instead we propose a wide-dynamic-range inhibitory mechanism locked to sound offset as an explanation for several key aspects of the data. These findings further delineate the role of subcortical auditory processing in the generation of a context-dependent representation of ongoing acoustic scenes.

  16. Effects of degraded sensory input on memory for speech: behavioral data and a test of biologically constrained computational models.

    PubMed

    Piquado, Tepring; Cousins, Katheryn A Q; Wingfield, Arthur; Miller, Paul

    2010-12-13

    Poor hearing acuity reduces memory for spoken words, even when the words are presented with enough clarity for correct recognition. An "effortful hypothesis" suggests that the perceptual effort needed for recognition draws from resources that would otherwise be available for encoding the word in memory. To assess this hypothesis, we conducted a behavioral task requiring immediate free recall of word-lists, some of which contained an acoustically masked word that was just above perceptual threshold. Results show that masking a word reduces the recall of that word and words prior to it, as well as weakening the linking associations between the masked and prior words. In contrast, recall probabilities of words following the masked word are not affected. To account for this effect, we conducted computational simulations testing two classes of models: Associative Linking Models and Short-Term Memory Buffer Models. Only a model that integrated both contextual linking and buffer components matched all of the effects of masking observed in our behavioral data. In this Linking-Buffer Model, the masked word disrupts a short-term memory buffer, causing associative links of words in the buffer to be weakened, affecting memory for the masked word and the word prior to it, while allowing links of words following the masked word to be spared. We suggest that these data support the so-called "effortful hypothesis", whereby distorted input has a detrimental impact on prior information stored in short-term memory. Copyright © 2010 Elsevier B.V. All rights reserved.

  17. Evidence for an All-Or-None Perceptual Response: Single-Trial Analyses of Magnetoencephalography Signals Indicate an Abrupt Transition Between Visual Perception and Its Absence

    PubMed Central

    Sekar, Krithiga; Findley, William M.; Llinás, Rodolfo R.

    2014-01-01

    Whether consciousness is an all-or-none or graded phenomenon is an area of inquiry that has received considerable interest in neuroscience and is as yet still debated. In this magnetoencephalography (MEG) study we used a single stimulus paradigm with sub-threshold, threshold and supra-threshold duration inputs to assess whether stimulus perception is continuous with or abruptly differentiated from unconscious stimulus processing in the brain. By grouping epochs according to stimulus identification accuracy and exposure duration, we were able to investigate whether a high-amplitude perception-related cortical event (1) was evoked only for conditions where perception was most probable, (2) had invariant amplitude once evoked, and (3) was largely absent for conditions where perception was least probable (criteria satisfying an all-or-none hypothesis). We found that averaged evoked responses showed a gradual increase in amplitude with increasing perceptual strength. However, single trial analyses demonstrated that stimulus perception was correlated with an all-or-none response, the temporal precision of which increased systematically as perception transitioned from ambiguous to robust states. Due to poor signal-to-noise resolution of single trial data, whether perception-related responses, whenever present, were invariant in amplitude could not be unambiguously demonstrated. However, our findings strongly suggest that visual perception of simple stimuli is associated with an all-or-none cortical evoked response, the temporal precision of which varies as a function of perceptual strength. PMID:22020091

  18. A computational developmental model for specificity and transfer in perceptual learning.

    PubMed

    Solgi, Mojtaba; Liu, Taosheng; Weng, Juyang

    2013-01-04

    How and under what circumstances the training effects of perceptual learning (PL) transfer to novel situations is critical to our understanding of generalization and abstraction in learning. Although PL is generally believed to be highly specific to the trained stimulus, a series of psychophysical studies have recently shown that training effects can transfer to untrained conditions under certain experimental protocols. In this article, we present a brain-inspired, neuromorphic computational model of the Where-What visuomotor pathways which successfully explains both the specificity and transfer of perceptual learning. The major architectural novelty is that each feature neuron has both sensory and motor inputs. The network of neurons is autonomously developed from experience, using a refined Hebbian-learning rule and lateral competition, which altogether result in neuronal recruitment. Our hypothesis is that certain paradigms of experiments trigger two-way (descending and ascending) off-task processes about the untrained condition which lead to recruitment of more neurons in lower feature representation areas as well as higher concept representation areas for the untrained condition, hence the transfer. We put forward a novel proposition that gated self-organization of the connections during the off-task processes accounts for the observed transfer effects. Simulation results showed transfer of learning across retinal locations in a Vernier discrimination task in a double-training procedure, comparable to previous psychophysical data (Xiao et al., 2008). To the best of our knowledge, this model is the first neurally-plausible model to explain both transfer and specificity in a PL setting.
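
    The "refined Hebbian-learning rule and lateral competition" summarized above can be illustrated, in much simplified form, by a winner-take-all Hebbian update: only the best-matching neuron adapts, so repeated exposure recruits it for the trained stimulus. This is a generic sketch under that assumption, not the paper's actual rule:

```python
import numpy as np

def competitive_hebbian_step(weights, x, lr=0.05):
    """One Hebbian update with lateral competition (winner-take-all):
    only the best-matching neuron moves its weights toward the input,
    then renormalizes. A simplified stand-in for the model's rule."""
    winner = int(np.argmax(weights @ x))
    weights[winner] += lr * (x - weights[winner])
    weights[winner] /= np.linalg.norm(weights[winner])
    return winner

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 8))                    # 4 feature neurons, 8 inputs
W /= np.linalg.norm(W, axis=1, keepdims=True)
stimulus = np.eye(8)[0]                        # a fixed "trained" input
for _ in range(200):
    competitive_hebbian_step(W, stimulus)
winner = int(np.argmax(W @ stimulus))
print(round(float(W[winner] @ stimulus), 2))   # 1.0: a neuron has been recruited
```

    The paper's transfer account hinges on this recruitment happening for the untrained condition as well, via off-task processing.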

  19. Quantified acoustic-optical speech signal incongruity identifies cortical sites of audiovisual speech processing

    PubMed Central

    Bernstein, Lynne E.; Lu, Zhong-Lin; Jiang, Jintao

    2008-01-01

    A fundamental question about human perception is how the speech perceiving brain combines auditory and visual phonetic stimulus information. We assumed that perceivers learn the normal relationship between acoustic and optical signals. We hypothesized that when the normal relationship is perturbed by mismatching the acoustic and optical signals, cortical areas responsible for audiovisual stimulus integration respond as a function of the magnitude of the mismatch. To test this hypothesis, in a previous study, we developed quantitative measures of acoustic-optical speech stimulus incongruity that correlate with perceptual measures. In the current study, we presented low incongruity (LI, matched), medium incongruity (MI, moderately mismatched), and high incongruity (HI, highly mismatched) audiovisual nonsense syllable stimuli during fMRI scanning. Perceptual responses differed as a function of the incongruity level, and BOLD measures were found to vary regionally and quantitatively with perceptual and quantitative incongruity levels. Each increase in level of incongruity resulted in an increase in overall levels of cortical activity and in additional activations. However, the only cortical region that demonstrated differential sensitivity to the three stimulus incongruity levels (HI > MI > LI) was a subarea of the left supramarginal gyrus (SMG). The left SMG might support a fine-grained analysis of the relationship between audiovisual phonetic input in comparison with stored knowledge, as hypothesized here. The methods here show that quantitative manipulation of stimulus incongruity is a new and powerful tool for disclosing the system that processes audiovisual speech stimuli. PMID:18495091

  20. Vibrotactile masking experiments reveal accelerated somatosensory processing in congenitally blind braille readers.

    PubMed

    Bhattacharjee, Arindam; Ye, Amanda J; Lisak, Joy A; Vargas, Maria G; Goldreich, Daniel

    2010-10-27

    Braille reading is a demanding task that requires the identification of rapidly varying tactile patterns. During proficient reading, neighboring characters impact the fingertip at ∼100 ms intervals, and adjacent raised dots within a character at 50 ms intervals. Because the brain requires time to interpret afferent sensorineural activity, among other reasons, tactile stimuli separated by such short temporal intervals pose a challenge to perception. How, then, do proficient Braille readers successfully interpret inputs arising from their fingertips at such rapid rates? We hypothesized that somatosensory perceptual consolidation occurs more rapidly in proficient Braille readers. If so, Braille readers should outperform sighted participants on masking tasks, which demand rapid perceptual processing, but would not necessarily outperform the sighted on tests of simple vibrotactile sensitivity. To investigate, we conducted two-interval forced-choice vibrotactile detection, amplitude discrimination, and masking tasks on the index fingertips of 89 sighted and 57 profoundly blind humans. Sighted and blind participants had similar unmasked detection (25 ms target tap) and amplitude discrimination (compared with 100 μm reference tap) thresholds, but congenitally blind Braille readers, the fastest readers among the blind participants, exhibited significantly less masking than the sighted (masker, 50 Hz, 50 μm; target-masker delays, ±50 and ±100 ms). Indeed, Braille reading speed correlated significantly and specifically with masking task performance, and in particular with the backward masking decay time constant. We conclude that vibrotactile sensitivity is unchanged but that perceptual processing is accelerated in congenitally blind Braille readers.
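
    The backward masking decay time constant mentioned above comes from fitting an exponential decay to threshold elevation as a function of target-masker delay. A sketch of that fit with hypothetical numbers (not data from the study):

```python
import numpy as np

# Hypothetical masking data: threshold elevation (dB) vs. backward
# target-masker delay (ms), generated with a true time constant of 80 ms
delays_ms = np.array([25.0, 50.0, 100.0, 200.0, 400.0])
elevation_db = 12.0 * np.exp(-delays_ms / 80.0)

# A log-linear least-squares fit recovers the decay time constant,
# the quantity reported to correlate with Braille reading speed
slope, _intercept = np.polyfit(delays_ms, np.log(elevation_db), 1)
tau_ms = -1.0 / slope
print(round(tau_ms, 1))  # 80.0
```

    A smaller time constant means masking dissipates faster, i.e. faster perceptual consolidation.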

  1. Vibrotactile masking experiments reveal accelerated somatosensory processing in congenitally blind Braille readers

    PubMed Central

    Bhattacharjee, Arindam; Ye, Amanda J.; Lisak, Joy A.; Vargas, Maria G.; Goldreich, Daniel

    2010-01-01

    Braille reading is a demanding task that requires the identification of rapidly varying tactile patterns. During proficient reading, neighboring characters impact the fingertip at about 100-ms intervals, and adjacent raised dots within a character at 50-ms intervals. Because the brain requires time to interpret afferent sensorineural activity, among other reasons, tactile stimuli separated by such short temporal intervals pose a challenge to perception. How, then, do proficient Braille readers successfully interpret inputs arising from their fingertips at such rapid rates? We hypothesized that somatosensory perceptual consolidation occurs more rapidly in proficient Braille readers. If so, Braille readers should outperform sighted participants on masking tasks, which demand rapid perceptual processing, but would not necessarily outperform the sighted on tests of simple vibrotactile sensitivity. To investigate, we conducted two-interval forced-choice vibrotactile detection, amplitude discrimination, and masking tasks on the index fingertips of 89 sighted and 57 profoundly blind humans. Sighted and blind participants had similar unmasked detection (25-ms target tap) and amplitude discrimination (compared to 100-micron reference tap) thresholds, but congenitally blind Braille readers, the fastest readers among the blind participants, exhibited significantly less masking than the sighted (masker: 50-Hz, 50-micron; target-masker delays ±50 and ±100 ms). Indeed, Braille reading speed correlated significantly and specifically with masking task performance, and in particular with the backward masking decay time constant. We conclude that vibrotactile sensitivity is unchanged, but that perceptual processing is accelerated in congenitally blind Braille readers. PMID:20980584

  2. Sustained Perceptual Deficits from Transient Sensory Deprivation

    PubMed Central

    Sanes, Dan H.

    2015-01-01

    Sensory pathways display heightened plasticity during development, yet the perceptual consequences of early experience are generally assessed in adulthood. This approach does not allow one to identify transient perceptual changes that may be linked to the central plasticity observed in juvenile animals. Here, we determined whether a brief period of bilateral auditory deprivation affects sound perception in developing and adult gerbils. Animals were reared with bilateral earplugs, either from postnatal day 11 (P11) to postnatal day 23 (P23) (a manipulation previously found to disrupt gerbil cortical properties), or from P23-P35. Fifteen days after earplug removal and restoration of normal thresholds, animals were tested on their ability to detect the presence of amplitude modulation (AM), a temporal cue that supports vocal communication. Animals reared with earplugs from P11-P23 displayed elevated AM detection thresholds, compared with age-matched controls. In contrast, an identical period of earplug rearing at a later age (P23-P35) did not impair auditory perception. Although the AM thresholds of earplug-reared juveniles improved during a week of repeated testing, a subset of juveniles continued to display a perceptual deficit. Furthermore, although the perceptual deficits induced by transient earplug rearing had resolved for most animals by adulthood, a subset of adults displayed impaired performance. Control experiments indicated that earplugging did not disrupt the integrity of the auditory periphery. Together, our results suggest that P11-P23 encompasses a critical period during which sensory deprivation disrupts central mechanisms that support auditory perceptual skills. SIGNIFICANCE STATEMENT Sensory systems are particularly malleable during development. This heightened degree of plasticity is beneficial because it enables the acquisition of complex skills, such as music or language. However, this plasticity comes with a cost: nervous system development displays an increased vulnerability to the sensory environment. Here, we identify a precise developmental window during which mild hearing loss affects the maturation of an auditory perceptual cue that is known to support animal communication, including human speech. Furthermore, animals reared with transient hearing loss display deficits in perceptual learning. Our results suggest that speech and language delays associated with transient or permanent childhood hearing loss may be accounted for, in part, by deficits in central auditory processing mechanisms. PMID:26224865
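
    The amplitude-modulation (AM) stimuli used in such detection tasks can be sketched as a sinusoidally modulated carrier; the detection threshold is the smallest modulation depth distinguishable from an unmodulated tone. Parameters below are illustrative, not those of the study:

```python
import numpy as np

def am_tone(carrier_hz, mod_hz, depth, dur_s, fs=44100):
    """Sinusoidally amplitude-modulated tone. depth = 0 gives an
    unmodulated carrier; depth = 1 gives 100% modulation."""
    t = np.arange(int(dur_s * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

flat = am_tone(1000, 10, 0.0, 0.5)       # unmodulated reference
modulated = am_tone(1000, 10, 0.5, 0.5)  # 50% AM depth
print(flat.max() <= 1.0, modulated.max() > 1.2)  # True True
```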

  3. Abnormal Functional Brain Asymmetry in Depression: Evidence of Biologic Commonality Between Major Depression and Dysthymia

    PubMed Central

    Bruder, Gerard E.; Stewart, Jonathan W.; Hellerstein, David; Alvarenga, Jorge E.; Alschuler, Daniel; McGrath, Patrick J.

    2012-01-01

    Prior studies have found abnormalities of functional brain asymmetry in patients having a major depressive disorder (MDD). This study aimed to replicate findings of reduced right hemisphere advantage for perceiving dichotic complex tones in depressed patients, and to determine whether patients having “pure” dysthymia show the same abnormality of perceptual asymmetry as MDD. It also examined gender differences in lateralization, and the extent to which abnormalities of perceptual asymmetry in depressed patients are dependent on gender. Unmedicated patients having either a MDD (n=96) or “pure” dysthymic disorder (n=42) and healthy controls (n=114) were tested on dichotic fused-words and complex-tone tests. Patient and control groups differed in right hemisphere advantage for complex tones, but not left hemisphere advantage for words. Reduced right hemisphere advantage for tones was equally present in MDD and dysthymia, but was more evident among depressed men than depressed women. Also, healthy men had greater hemispheric asymmetry than healthy women for both words and tones, whereas this gender difference was not seen for depressed patients. Dysthymia and MDD share a common abnormality of hemispheric asymmetry for dichotic listening. PMID:22397909

  4. Abnormal functional brain asymmetry in depression: evidence of biologic commonality between major depression and dysthymia.

    PubMed

    Bruder, Gerard E; Stewart, Jonathan W; Hellerstein, David; Alvarenga, Jorge E; Alschuler, Daniel; McGrath, Patrick J

    2012-04-30

    Prior studies have found abnormalities of functional brain asymmetry in patients having a major depressive disorder (MDD). This study aimed to replicate findings of reduced right hemisphere advantage for perceiving dichotic complex tones in depressed patients, and to determine whether patients having "pure" dysthymia show the same abnormality of perceptual asymmetry as MDD. It also examined gender differences in lateralization, and the extent to which abnormalities of perceptual asymmetry in depressed patients are dependent on gender. Unmedicated patients having either a MDD (n=96) or "pure" dysthymic disorder (n=42) and healthy controls (n=114) were tested on dichotic fused-words and complex-tone tests. Patient and control groups differed in right hemisphere advantage for complex tones, but not left hemisphere advantage for words. Reduced right hemisphere advantage for tones was equally present in MDD and dysthymia, but was more evident among depressed men than depressed women. Also, healthy men had greater hemispheric asymmetry than healthy women for both words and tones, whereas this gender difference was not seen for depressed patients. Dysthymia and MDD share a common abnormality of hemispheric asymmetry for dichotic listening. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  5. How Random Is Social Behaviour? Disentangling Social Complexity through the Study of a Wild House Mouse Population

    PubMed Central

    Perony, Nicolas; Tessone, Claudio J.; König, Barbara; Schweitzer, Frank

    2012-01-01

    Out of all the complex phenomena displayed in the behaviour of animal groups, many are thought to be emergent properties of rather simple decisions at the individual level. Some of these phenomena may also be explained by random processes only. Here we investigate to what extent the interaction dynamics of a population of wild house mice (Mus domesticus) in their natural environment can be explained by a simple stochastic model. We first introduce the notion of perceptual landscape, a novel tool used here to describe the utilisation of space by the mouse colony based on the sampling of individuals in discrete locations. We then implement the behavioural assumptions of the perceptual landscape in a multi-agent simulation to verify their accuracy in the reproduction of observed social patterns. We find that many high-level features – with the exception of territoriality – of our behavioural dataset can be accounted for at the population level through the use of this simplified representation. Our findings underline the potential importance of random factors in the apparent complexity of the mice's social structure. These results resonate in the general context of adaptive behaviour versus elementary environmental interactions. PMID:23209394

  6. Auditory Spatial Layout

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  7. Can mergers-in-progress be unmerged in speech accommodation?

    PubMed

    Babel, Molly; McAuliffe, Michael; Haber, Graham

    2013-01-01

    This study examines spontaneous phonetic accommodation of a dialect with distinct categories by speakers who are in the process of merging those categories. We focus on the merger of the NEAR and SQUARE lexical sets in New Zealand English, presenting New Zealand participants with an unmerged speaker of Australian English. Mergers-in-progress are a uniquely interesting sound change as they showcase the asymmetry between speech perception and production. Yet, we examine mergers using spontaneous phonetic imitation, which is necessarily a behavior in which perceptual input influences speech production. Phonetic imitation is quantified by a perceptual measure and an acoustic calculation of mergedness using a Pillai-Bartlett trace. The results from both analyses indicate spontaneous phonetic imitation is moderated by extra-linguistic factors such as the valence of assigned conditions and social bias. We also find evidence for a decrease in the degree of mergedness in post-exposure productions. Taken together, our results suggest that under the appropriate conditions New Zealanders phonetically accommodate to Australian English and that in the process of speech imitation, mergers-in-progress can, but do not consistently, become less merged.
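
    The Pillai-Bartlett trace used above as the acoustic measure of mergedness is a MANOVA statistic: for two vowel categories measured on several acoustic dimensions (e.g. F1 and F2), it is near 0 when the distributions coincide (merged) and approaches 1 as they separate. A sketch with simulated formant values (purely illustrative, not the study's data):

```python
import numpy as np

def pillai_trace(group_a, group_b):
    """Pillai-Bartlett trace for a one-way, two-group MANOVA:
    tr(H @ inv(H + E)), where H and E are the between- and
    within-group SSCP matrices. 0 = merged, near 1 = distinct."""
    X = np.vstack([group_a, group_b])
    grand_mean = X.mean(axis=0)
    p = X.shape[1]
    H = np.zeros((p, p))   # between-group sums of squares and cross-products
    E = np.zeros((p, p))   # within-group sums of squares and cross-products
    for g in (group_a, group_b):
        d = (g.mean(axis=0) - grand_mean)[:, None]
        H += len(g) * (d @ d.T)
        centered = g - g.mean(axis=0)
        E += centered.T @ centered
    return float(np.trace(H @ np.linalg.inv(H + E)))

rng = np.random.default_rng(0)
near = rng.normal([400.0, 2200.0], 50.0, size=(30, 2))  # hypothetical NEAR tokens (F1, F2)
square_merged = rng.normal([400.0, 2200.0], 50.0, size=(30, 2))
square_distinct = rng.normal([650.0, 1900.0], 50.0, size=(30, 2))

print(round(pillai_trace(near, square_merged), 2))    # low: merged speaker
print(round(pillai_trace(near, square_distinct), 2))  # high: unmerged speaker
```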

  8. Can mergers-in-progress be unmerged in speech accommodation?

    PubMed Central

    Babel, Molly; McAuliffe, Michael; Haber, Graham

    2013-01-01

    This study examines spontaneous phonetic accommodation of a dialect with distinct categories by speakers who are in the process of merging those categories. We focus on the merger of the NEAR and SQUARE lexical sets in New Zealand English, presenting New Zealand participants with an unmerged speaker of Australian English. Mergers-in-progress are a uniquely interesting sound change as they showcase the asymmetry between speech perception and production. Yet, we examine mergers using spontaneous phonetic imitation, which is necessarily a behavior in which perceptual input influences speech production. Phonetic imitation is quantified by a perceptual measure and an acoustic calculation of mergedness using a Pillai-Bartlett trace. The results from both analyses indicate spontaneous phonetic imitation is moderated by extra-linguistic factors such as the valence of assigned conditions and social bias. We also find evidence for a decrease in the degree of mergedness in post-exposure productions. Taken together, our results suggest that under the appropriate conditions New Zealanders phonetically accommodate to Australian English and that in the process of speech imitation, mergers-in-progress can, but do not consistently, become less merged. PMID:24069011

  9. Cognitive, perceptual and action-oriented representations of falling objects.

    PubMed

    Zago, Myrka; Lacquaniti, Francesco

    2005-01-01

    We interact daily with moving objects. How accurate are our predictions about objects' motions? What sources of information do we use? These questions have received wide attention from a variety of different viewpoints. On one end of the spectrum are the ecological approaches assuming that all the information about the visual environment is present in the optic array, with no need to postulate conscious or unconscious representations. On the other end of the spectrum are the constructivist approaches assuming that a more or less accurate representation of the external world is built in the brain using explicit or implicit knowledge or memory besides sensory inputs. Representations can be related to naive physics or to context cue-heuristics or to the construction of internal copies of environmental invariants. We address the issue of prediction of objects' fall at different levels. Cognitive understanding and perceptual judgment of simple Newtonian dynamics can be surprisingly inaccurate. By contrast, motor interactions with falling objects are often very accurate. We argue that the pragmatic action-oriented behaviour and the perception-oriented behaviour may use different modes of operation and different levels of representation.

  10. Perceptual and academic patterns of learning-disabled/gifted students.

    PubMed

    Waldron, K A; Saphire, D G

    1992-04-01

    This research explored ways gifted children with learning disabilities perceive and recall auditory and visual input and apply this information to reading, mathematics, and spelling. 24 learning-disabled/gifted children and a matched control group of normally achieving gifted students were tested for oral reading, word recognition and analysis, listening comprehension, and spelling. In mathematics, they were tested for numeration, mental and written computation, word problems, and numerical reasoning. To explore perception and memory skills, students were administered formal tests of visual and auditory memory as well as auditory discrimination of sounds. Their responses to reading and to mathematical computations were further considered for evidence of problems in visual discrimination, visual sequencing, and visual spatial areas. Analyses indicated that these learning-disabled/gifted students were significantly weaker than controls in their decoding skills, in spelling, and in most areas of mathematics. They were also significantly weaker in auditory discrimination and memory, and in visual discrimination, sequencing, and spatial abilities. Conclusions are that these underlying perceptual and memory deficits may be related to students' academic problems.

  11. Modeling the Effects of Perceptual Load: Saliency, Competitive Interactions, and Top-Down Biases

    PubMed Central

    Neokleous, Kleanthis; Shimi, Andria; Avraamides, Marios N.

    2016-01-01

    A computational model of visual selective attention has been implemented to account for experimental findings on the Perceptual Load Theory (PLT) of attention. The model was designed based on existing neurophysiological findings on attentional processes with the objective to offer an explicit and biologically plausible formulation of PLT. Simulation results verified that the proposed model is capable of capturing the basic pattern of results that support the PLT as well as findings that are considered contradictory to the theory. Importantly, the model is able to reproduce the behavioral results from a dilution experiment, providing thus a way to reconcile PLT with the competing Dilution account. Overall, the model presents a novel account for explaining PLT effects on the basis of the low-level competitive interactions among neurons that represent visual input and the top-down signals that modulate neural activity. The implications of the model concerning the debate on the locus of selective attention as well as the origins of distractor interference in visual displays of varying load are discussed. PMID:26858668

  12. Auditory false perception in schizophrenia: Development and validation of auditory signal detection task.

    PubMed

    Chhabra, Harleen; Sowmya, Selvaraj; Sreeraj, Vanteemar S; Kalmady, Sunil V; Shivakumar, Venkataram; Amaresha, Anekal C; Narayanaswamy, Janardhanan C; Venkatasubramanian, Ganesan

    2016-12-01

    Auditory hallucinations constitute an important symptom component in 70-80% of schizophrenia patients. These hallucinations are proposed to occur due to an imbalance between perceptual expectation and external input, resulting in attachment of meaning to abstract noises; signal detection theory has been proposed to explain these phenomena. In this study, we describe the development of an auditory signal detection task using a carefully chosen set of English words that could be tested successfully in schizophrenia patients coming from varying linguistic, cultural and social backgrounds. Schizophrenia patients with significant auditory hallucinations (N=15) and healthy controls (N=15) performed the auditory signal detection task wherein they were instructed to differentiate between a 5-s burst of plain white noise and voiced-noise. The analysis showed that false alarms (p=0.02), discriminability index (p=0.001) and decision bias (p=0.004) were significantly different between the two groups. There was a significant negative correlation between false alarm rate and decision bias. These findings extend further support for impaired perceptual expectation system in schizophrenia patients. Copyright © 2016 Elsevier B.V. All rights reserved.
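
    The discriminability index and decision bias reported above are the standard signal-detection quantities d' and criterion c, computed from hit and false-alarm rates. A minimal sketch with illustrative counts (not the study's data):

```python
from statistics import NormalDist

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """d-prime (discriminability index) and criterion c (decision bias)
    for a yes/no detection task. A log-linear correction keeps rates
    of exactly 0 or 1 from producing infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Illustrative counts: a responder with many false alarms (as reported
# for hallucinating patients) shows lower d' and a liberal (negative) bias
print(sdt_indices(hits=40, misses=10, false_alarms=5, correct_rejections=45))
print(sdt_indices(hits=40, misses=10, false_alarms=25, correct_rejections=25))
```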

  13. Experience-driven plasticity in binocular vision

    PubMed Central

    Klink, P. Christiaan; Brascamp, Jan W.; Blake, Randolph; van Wezel, Richard J.A.

    2010-01-01

    Experience-driven neuronal plasticity allows the brain to adapt its functional connectivity to recent sensory input. Here we use binocular rivalry [1], an experimental paradigm where conflicting images are presented to the individual eyes, to demonstrate plasticity in the neuronal mechanisms that convert visual information from two separated retinas into single perceptual experiences. Perception during binocular rivalry tended to initially consist of alternations between exclusive representations of monocularly defined images, but upon prolonged exposure, mixture percepts became more prevalent. The completeness of suppression, indexed by the incidence of mixture percepts, plausibly reflects the strength of the inhibition thought to play a role in binocular rivalry [2]. Recovery of exclusivity was possible, but required highly specific binocular stimulation. Documenting the prerequisites for these observed changes in perceptual exclusivity, our experiments suggest experience-driven plasticity at interocular inhibitory synapses, driven by the (lack of) correlated activity of neurons representing the conflicting stimuli. This form of plasticity is consistent with a previously proposed, but largely untested, anti-Hebbian learning mechanism for inhibitory synapses in vision [3, 4]. Our results implicate experience-driven plasticity as one governing principle in the neuronal organization of binocular vision. PMID:20674360

  14. Connectivity in the human brain dissociates entropy and complexity of auditory inputs.

    PubMed

    Nastase, Samuel A; Iacovella, Vittorio; Davis, Ben; Hasson, Uri

    2015-03-01

    Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators. Copyright © 2014. Published by Elsevier Inc.
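    The entropy dimension the authors describe can be illustrated with Shannon entropy over a discrete symbol sequence. This is a minimal sketch of the standard formula (the study itself used auditory tone sequences and model-based complexity estimates, not this code):

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Shannon entropy in bits per symbol of a discrete sequence."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

low = "AAAAAAAA"           # fully predictable: 0 bits/symbol
high = "ABCDABDCBACDDCAB"  # four symbols used uniformly: 2 bits/symbol
```

    Note that both example strings are produced by very simple generators, which illustrates the paper's axiom: entropy alone does not capture complexity, since complexity is held to peak between the low- and high-entropy extremes.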

  15. Neurocognitive and Neuroplastic Mechanisms of Novel Clinical Signs in CRPS.

    PubMed

    Kuttikat, Anoop; Noreika, Valdas; Shenker, Nicholas; Chennu, Srivas; Bekinschtein, Tristan; Brown, Christopher Andrew

    2016-01-01

    Complex regional pain syndrome (CRPS) is a chronic, debilitating pain condition that usually arises after trauma to a limb, but its precise etiology remains elusive. Novel clinical signs based on body perceptual disturbances have been reported, but their pathophysiological mechanisms remain poorly understood. Investigators have used functional neuroimaging techniques (including MEG, EEG, fMRI, and PET) to study changes mainly within the somatosensory and motor cortices. Here, we provide a focused review of the neuroimaging research findings that have generated insights into the potential neurocognitive and neuroplastic mechanisms underlying perceptual disturbances in CRPS. Neuroimaging findings, particularly with regard to somatosensory processing, have been promising but limited by a number of technique-specific factors (such as the complexity of neuroimaging investigations, poor spatial resolution of EEG/MEG, and use of modeling procedures that do not draw causal inferences) and more general factors including small sample sizes and poorly characterized patients. These factors have led to an underappreciation of the potential heterogeneity of pathophysiology that may underlie variable clinical presentation in CRPS. Also, until now, neurological deficits have been predominantly investigated separately from perceptual and cognitive disturbances. Here, we highlight the need to identify neurocognitive phenotypes of patients with CRPS that are underpinned by causal explanations for perceptual disturbances. We suggest that a combination of larger cohorts, patient phenotyping, the use of neuroimaging methods with both high temporal and high spatial resolution, and the identification of simplified biomarkers is likely to be the most fruitful approach to identifying neurocognitive phenotypes in CRPS. Based on our review, we explain how such phenotypes could be characterized in terms of hierarchical models of perception and corresponding disturbances in recurrent processing involving the somatosensory, salience and executive brain networks. We also draw attention to complementary neurological factors that may explain some CRPS symptoms, including the possibility of central neuroinflammation and neuronal atrophy, and how these phenomena may overlap but be partially separable from neurocognitive deficits.

  16. Neurocognitive and Neuroplastic Mechanisms of Novel Clinical Signs in CRPS

    PubMed Central

    Kuttikat, Anoop; Noreika, Valdas; Shenker, Nicholas; Chennu, Srivas; Bekinschtein, Tristan; Brown, Christopher Andrew

    2016-01-01

    Complex regional pain syndrome (CRPS) is a chronic, debilitating pain condition that usually arises after trauma to a limb, but its precise etiology remains elusive. Novel clinical signs based on body perceptual disturbances have been reported, but their pathophysiological mechanisms remain poorly understood. Investigators have used functional neuroimaging techniques (including MEG, EEG, fMRI, and PET) to study changes mainly within the somatosensory and motor cortices. Here, we provide a focused review of the neuroimaging research findings that have generated insights into the potential neurocognitive and neuroplastic mechanisms underlying perceptual disturbances in CRPS. Neuroimaging findings, particularly with regard to somatosensory processing, have been promising but limited by a number of technique-specific factors (such as the complexity of neuroimaging investigations, poor spatial resolution of EEG/MEG, and use of modeling procedures that do not draw causal inferences) and more general factors including small sample sizes and poorly characterized patients. These factors have led to an underappreciation of the potential heterogeneity of pathophysiology that may underlie variable clinical presentation in CRPS. Also, until now, neurological deficits have been predominantly investigated separately from perceptual and cognitive disturbances. Here, we highlight the need to identify neurocognitive phenotypes of patients with CRPS that are underpinned by causal explanations for perceptual disturbances. We suggest that a combination of larger cohorts, patient phenotyping, the use of neuroimaging methods with both high temporal and high spatial resolution, and the identification of simplified biomarkers is likely to be the most fruitful approach to identifying neurocognitive phenotypes in CRPS. Based on our review, we explain how such phenotypes could be characterized in terms of hierarchical models of perception and corresponding disturbances in recurrent processing involving the somatosensory, salience and executive brain networks. We also draw attention to complementary neurological factors that may explain some CRPS symptoms, including the possibility of central neuroinflammation and neuronal atrophy, and how these phenomena may overlap but be partially separable from neurocognitive deficits. PMID:26858626

  17. Dissociating Cortical Activity during Processing of Native and Non-Native Audiovisual Speech from Early to Late Infancy

    PubMed Central

    Fava, Eswen; Hull, Rachel; Bortfeld, Heather

    2014-01-01

    Initially, infants are capable of discriminating phonetic contrasts across the world’s languages. Starting between seven and ten months of age, they gradually lose this ability through a process of perceptual narrowing. Although traditionally investigated with isolated speech sounds, such narrowing occurs in a variety of perceptual domains (e.g., faces, visual speech). Thus far, tracking of the developmental trajectory of this tuning process has focused primarily on auditory speech alone, generally using isolated sounds. But infants learn from speech produced by people talking to them, meaning they learn from a complex audiovisual signal. Here, we use near-infrared spectroscopy to measure blood concentration changes in the bilateral temporal cortices of infants in three different age groups: 3-to-6 months, 7-to-10 months, and 11-to-14 months. Critically, all three groups of infants were tested with continuous audiovisual speech in both their native and another, unfamiliar language. We found that at each age range, infants showed different patterns of cortical activity in response to the native and non-native stimuli. Infants in the youngest group showed bilateral cortical activity that was greater overall in response to non-native relative to native speech; the oldest group showed left-lateralized activity in response to native relative to non-native speech. These results highlight perceptual tuning as a dynamic process that happens across modalities and at different levels of stimulus complexity. PMID:25116572

  18. Enhanced pure-tone pitch discrimination among persons with autism but not Asperger syndrome.

    PubMed

    Bonnel, Anna; McAdams, Stephen; Smith, Bennett; Berthiaume, Claude; Bertone, Armando; Ciocca, Valter; Burack, Jacob A; Mottron, Laurent

    2010-07-01

    Persons with Autism spectrum disorders (ASD) display atypical perceptual processing in visual and auditory tasks. In vision, Bertone, Mottron, Jelenic, and Faubert (2005) found that enhanced and diminished visual processing is linked to the level of neural complexity required to process stimuli, as proposed in the neural complexity hypothesis. Based on these findings, Samson, Mottron, Jemel, Belin, and Ciocca (2006) proposed to extend the neural complexity hypothesis to the auditory modality. They hypothesized that persons with ASD should display enhanced performance for simple tones that are processed in primary auditory cortical regions, but diminished performance for complex tones that require additional processing in associative auditory regions, in comparison to typically developing individuals. To assess this hypothesis, we designed four auditory discrimination experiments targeting pitch, non-vocal and vocal timbre, and loudness. Stimuli consisted of spectro-temporally simple and complex tones. The participants were adolescents and young adults with autism, Asperger syndrome, and typical developmental histories, all with IQs in the normal range. Consistent with the neural complexity hypothesis and enhanced perceptual functioning model of ASD (Mottron, Dawson, Soulières, Hubert, & Burack, 2006), the participants with autism, but not with Asperger syndrome, displayed enhanced pitch discrimination for simple tones. However, no discrimination-threshold differences were found between the participants with ASD and the typically developing persons across spectrally and temporally complex conditions. These findings indicate that enhanced pure-tone pitch discrimination may be a cognitive correlate of speech-delay among persons with ASD. However, auditory discrimination among this group does not appear to be directly contingent on the spectro-temporal complexity of the stimuli. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  19. Establishing a learning foundation in a dynamically changing world: Insights from artificial language work

    NASA Astrophysics Data System (ADS)

    Gonzales, Kalim

    It is argued that infants build a foundation for learning about the world through their incidental acquisition of the spatial and temporal regularities surrounding them. A challenge is that learning occurs across multiple contexts whose statistics can greatly differ. Two artificial language studies with 12-month-olds demonstrate that infants come prepared to parse statistics across contexts using the temporal and perceptual features that distinguish one context from another. These results suggest that infants can organize their statistical input with a wider range of features than is typically considered. Possible attention, decision making, and memory mechanisms are discussed.

  20. The Development of Ambiguous Figure Perception

    ERIC Educational Resources Information Center

    Wimmer, Marina C.; Doherty, Martin J.

    2011-01-01

    Ambiguous figures have fascinated researchers for almost 200 years. The physical properties of these figures remain constant, yet two distinct interpretations are possible; these reverse (switch) from one percept to the other. The consensus is that reversal requires complex interaction of perceptual bottom-up and cognitive top-down elements. The…

  1. Perceptual Development on the Rorschach

    ERIC Educational Resources Information Center

    O'Neill, Patrick; And Others

    1976-01-01

    The Rorschach was given to 60 school children in two designs: chronological age (CA) and mental age (MA) orthogonal and CA=MA. Responses were scored for Form Accuracy, Complexity, Movement and Friedman's Developmental Level (DL) Scoring System. The results suggest that the DL system does assess MA independently of CA. (Author/DEP)

  2. Managing breathlessness: providing comfort at the end of life.

    PubMed

    Tice, Martha A

    2006-04-01

    Dyspnea is a common symptom at the end of life. It occurs as the result of a complex mix of physical, biochemical, and perceptual components. When patients and their healthcare providers focus on the "numbers" related to oxygenation, rather than comfort, the individual's quality of life can suffer.

  3. FILMIC COMMUNICATION AND COMPLEX LEARNING. WORKING PAPER NO. 4.

    ERIC Educational Resources Information Center

    PRYLUCK, CALVIN

    Research and experience show that film is more effective in factual learning and in perceptual motor learning than in teaching rational activities. Language and film have different structures which determine their functions in instructional settings. Essentially, pictures are inductive while language is deductive. Language is capable of numberless…

  4. Connecting Instances to Promote Children's Relational Reasoning

    ERIC Educational Resources Information Center

    Son, Ji Y.; Smith, Linda B.; Goldstone, Robert L.

    2011-01-01

    The practice of learning from multiple instances seems to allow children to learn about relational structure. The experiments reported here focused on two issues regarding relational learning from multiple instances: (a) what kind of perceptual situations foster such learning and (b) how particular object properties, such as complexity and…

  5. Auditory Discrimination of Frequency Ratios: The Octave Singularity

    ERIC Educational Resources Information Center

    Bonnard, Damien; Micheyl, Christophe; Semal, Catherine; Dauman, Rene; Demany, Laurent

    2013-01-01

    Sensitivity to frequency ratios is essential for the perceptual processing of complex sounds and the appreciation of music. This study assessed the effect of ratio simplicity on ratio discrimination for pure tones presented either simultaneously or sequentially. Each stimulus consisted of four 100-ms pure tones, equally spaced in terms of…

  6. Can the self become another? Investigating the effects of self-association with a new facial identity.

    PubMed

    Payne, Sophie; Tsakiris, Manos; Maister, Lara

    2017-06-01

    The mental representation of the self is a complex construct, comprising both conceptual information and perceptual information regarding the body. Evidence suggests that both the conceptual self-representation and the bodily self-representation are malleable, and that these different aspects of the self are linked. Changes in bodily self-representation appear to affect how the self is conceptualized, but it is unclear whether the opposite relationship is also true: Do changes to the conceptual self-representation affect how the physical self is perceived? First, we adopted a perceptual matching paradigm to establish an association between the self and an unfamiliar face (Experiment 1). Robust attentional and perceptual biases in the processing of this newly self-associated face suggested that the conceptual self-representation was extended to include it. Next, we measured whether the bodily self-representation had correspondingly changed to incorporate the new face (Experiment 2). Participants rated morphs between their own face and the newly self-associated face according to how similar they were to the self, before and after performing the perceptual matching task. Changes to the conceptual self did not have an effect on the bodily self-representation. These results suggest that modulatory links between aspects of the mental self-representation, when focused on the non-social self, are unidirectional and flow in a bottom-up manner.

  7. Forced to remember: when memory is biased by salient information.

    PubMed

    Santangelo, Valerio

    2015-04-15

    The last decades have seen rapid growth in attempts to understand the key factors involved in the internal memory representation of the external world. Visual salience has been found to provide a major contribution in predicting the probability that an item/object embedded in a complex setting (i.e., a natural scene) will be encoded and then remembered later on. Here I review the existing literature highlighting the impact of perceptual-related salience (based on low-level sensory features) and semantics-related salience (based on high-level knowledge) on short-term memory representation, along with the neural mechanisms underpinning the interplay between these factors. The available evidence reveals that both perceptual- and semantics-related factors affect attention selection mechanisms during the encoding of natural scenes. By biasing internal memory representation, both perceptual and semantic factors increase the probability of remembering high-saliency items to the detriment of low-saliency ones. The available evidence also highlights an interplay between these factors, with a reduced impact of perceptual-related salience in biasing memory representation as a function of the increasing availability of semantics-related salient information. The neural mechanisms underpinning this interplay involve the activation of different portions of the frontoparietal attention control network. Ventral regions support the assignment of selection/encoding priorities based on high-level semantics, while the involvement of dorsal regions reflects priority assignment based on low-level sensory features. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Facial decoding in schizophrenia is underpinned by basic visual processing impairments.

    PubMed

    Belge, Jan-Baptist; Maurage, Pierre; Mangelinckx, Camille; Leleux, Dominique; Delatte, Benoît; Constant, Eric

    2017-09-01

    Schizophrenia is associated with a strong deficit in the decoding of emotional facial expression (EFE). Nevertheless, it is still unclear whether this deficit is specific for emotions or due to a more general impairment for any type of facial processing. This study was designed to clarify this issue. Thirty patients suffering from schizophrenia and 30 matched healthy controls performed several tasks evaluating the recognition of both changeable (i.e. eyes orientation and emotions) and stable (i.e. gender, age) facial characteristics. Accuracy and reaction times were recorded. Schizophrenic patients presented a performance deficit (accuracy and reaction times) in the perception of both changeable and stable aspects of faces, without any specific deficit for emotional decoding. Our results demonstrate a generalized face recognition deficit in schizophrenic patients, probably caused by a perceptual deficit in basic visual processing. It seems that the deficit in the decoding of emotional facial expression (EFE) is not a specific deficit of emotion processing, but is at least partly related to a generalized perceptual deficit in lower-level perceptual processing, occurring before the stage of emotion processing, and underlying more complex cognitive dysfunctions. These findings should encourage future investigations to explore the neurophysiologic background of these generalized perceptual deficits, and stimulate a clinical approach focusing on more basic visual processing. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  9. Generating a taxonomy of spatially cued attention for visual discrimination: Effects of judgment precision and set size on attention

    PubMed Central

    Hetley, Richard; Dosher, Barbara Anne; Lu, Zhong-Lin

    2014-01-01

    Attention precues improve the performance of perceptual tasks in many but not all circumstances. These spatial attention effects may depend upon display set size or workload, and have been variously attributed to external noise filtering, stimulus enhancement, contrast gain, or response gain, or to uncertainty or other decision effects. In this study, we document systematically different effects of spatial attention in low- and high-precision judgments, with and without external noise, and in different set sizes in order to contribute to the development of a taxonomy of spatial attention. An elaborated perceptual template model (ePTM) provides an integrated account of a complex set of effects of spatial attention with just two attention factors: a set-size dependent exclusion or filtering of external noise and a narrowing of the perceptual template to focus on the signal stimulus. These results are related to the previous literature by classifying the judgment precision and presence of external noise masks in those experiments, suggesting a taxonomy of spatially cued attention in discrimination accuracy. PMID:24939234

  10. Generating a taxonomy of spatially cued attention for visual discrimination: effects of judgment precision and set size on attention.

    PubMed

    Hetley, Richard; Dosher, Barbara Anne; Lu, Zhong-Lin

    2014-11-01

    Attention precues improve the performance of perceptual tasks in many but not all circumstances. These spatial attention effects may depend upon display set size or workload, and have been variously attributed to external noise filtering, stimulus enhancement, contrast gain, or response gain, or to uncertainty or other decision effects. In this study, we document systematically different effects of spatial attention in low- and high-precision judgments, with and without external noise, and in different set sizes in order to contribute to the development of a taxonomy of spatial attention. An elaborated perceptual template model (ePTM) provides an integrated account of a complex set of effects of spatial attention with just two attention factors: a set-size dependent exclusion or filtering of external noise and a narrowing of the perceptual template to focus on the signal stimulus. These results are related to the previous literature by classifying the judgment precision and presence of external noise masks in those experiments, suggesting a taxonomy of spatially cued attention in discrimination accuracy.

  11. Performance evaluation of objective quality metrics for HDR image compression

    NASA Astrophysics Data System (ADS)

    Valenzise, Giuseppe; De Simone, Francesca; Lauga, Paul; Dufaux, Frederic

    2014-09-01

    Due to the much larger luminance and contrast characteristics of high dynamic range (HDR) images, well-known objective quality metrics, widely used for the assessment of low dynamic range (LDR) content, cannot be directly applied to HDR images in order to predict their perceptual fidelity. To overcome this limitation, advanced fidelity metrics, such as the HDR-VDP, have been proposed to accurately predict visually significant differences. However, their complex calibration may make them difficult to use in practice. A simpler approach consists of computing arithmetic or structural fidelity metrics, such as PSNR and SSIM, on perceptually encoded luminance values, but the performance of quality prediction in this case has not been clearly studied. In this paper, we aim to provide a better comprehension of the limits and the potentialities of this approach by means of a subjective study. We compare the performance of HDR-VDP to that of PSNR and SSIM computed on perceptually encoded luminance values, when considering compressed HDR images. Our results show that these simpler metrics can be effectively employed to assess image fidelity for applications such as HDR image compression.
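    The "simpler approach" evaluated here amounts to applying an arithmetic metric such as PSNR after perceptually encoding luminance. The following sketch illustrates the idea; the log encoding and the assumed display range are stand-ins for a real perceptually uniform transfer function, and the values are invented for illustration:

```python
import math

def encode(luminance_cd_m2):
    """Toy perceptual encoding: log-compress luminance to [0, 1].

    Real evaluations use a perceptually uniform transfer function;
    the base-10 log and the 0.005-10000 cd/m^2 range here are only
    illustrative assumptions.
    """
    lo, hi = math.log10(0.005), math.log10(10000.0)
    return (math.log10(max(luminance_cd_m2, 0.005)) - lo) / (hi - lo)

def psnr(reference, distorted, peak=1.0):
    """PSNR in dB between two equal-length sequences of encoded values."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

ref = [encode(l) for l in (0.1, 1.0, 10.0, 100.0, 1000.0)]
dist = [v + 0.01 for v in ref]  # a uniform distortion in encoded space
score = psnr(ref, dist)
```

    Computing the error in encoded rather than linear-luminance space is what makes equal numerical differences roughly equal in visibility, which is the premise the paper's subjective study puts to the test.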

  12. Brilliance, contrast, colorfulness, and the perceived volume of device color gamut

    NASA Astrophysics Data System (ADS)

    Heckaman, Rodney L.

    With the advent of digital video and cinema media technologies, much more is possible in achieving brighter and more vibrant colors, colors that transcend our experience. The challenge is in the realization of these possibilities in an industry rooted in 1950s technology, where color gamut is represented with little or no insight into the way an observer perceives color as a complex mixture of the observer's intentions, desires, and interests. By today's standards, five perceptual attributes (brightness, lightness, colorfulness, chroma, and hue) are believed to be required for a complete specification. As a compelling case for such a representation, a display system is demonstrated that is capable of displaying color beyond the realm of object color, perceptually even beyond the spectrum locus of pure color. All this begs the question: Just what is meant by perceptual gamut? To this end, the attributes of perceptual gamut are identified through psychometric testing and the color appearance models CIELAB and CIECAM02. Then, by way of demonstration, these attributes were manipulated to test their application in wide gamut displays. Drawing on these perceptual attributes and their manipulation, on Ralph M. Evans' concept of brilliance as an attribute of perception that extends beyond the realm of everyday experience, and on the theoretical studies of brilliance by Y. Nayatani, a method was developed for producing brighter, more colorful colors and deeper, darker colors with the aim of preserving object color perception, flesh tones in particular. The method was successfully demonstrated and tested in real images using psychophysical methods in the very real, practical application of expanding the gamut of sRGB into an emulation of the wide gamut, xvYCC encoding.

  13. Perceptual-Cognitive Changes During Motor Learning: The Influence of Mental and Physical Practice on Mental Representation, Gaze Behavior, and Performance of a Complex Action

    PubMed Central

    Frank, Cornelia; Land, William M.; Schack, Thomas

    2016-01-01

    Despite the wealth of research on differences between experts and novices with respect to their perceptual-cognitive background (e.g., mental representations, gaze behavior), little is known about the change of these perceptual-cognitive components over the course of motor learning. In the present study, changes in one’s mental representation, quiet eye behavior, and outcome performance were examined over the course of skill acquisition as it related to physical and mental practice. Novices (N = 45) were assigned to one of three conditions: physical practice, combined physical plus mental practice, and no practice. Participants in the practice groups trained on a golf putting task over the course of 3 days, either by repeatedly executing the putt, or by both executing and imaging the putt. Findings revealed improvements in putting performance across both practice conditions. Regarding the perceptual-cognitive changes, participants practicing mentally and physically revealed longer quiet eye durations as well as more elaborate representation structures in comparison to the control group, while this was not the case for participants who underwent physical practice only. Thus, in the present study, combined mental and physical practice led to both formation of mental representations in long-term memory and longer quiet eye durations. Interestingly, the length of the quiet eye directly related to the degree of elaborateness of the underlying mental representation, supporting the notion that the quiet eye reflects cognitive processing. This study is the first to show that the quiet eye becomes longer in novices practicing a motor action. Moreover, the findings of the present study suggest that perceptual and cognitive adaptations co-occur over the course of motor learning. PMID:26779089

  14. A Complex Story: Universal Preference vs. Individual Differences Shaping Aesthetic Response to Fractals Patterns.

    PubMed

    Street, Nichola; Forsythe, Alexandra M; Reilly, Ronan; Taylor, Richard; Helmy, Mai S

    2016-01-01

    Fractal patterns offer one way to represent the rough complexity of the natural world. Whilst they dominate many of our visual experiences in nature, little large-scale perceptual research has been done to explore how we respond aesthetically to these patterns. Previous research (Taylor et al., 2011) suggests that fractal patterns with mid-range fractal dimensions (FDs) have universal aesthetic appeal. Perceptual and aesthetic responses to visual complexity have been more varied, with findings suggesting both linear (Forsythe et al., 2011) and curvilinear (Berlyne, 1970) relationships. Individual differences have been found to account for many of the differences we see in aesthetic responses, but some, such as culture, have received little attention within the fractal and complexity research fields. This two-study article aims to test preference responses to FD and visual complexity, using a large cohort (N = 443) of participants from around the world to allow universality claims to be tested. It explores the extent to which age, culture and gender can predict our preferences for fractally complex patterns. Following exploratory analysis that found strong correlations between FD and visual complexity, a series of linear mixed-effects models was implemented to explore whether each of the individual variables could predict preference. The first tested a linear complexity model (likelihood of selecting the more complex image from a pair of images) and the second a mid-range FD model (likelihood of selecting an image within the mid-range of FDs). Results show that individual differences can reliably predict preferences for complexity across culture, gender and age. However, in keeping with current findings, the mid-range models show greater consistency in preference, not mediated by gender, age or culture. This article supports the established theory that mid-range fractal patterns appear to be a universal construct underlying preference, but it also highlights the fragility of universal claims by demonstrating individual differences in preference for the interrelated concept of visual complexity. This points to a current stalemate in the field of empirical aesthetics.
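    The fractal dimension (FD) variable at the center of these studies is commonly estimated by box counting. The following is a minimal sketch on a binary pixel set (the cited studies' stimulus generation and analysis pipelines were more elaborate than this illustration):

```python
import math

def box_count_dimension(pixels, size):
    """Estimate the box-counting dimension of a set of (x, y) pixels
    lying in a size x size grid: count occupied boxes N(s) at several
    box sizes s, then regress log N(s) on log(1/s)."""
    logs = []
    s = size // 2
    while s >= 2:
        boxes = {(x // s, y // s) for x, y in pixels}
        logs.append((math.log(1.0 / s), math.log(len(boxes))))
        s //= 2
    # Least-squares slope of log N against log(1/s) is the FD estimate.
    n = len(logs)
    mx = sum(x for x, _ in logs) / n
    my = sum(y for _, y in logs) / n
    num = sum((x - mx) * (y - my) for x, y in logs)
    den = sum((x - mx) ** 2 for x, _ in logs)
    return num / den

# Sanity check: a filled square is not fractal, so its estimated
# dimension should come out close to 2.
filled = {(x, y) for x in range(64) for y in range(64)}
fd = box_count_dimension(filled, 64)
```

    Natural-looking fractal patterns fall between the integer dimensions, and the "mid-range FD" preference discussed above concerns that intermediate band.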

  15. Is attention based on spatial contextual memory preferentially guided by low spatial frequency signals?

    PubMed

    Patai, Eva Zita; Buckley, Alice; Nobre, Anna Christina

    2013-01-01

    A popular model of visual perception states that coarse information (carried by low spatial frequencies) along the dorsal stream is rapidly transmitted to prefrontal and medial temporal areas, activating contextual information from memory, which can in turn constrain detailed input carried by high spatial frequencies arriving at a slower rate along the ventral visual stream, thus facilitating the processing of ambiguous visual stimuli. We were interested in testing whether this model contributes to memory-guided orienting of attention. In particular, we asked whether global, low-spatial frequency (LSF) inputs play a dominant role in triggering contextual memories in order to facilitate the processing of the upcoming target stimulus. We explored this question over four experiments. The first experiment replicated the LSF advantage reported in perceptual discrimination tasks by showing that participants were faster and more accurate at matching a low spatial frequency version of a scene, compared to a high spatial frequency version, to its original counterpart in a forced-choice task. The subsequent three experiments tested the relative contributions of low versus high spatial frequencies during memory-guided covert spatial attention orienting tasks. Replicating the effects of memory-guided attention, pre-exposure to scenes associated with specific spatial memories for target locations (memory cues) led to higher perceptual discrimination and faster response times to identify targets embedded in the scenes. However, high and low spatial frequency cues were equally effective: LSF signals did not selectively or preferentially contribute to the memory-driven attention benefits to performance. Our results challenge a generalized model in which LSFs activate contextual memories, which in turn bias attention and facilitate perception.

  16. The influence of spatiotemporal structure of noisy stimuli in decision making.

    PubMed

    Insabato, Andrea; Dempere-Marco, Laura; Pannunzi, Mario; Deco, Gustavo; Romo, Ranulfo

    2014-04-01

    Decision making is a process of utmost importance in our daily lives, the study of which has been receiving notable attention for decades. Nevertheless, the neural mechanisms underlying decision making are still not fully understood. Computational modeling has revealed itself as a valuable asset to address some of the fundamental questions. Biophysically plausible models, in particular, are useful in bridging the different levels of description that experimental studies provide, from the neural spiking activity recorded at the cellular level to the performance reported at the behavioral level. In this article, we have reviewed some of the recent progress made in the understanding of the neural mechanisms that underlie decision making. We have performed a critical evaluation of the available results and address, from a computational perspective, aspects of both experimentation and modeling that so far have eluded comprehension. To guide the discussion, we have selected a central theme which revolves around the following question: how does the spatiotemporal structure of sensory stimuli affect the perceptual decision-making process? This question is a timely one as several issues that still remain unresolved stem from this central theme. These include: (i) the role of spatiotemporal input fluctuations in perceptual decision making, (ii) how to extend the current results and models derived from two-alternative choice studies to scenarios with multiple competing sources of evidence, and (iii) whether different types of spatiotemporal input fluctuations affect decision-making outcomes in distinctive ways. Although we have restricted our discussion mostly to visual decisions, our main conclusions are arguably generalizable; hence, their possible extension to other sensory modalities is one of the points in our discussion.

  19. Distortions of Subjective Time Perception Within and Across Senses

    PubMed Central

    van Wassenhove, Virginie; Buonomano, Dean V.; Shimojo, Shinsuke; Shams, Ladan

    2008-01-01

    Background The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood. Methodology/Findings We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information, and visual events were never perceived as shorter than their actual durations. Conclusions/Significance These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration cannot be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions. PMID:18197248

  20. Fast repurposing of high-resolution stereo video content for mobile use

    NASA Astrophysics Data System (ADS)

    Karaoglu, Ali; Lee, Bong Ho; Boev, Atanas; Cheong, Won-Sik; Gotchev, Atanas

    2012-06-01

    3D video content is captured and created mainly in high resolution targeting big cinema or home TV screens. For 3D mobile devices, equipped with small-size auto-stereoscopic displays, such content has to be properly repurposed, preferably in real-time. The repurposing requires not only spatial resizing but also properly maintaining the output stereo disparity, as it should deliver realistic, pleasant and harmless 3D perception. In this paper, we propose an approach to adapt the disparity range of the source video to the comfort disparity zone of the target display. To achieve this, we adapt the scale and the aspect ratio of the source video. We aim at maximizing the disparity range of the retargeted content within the comfort zone, and minimizing the letterboxing of the cropped content. The proposed algorithm consists of five stages. First, we analyse the display profile, which characterises what 3D content can be comfortably observed in the target display. Then, we perform fast disparity analysis of the input stereoscopic content. Instead of returning the dense disparity map, it returns an estimate of the disparity statistics (min, max, mean, and variance) per frame. Additionally, we detect scene cuts, where sharp transitions in disparities occur. Based on the estimated input and desired output disparity ranges, we derive the optimal cropping parameters and scale of the cropping window, which would yield the targeted disparity range and minimize the area of cropped and letterboxed content. Once the rescaling and cropping parameters are known, we perform a resampling procedure using spline-based and perceptually optimized resampling (anti-aliasing) kernels, which also have a very efficient computational structure. Perceptual optimization is achieved through adjusting the cut-off frequency of the anti-aliasing filter with the throughput of the target display.
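
    The scale-selection step can be illustrated with a toy calculation: downscaling a stereo pair by a factor k scales every screen disparity by k as well, so the largest scale that keeps the measured disparity range inside the display's comfort zone follows directly. The function and numbers below are illustrative assumptions, not the paper's implementation:

```python
def retarget_scale(d_in, comfort, frame_w, target_w):
    """Pick a spatial scale for stereo retargeting (illustrative sketch).

    d_in    -- (min, max) source disparity range in pixels
    comfort -- (min, max) comfort zone of the target display in pixels
    Starts from the plain resolution ratio and shrinks the scale until
    both disparity extremes fit inside the comfort zone.
    """
    d_min, d_max = d_in
    c_min, c_max = comfort
    k = target_w / frame_w                 # pure spatial resizing
    if d_max > 0:
        k = min(k, c_max / d_max)          # rein in the far/near extreme
    if d_min < 0:
        k = min(k, c_min / d_min)          # both negative, so ratio > 0
    crop_w = min(frame_w, target_w / k)    # source window filling the target
    return k, crop_w

# Source 1920 px wide with disparities -30..+40 px; a mobile display
# 480 px wide with an assumed comfort zone of -8..+8 px.
k, crop_w = retarget_scale((-30, 40), (-8, 8), 1920, 480)
```

    When the disparity limit rather than the resolution dictates the scale, as here, the full source width fits and the shortfall is letterboxed, which is exactly the trade-off the algorithm tries to minimize.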

  1. Maximising information recovery from rank-order codes

    NASA Astrophysics Data System (ADS)

    Sen, B.; Furber, S.

    2007-04-01

    The central nervous system encodes information in sequences of asynchronously generated voltage spikes, but the precise details of this encoding are not well understood. Thorpe proposed rank-order codes as an explanation of the observed speed of information processing in the human visual system. The work described in this paper is inspired by the performance of SpikeNET, a biologically inspired neural architecture using rank-order codes for information processing, and is based on the retinal model developed by VanRullen and Thorpe. This model mimics retinal information processing by passing an input image through a bank of Difference of Gaussian (DoG) filters and then encoding the resulting coefficients in rank-order. To test the effectiveness of this encoding in capturing the information content of an image, the rank-order representation is decoded to reconstruct an image that can be compared with the original. The reconstruction uses a look-up table to infer the filter coefficients from their rank in the encoded image. Since the DoG filters are approximately orthogonal functions, they are treated as their own inverses in the reconstruction process. We obtained a quantitative measure of the perceptually important information retained in the reconstructed image relative to the original using a slightly modified version of an objective metric proposed by Petrovic. It is observed that around 75% of the perceptually important information is retained in the reconstruction. In the present work we reconstruct the input using a pseudo-inverse of the DoG filter-bank with the aim of improving the reconstruction and thereby extracting more information from the rank-order encoded stimulus. We observe that there is an increase of 10-15% in the information retrieved from a reconstructed stimulus as a result of inverting the filter-bank.
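
    A 1-D toy version of the encode/decode loop may clarify the scheme (the actual model applies 2-D DoG filters to images and derives the look-up table from image statistics; here the LUT is taken from the stimulus itself purely for illustration):

```python
import numpy as np

def dog(n, sigma):
    """Zero-mean, unit-norm Difference-of-Gaussians kernel on n samples."""
    x = np.arange(n) - n / 2
    g = lambda s: np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
    k = g(sigma) - g(1.6 * sigma)
    return k / np.linalg.norm(k)

def rank_order_encode(signal, bank):
    """Keep only the ORDER of filter activations, not their values."""
    coeffs = bank @ signal
    order = np.argsort(-np.abs(coeffs))   # filters by descending |coeff|
    return order, np.sign(coeffs[order])

def rank_order_decode(order, signs, bank, lut):
    """Reconstruct from ranks alone: magnitudes come from a look-up
    table (one assumed magnitude per rank); approximately orthogonal
    filters are treated as their own inverses via the transpose."""
    coeffs = np.zeros(bank.shape[0])
    coeffs[order] = signs * lut
    return bank.T @ coeffs

n = 64
bank = np.stack([dog(n, s) for s in (1, 2, 4, 8)])
stim = np.exp(-((np.arange(n) - 20.0) ** 2) / 50.0)   # off-center bump
order, signs = rank_order_encode(stim, bank)
lut = np.abs(bank @ stim)[order]                      # idealized LUT
stim_hat = rank_order_decode(order, signs, bank, lut)
```

    With the idealized LUT, decoding reduces to projecting the stimulus onto the filter bank's span; replacing the transpose with a pseudo-inverse, as the paper proposes, is what recovers the extra 10-15% of information.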

  2. The effect of visual scanning exercises integrated into physiotherapy in patients with unilateral spatial neglect poststroke: a matched-pair randomized control trial.

    PubMed

    van Wyk, Andoret; Eksteen, Carina A; Rheeder, Paul

    2014-01-01

    Unilateral spatial neglect (USN) is a visual-perceptual disorder that entails the inability to perceive and integrate stimuli on one side of the body, resulting in neglect of that side. Stroke patients with USN present with extensive functional disability and prolonged therapy input. The aim was to determine the effect of saccadic eye movement training with visual scanning exercises (VSEs) integrated with task-specific activities on USN poststroke. A matched-pair randomized control trial was conducted. Subjects were matched according to their functional activity level and allocated to either a control (n = 12) or an experimental group (n = 12). All patients received task-specific activities for a 4-week intervention period. The experimental group received saccadic eye movement training with VSE integrated with task-specific activities as an "add-on" intervention. Assessments were conducted weekly over the intervention period. Statistically significant differences were noted on the King-Devick Test (P = .021), Star Cancellation Test (P = .016), and Barthel Index (P = .004). Intensive saccadic eye movement training with VSE integrated with task-specific activities has a significant effect on USN in patients poststroke. Results of this study are supported by findings from previously reviewed literature: saccadic eye movement training with VSE as an intervention approach has a significant effect on the visual perceptual processing of participants with USN poststroke. The significantly improved visual perceptual processing translates to significantly better visual function and ability to perform activities of daily living following the stroke. © The Author(s) 2014.

  3. Speech perception at the interface of neurobiology and linguistics.

    PubMed

    Poeppel, David; Idsardi, William J; van Wassenhove, Virginie

    2008-03-12

    Speech perception consists of a set of computations that take continuously varying acoustic waveforms as input and generate discrete representations that make contact with the lexical representations stored in long-term memory as output. Because the perceptual objects that are recognized by speech perception enter into subsequent linguistic computation, the format that is used for lexical representation and processing fundamentally constrains the speech perceptual processes. Consequently, theories of speech perception must, at some level, be tightly linked to theories of lexical representation. Minimally, speech perception must yield representations that smoothly and rapidly interface with stored lexical items. Adopting the perspective of Marr, we argue and provide neurobiological and psychophysical evidence for the following research programme. First, at the implementational level, speech perception is a multi-time resolution process, with perceptual analyses occurring concurrently on at least two time scales (approx. 20-80 ms, approx. 150-300 ms), commensurate with (sub)segmental and syllabic analyses, respectively. Second, at the algorithmic level, we suggest that perception proceeds on the basis of internal forward models, or uses an 'analysis-by-synthesis' approach. Third, at the computational level (in the sense of Marr), the theory of lexical representation that we adopt is principally informed by phonological research and assumes that words are represented in the mental lexicon in terms of sequences of discrete segments composed of distinctive features. One important goal of the research programme is to develop linking hypotheses between putative neurobiological primitives (e.g. temporal primitives) and those primitives derived from linguistic inquiry, to arrive ultimately at a biologically sensible and theoretically satisfying model of representation and computation in speech.

  4. Fast computation of derivative based sensitivities of PSHA models via algorithmic differentiation

    NASA Astrophysics Data System (ADS)

    Leövey, Hernan; Molkenthin, Christian; Scherbaum, Frank; Griewank, Andreas; Kuehn, Nicolas; Stafford, Peter

    2015-04-01

    Probabilistic seismic hazard analysis (PSHA) is the preferred tool for estimation of potential ground-shaking hazard due to future earthquakes at a site of interest. A modern PSHA represents a complex framework which combines different models with possibly many inputs. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters and obtaining insight into the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs. Moreover, derivative-based global sensitivity measures (Sobol' & Kucherenko '09) can be practically used to detect non-essential inputs of the models, thus restricting the focus of attention to a possibly much smaller set of inputs. Nevertheless, obtaining first-order partial derivatives of complex models with traditional approaches can be very challenging, and usually increases the computation complexity linearly with the number of inputs appearing in the models. In this study we show how Algorithmic Differentiation (AD) tools can be used in a complex framework such as PSHA to successfully estimate derivative-based sensitivities, as is the case in various other domains such as meteorology or aerodynamics, with no significant increase in the computational complexity required for the original computations. First, we demonstrate the feasibility of the AD methodology by comparing AD derived sensitivities to analytically derived sensitivities for a basic case of PSHA using a simple ground-motion prediction equation. In a second step, we derive sensitivities via AD for a more complex PSHA study using a ground-motion attenuation relation based on a stochastic method to simulate strong motion. The presented approach is general enough to accommodate more advanced PSHA studies of higher complexity.
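
    The forward mode of AD that makes this cheap can be sketched with dual numbers: each value carries its derivative, and the chain rule is applied operation by operation, so one sensitivity costs only a small constant factor over the original evaluation. The attenuation coefficients below are invented for illustration, not taken from any real ground-motion prediction equation:

```python
import math

class Dual:
    """Minimal forward-mode AD scalar: carries (value, derivative)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._lift(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def log(x):  # chain rule for the natural log
    return Dual(math.log(x.val), x.dot / x.val) if isinstance(x, Dual) else math.log(x)

def ln_gmpe(m, r):
    """Toy attenuation relation: ln Y = 1.2 + 0.8*M - 1.1*ln(R)."""
    return 1.2 + 0.8 * m - 1.1 * log(r)

# Seed the input whose sensitivity we want with derivative 1.
dy_dm = ln_gmpe(Dual(6.0, 1.0), Dual(20.0)).dot   # d lnY / dM
dy_dr = ln_gmpe(Dual(6.0), Dual(20.0, 1.0)).dot   # d lnY / dR
```

    Both partials come out exact (0.8 and -1.1/R), with no step-size tuning as in finite differences; production AD tools apply the same idea to entire PSHA codes.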

  5. Constructional Apraxia in Older Patients with Brain Tumors: Considerations with an Up-To-Date Review of the Literature.

    PubMed

    Abete Fornara, Giorgia; Di Cristofori, Andrea; Bertani, Giulio Andrea; Carrabba, Giorgio; Zarino, Barbara

    2018-06-01

    Constructional apraxia (CA) is a neuropsychological impairment of either basic perceptual and motor abilities or executive functions, in the absence of any kind of motor or perceptual deficit. Considering patients with focal brain tumors, CA is common in left or right parietal and parieto-occipital lesions. In neuropsychology, the Rey-Osterrieth Complex Figure Test (ROCFT; or parallel forms) is commonly used for the assessment of CA. This study stems from a clinical observation of a difficulty with CA tests for the majority of older neurosurgical patients without occipitoparietal lesions. Patients were tested at 3 points: before surgery, 3 months after surgery, and 12 months after surgery. Thirty patients (15 meningiomas and 15 glioblastomas) were studied retrospectively. Older patients with focal brain lesions, regardless of the nature of the tumor, performed poorly at CA tests. More than 50% of patients obtained pathologic results at all 3 assessment times. Our findings suggest that, as CA complex tests involve multiple domains, poor results in the copy task may reflect a global cognitive deficit of older patients with tumors, without a specific constructional praxis deficit. CA complex tests (such as the ROCFT) do not give significant information about visuo-constructional abilities. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Perceiving and Acting on Complex Affordances: How Children and Adults Bicycle across Two Lanes of Opposing Traffic

    ERIC Educational Resources Information Center

    Grechkin, Timofey Y.; Chihak, Benjamin J.; Cremer, James F.; Kearney, Joseph K.; Plumert, Jodie M.

    2013-01-01

    This investigation examined how children and adults negotiate a challenging perceptual-motor problem with significant real-world implications--bicycling across two lanes of opposing traffic. Twelve- and 14-year-olds and adults rode a bicycling simulator through an immersive virtual environment. Participants crossed intersections with continuous…

  7. New Evidence on the Development of the Word "Big."

    ERIC Educational Resources Information Center

    Sena, Rhonda; Smith, Linda B.

    1990-01-01

    Results indicate that the curvilinear trend in children's understanding of the word "big" is not obtained in all stimulus contexts. This suggests that the meaning and use of "big" are complex and may not refer simply to larger objects in a set. Proposes that the meaning of "big" constitutes a dynamic system driven by many perceptual,…

  8. The Relationship between Form and Function Level Receptive Prosodic Abilities in Autism

    ERIC Educational Resources Information Center

    Jarvinen-Pasley, Anna; Peppe, Susan; King-Smith, Gavin; Heaton, Pamela

    2008-01-01

    Prosody can be conceived as having form (auditory-perceptual characteristics) and function (pragmatic/linguistic meaning). No known studies have examined the relationship between form- and function-level prosodic skills in relation to the effects of stimulus length and/or complexity upon such abilities in autism. Research in this area is both…

  9. Conveying Clinical Reasoning Based on Visual Observation via Eye-Movement Modelling Examples

    ERIC Educational Resources Information Center

    Jarodzka, Halszka; Balslev, Thomas; Holmqvist, Kenneth; Nystrom, Marcus; Scheiter, Katharina; Gerjets, Peter; Eika, Berit

    2012-01-01

    Complex perceptual tasks, like clinical reasoning based on visual observations of patients, require not only conceptual knowledge about diagnostic classes but also the skills to visually search for symptoms and interpret these observations. However, medical education so far has focused very little on how visual observation skills can be…

  10. Laterality and Directional Preferences in Preschool Children.

    ERIC Educational Resources Information Center

    Tan, Lesley E.

    1982-01-01

    Directional preference for horizontal hand movements was investigated in 49 right- and 49 left-handed four-year-olds using three drawing tests. Directionality for more complex perceptual-motor tasks has a different basis than directionality for simple tasks; such directionality is established at a later age but only for the right hand. (Author/CM)

  11. Acquisition of a Static Human Target in Complex Terrain: Study of Perceptual Learning Utilizing Virtual Environments

    DTIC Science & Technology

    2008-09-01

    be definitively named as a cement truck, a shoe, an old style dual bell alarm clock, a cartoonish alligator, and what appears to be a raccoon rear...impinging stimulus. Internalized detectors develop…and increase the speed, accuracy, and general fluency with which the stimuli are processed

  12. Gymnastic Judges Benefit from Their Own Motor Experience as Gymnasts

    ERIC Educational Resources Information Center

    Pizzera, Alexandra

    2012-01-01

    Gymnastic judges have the difficult task of evaluating highly complex skills. My purpose in the current study was to examine evidence that judges use their sensorimotor experiences to enhance their perceptual judgments. In a video test, 58 judges rated 31 gymnasts performing a balance beam skill. I compared decision quality between judges who…

  13. Intact Spectral but Abnormal Temporal Processing of Auditory Stimuli in Autism

    ERIC Educational Resources Information Center

    Groen, Wouter B.; van Orsouw, Linda; ter Huurne, Niels; Swinkels, Sophie; van der Gaag, Rutger-Jan; Buitelaar, Jan K.; Zwiers, Marcel P.

    2009-01-01

    The perceptual pattern in autism has been related to either a specific localized processing deficit or a pathway-independent, complexity-specific anomaly. We examined auditory perception in autism using an auditory disembedding task that required spectral and temporal integration. 23 children with high-functioning-autism and 23 matched controls…

  14. The integration of temporally shifted visual feedback in a synchronization task: The role of perceptual stability in a visuo-proprioceptive conflict situation.

    PubMed

    Ceux, Tanja; Montagne, Gilles; Buekers, Martinus J

    2010-12-01

    The present study examined whether the beneficial role of coherently grouped visual motion structures for performing complex (interlimb) coordination patterns can be generalized to synchronization behavior in a visuo-proprioceptive conflict situation. To achieve this goal, 17 participants had to synchronize a self-moved circle, representing the arm movement, with a visual target signal corresponding to five temporally shifted visual feedback conditions (0%, 25%, 50%, 75%, and 100% of the target cycle duration) in three synchronization modes (in-phase, anti-phase, and intermediate). The results showed that the perception of a newly generated perceptual Gestalt between the visual feedback of the arm and the target signal facilitated the synchronization performance in the preferred in-phase synchronization mode, in contrast to the less stable anti-phase and intermediate modes. Our findings suggest that the complexity of the synchronization mode defines to what extent the visual and/or proprioceptive information source affects the synchronization performance in the present unimanual synchronization task. Copyright © 2010 Elsevier B.V. All rights reserved.

  15. Observer weighting strategies in interaural time-difference discrimination and monaural level discrimination for a multi-tone complex

    NASA Astrophysics Data System (ADS)

    Dye, Raymond H.; Stellmack, Mark A.; Jurcin, Noah F.

    2005-05-01

    Two experiments measured listeners' abilities to weight information from different components in a complex of 553, 753, and 953 Hz. The goal was to determine whether or not the ability to adjust perceptual weights generalized across tasks. Weights were measured by binary logistic regression between stimulus values that were sampled from Gaussian distributions and listeners' responses. The first task was interaural time discrimination in which listeners judged the laterality of the target component. The second task was monaural level discrimination in which listeners indicated whether the level of the target component decreased or increased across two intervals. For both experiments, each of the three components served as the target. Ten listeners participated in both experiments. The results showed that those individuals who adjusted perceptual weights in the interaural time experiment could also do so in the monaural level discrimination task. The fact that the same individuals appeared to be analytic in both tasks is an indication that the weights measure the ability to attend to a particular region of the spectrum while ignoring other spectral regions.
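
    The weight-estimation method itself is easy to sketch: simulate a listener who combines the per-component perturbations with unequal weights, then recover those weights by logistic regression of the binary responses on the stimulus values. Everything below (trial count, internal-noise level, fitting by plain gradient ascent) is an illustrative assumption, not the study's procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-trial Gaussian perturbations of the 553-, 753-, and 953-Hz
# components, one column per component.
X = rng.normal(0.0, 1.0, size=(5000, 3))
true_w = np.array([0.7, 0.2, 0.1])   # listener mostly attends component 1
resp = (X @ true_w + rng.normal(0.0, 1.0, 5000) > 0).astype(float)

def fit_logistic(X, y, lr=0.5, steps=3000):
    """Plain gradient-ascent logistic regression (no intercept)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

w_hat = fit_logistic(X, resp)
weights = w_hat / w_hat.sum()   # normalized perceptual weights
```

    A listener weighting all components equally would come out near (1/3, 1/3, 1/3); an "analytic" listener attending to one spectral region shows one dominant weight, which is the pattern the two experiments compare across tasks.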

  16. Mathematical Modeling of Language Games

    NASA Astrophysics Data System (ADS)

    Loreto, Vittorio; Baronchelli, Andrea; Puglisi, Andrea

    In this chapter we explore several language games of increasing complexity. We first consider the so-called Naming Game, possibly the simplest example of the complex processes leading progressively to the establishment of human-like languages. In this framework, a globally shared vocabulary emerges as a result of local adjustments of individual word-meaning associations. The emergence of a common vocabulary represents only a first stage; it is then interesting to investigate the emergence of higher forms of agreement, e.g., compositionality, categories, syntactic or grammatical structures. As an example in this direction we consider the so-called Category Game. Here one focuses on the process by which a population of individuals manages to categorize a single perceptually continuous channel. The emergence of a discrete shared set of categories out of a continuous perceptual channel is a notoriously difficult problem, relevant for color categorization, vowel formation, etc. The central result here is the emergence of a hierarchical category structure made of two distinct levels: a basic layer, responsible for fine discrimination of the environment, and a shared linguistic layer that groups together perceptions to guarantee communicative success.
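
    The Naming Game is small enough to simulate directly. A minimal sketch (uniform random pairings, a single object; the population size and cutoff are illustrative):

```python
import random

def naming_game(n_agents=20, max_rounds=100_000, seed=1):
    """Run a minimal Naming Game for one object.

    Each round a random speaker utters a word from its inventory
    (inventing one if the inventory is empty). On success, speaker and
    hearer collapse their inventories to that word; on failure, the
    hearer adds it. Returns the number of rounds until the whole
    population shares exactly one word.
    """
    random.seed(seed)
    inventories = [set() for _ in range(n_agents)]
    for t in range(1, max_rounds + 1):
        speaker, hearer = random.sample(range(n_agents), 2)
        if not inventories[speaker]:
            inventories[speaker].add(f"word{t}")     # invent a new name
        word = random.choice(sorted(inventories[speaker]))
        if word in inventories[hearer]:              # success
            inventories[speaker] = {word}
            inventories[hearer] = {word}
        else:                                        # failure
            inventories[hearer].add(word)
        if all(inv == {word} for inv in inventories):
            return t
    return max_rounds

rounds_to_consensus = naming_game()
```

    The characteristic trajectory, a build-up of competing words followed by a sharp collapse onto a single winner, is the "local adjustments" route to a globally shared vocabulary described above.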

  17. Qualitatively similar processing for own- and other-race faces: Evidence from efficiency and equivalent input noise.

    PubMed

    Shafai, Fakhri; Oruc, Ipek

    2018-02-01

    The other-race effect is the finding of diminished performance in recognition of other-race faces compared to those of own-race. It has been suggested that the other-race effect stems from specialized expert processes being tuned exclusively to own-race faces. In the present study, we measured recognition contrast thresholds for own- and other-race faces as well as houses for Caucasian observers. We have factored face recognition performance into two invariant aspects of visual function: efficiency, which is related to neural computations and processing demanded by the task, and equivalent input noise, related to signal degradation within the visual system. We hypothesized that if expert processes are available only to own-race faces, this should translate into substantially greater recognition efficiencies for own-race compared to other-race faces. Instead, we found similar recognition efficiencies for both own- and other-race faces. The other-race effect manifested as increased equivalent input noise. These results argue against qualitatively distinct perceptual processes. Instead they suggest that for Caucasian observers, similar neural computations underlie recognition of own- and other-race faces. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Development of an in-vehicle intersection collision countermeasure

    NASA Astrophysics Data System (ADS)

    Pierowicz, John A.

    1997-02-01

    Intersection collisions constitute approximately twenty-six percent of all accidents in the United States. Because of their complexity and their demands on the perceptual and decision-making abilities of the driver, intersections present an increased risk of collisions between automobiles. This situation provides an opportunity to apply advanced sensor and processing capabilities to prevent these collisions. A program to determine the characteristics of intersection collisions and identify potential countermeasures is described. This program, sponsored by the National Highway Traffic Safety Administration, utilized accident data to develop a taxonomy of intersection crashes. This taxonomy was used to develop a concept for an intersection collision avoidance countermeasure. The concept utilizes in-vehicle position sensing, dynamic vehicle status, a millimeter-wave radar system, and an in-vehicle computer system to provide inputs to an intersection collision avoidance algorithm. Detection of a potential violation of a traffic control device, or of proceeding into the intersection with an inadequate gap, leads to the presentation of a warning to the driver. These warnings are presented primarily via a head-up display and haptic feedback. Roadside-to-vehicle communication provides information regarding phased traffic signals. Active control of the vehicle's brake and steering systems is also described. Progress in the development of the system is presented along with the schedule of future activities.

  19. Comparison of auditory stream segregation in sighted and early blind individuals.

    PubMed

    Boroujeni, Fatemeh Moghadasi; Heidari, Fatemeh; Rouzbahani, Masoumeh; Kamali, Mohammad

    2017-01-18

    An important characteristic of the auditory system is the capacity to analyze complex sounds and make decisions about the sources of their constituent parts. Blind individuals compensate for the lack of visual information through increased input from other sensory modalities, including increased auditory information. The purpose of the current study was to compare the fission boundary (FB) threshold of sighted and early blind individuals across spectral conditions using a psychoacoustic auditory stream segregation (ASS) test. This study was conducted on 16 sighted and 16 early blind adults. The stimuli were pure tones A and B presented sequentially in a triplet ABA-ABA pattern at an intensity of 40 dB SL. The tone A frequency took base values of 500, 1000, and 2000 Hz. The tone B frequency was 4-100% above the base frequency. Blind individuals had significantly lower FB thresholds than sighted individuals. FB was independent of the frequency of tone A when expressed as a difference in the number of equivalent rectangular bandwidths (ERBs). Early blindness may increase perceptual separation of acoustic stimuli, helping to form accurate representations of the world. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
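Expressing frequency separations in ERB units, as in the abstract above, can be done with the standard Glasberg & Moore (1990) ERB-number (Cam) scale. The snippet below is an illustrative sketch only; the function names are ours, and the study's exact analysis may differ:

```python
import math

def erb_number(f_hz):
    """ERB-number (Cam) of a frequency, Glasberg & Moore (1990) scale."""
    return 21.4 * math.log10(4.37 * f_hz / 1000.0 + 1.0)

def delta_erb(f_a, f_b):
    """Separation of tones A and B expressed in ERB units."""
    return erb_number(f_b) - erb_number(f_a)

# A 10% frequency separation at each base frequency used in the study
for f_a in (500, 1000, 2000):
    print(f_a, "Hz:", round(delta_erb(f_a, 1.10 * f_a), 3), "ERB")
```

A fixed percentage separation maps to broadly comparable ERB distances across base frequencies, which is why an FB threshold can be frequency-independent on the ERB scale.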

  20. A closed-loop neurobotic system for fine touch sensing

    NASA Astrophysics Data System (ADS)

    Bologna, L. L.; Pinoteau, J.; Passot, J.-B.; Garrido, J. A.; Vogel, J.; Ros Vidal, E.; Arleo, A.

    2013-08-01

    Objective. Fine touch sensing relies on peripheral-to-central neurotransmission of somesthetic percepts, as well as on active motion policies shaping tactile exploration. This paper presents a novel neuroengineering framework for robotic applications based on the multistage processing of fine tactile information in the closed action-perception loop. Approach. The integrated system modules focus on (i) neural coding principles of spatiotemporal spiking patterns at the periphery of the somatosensory pathway, (ii) probabilistic decoding mechanisms mediating cortical-like tactile recognition and (iii) decision-making and low-level motor adaptation underlying active touch sensing. We probed the resulting neural architecture through a Braille reading task. Main results. Our results on the peripheral encoding of primary contact features are consistent with experimental data on human slow-adapting type I mechanoreceptors. They also suggest that second-order processing by cuneate neurons may resolve perceptual ambiguities, contributing to fast and high-performing online discrimination of Braille inputs by a downstream probabilistic decoder. The implemented multilevel adaptive control provides robustness to motion inaccuracy, while making the number of finger accelerations covary with Braille character complexity. The resulting modulation of fingertip kinematics is coherent with that observed in human Braille readers. Significance. This work provides a basis for the design and implementation of modular neuromimetic systems for fine touch discrimination in robotics.

  1. Microgravity vestibular investigations (10-IML-1)

    NASA Technical Reports Server (NTRS)

    Reschke, Millard F.

    1992-01-01

    Our perception of how we are oriented in space depends on the interaction of virtually every sensory system. For example, to move about in our environment we integrate inputs in our brain from the visual, haptic (kinesthetic, proprioceptive, and cutaneous), and auditory systems, as well as from the labyrinths. In addition to this multimodal system for orientation, our expectations about the direction and speed of our chosen movement are also important. Changes in our environment and the way we interact with new stimuli result in a different interpretation by the nervous system of the incoming sensory information, and we adapt to the change in appropriate ways. Because our orientation system is adaptable and complex, it is often difficult to trace a response or change in behavior to any one source of information in this synergistic orientation system. However, with a carefully designed investigation, it is possible to measure signals at the appropriate level of response (both electrophysiological and perceptual) and determine the effect that stimulus rearrangement has on our sense of orientation. The environment of orbital flight represents the stimulus rearrangement that is our immediate concern. The Microgravity Vestibular Investigations (MVI) represent a group of experiments designed to investigate the effects of orbital flight and a return to Earth on our orientation system.

  2. Connectivity in the human brain dissociates entropy and complexity of auditory inputs☆

    PubMed Central

    Nastase, Samuel A.; Iacovella, Vittorio; Davis, Ben; Hasson, Uri

    2015-01-01

    Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators. PMID:25536493
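The entropy dimension described above can be made concrete with a first-order Shannon entropy estimate over a symbol sequence. This is an illustrative sketch only; the study's actual entropy and complexity measures are more elaborate than a single-symbol estimate:

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """First-order Shannon entropy (bits/symbol) of a discrete sequence."""
    counts = Counter(seq)
    n = len(seq)
    # 0.0 - sum(...) keeps the zero-entropy case as +0.0 rather than -0.0
    return 0.0 - sum((c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy("A" * 100))     # fully regular sequence: 0 bits
print(shannon_entropy("ABCD" * 25))   # uniform over 4 symbols: 2 bits
```

Both extremes can be produced by simple generators; by the axiom cited above, complexity measures instead peak between them, at intermediate entropy.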

  3. Effect of tDCS on task relevant and irrelevant perceptual learning of complex objects.

    PubMed

    Van Meel, Chayenne; Daniels, Nicky; de Beeck, Hans Op; Baeck, Annelies

    2016-01-01

    During perceptual learning, the visual representations in the brain are altered, but the causal role of these changes has not yet been fully characterized. We used transcranial direct current stimulation (tDCS) to investigate the role of higher visual regions in lateral occipital cortex (LO) in perceptual learning with complex objects. We also investigated whether object learning depends on the relevance of the objects for the learning task. Participants were trained in two tasks: object recognition using a backward masking paradigm, and an orientation judgment task. During both tasks, an object with a red line on top of it was presented in each trial. The crucial difference between the tasks was the relevance of the object: the object was relevant for the object recognition task, but not for the orientation judgment task. During training, half of the participants received anodal tDCS stimulation targeted at LO. Afterwards, participants were tested on how well they recognized the trained objects, the irrelevant objects presented during the orientation judgment task, and a set of completely new objects. Participants stimulated with tDCS during training showed larger improvements in performance than participants in the sham condition. No learning effect was found for the objects presented during the orientation judgment task. To conclude, this study suggests a causal role of LO in relevant object learning, but given the rather low spatial resolution of tDCS, more research on the specificity of this effect is needed. Further, mere exposure is not sufficient to train object recognition in our paradigm.

  4. Complexity and non-commutativity of learning operations on graphs.

    PubMed

    Atmanspacher, Harald; Filk, Thomas

    2006-07-01

    We present results from numerical studies of supervised learning operations in small recurrent networks considered as graphs, leading from a given set of input conditions to predetermined outputs. Graphs that have optimized their output for particular inputs with respect to predetermined outputs are asymptotically stable and can be characterized by attractors, which form a representation space for an associative multiplicative structure of input operations. As the mapping from a series of inputs onto a series of such attractors generally depends on the sequence of inputs, this structure is generally non-commutative. Moreover, the size of the set of attractors, indicating the complexity of learning, is found to behave non-monotonically as learning proceeds. A tentative relation between this complexity and the notion of pragmatic information is indicated.

  5. User's guide for a computer program for calculating the zero-lift wave drag of complex aircraft configurations

    NASA Technical Reports Server (NTRS)

    Craidon, C. B.

    1983-01-01

    A computer program was developed to extend the geometry input capabilities of previous versions of a supersonic zero lift wave drag computer program. The arbitrary geometry input description is flexible enough to describe almost any complex aircraft concept, so that highly accurate wave drag analysis can now be performed because complex geometries can be represented accurately and do not have to be modified to meet the requirements of a restricted input format.

  6. Evidence of different underlying processes in pattern recall and decision-making.

    PubMed

    Gorman, Adam D; Abernethy, Bruce; Farrow, Damian

    2015-01-01

    The visual search characteristics of expert and novice basketball players were recorded during pattern recall and decision-making tasks to determine whether the two tasks shared common visual-perceptual processing strategies. The order in which participants entered the pattern elements in the recall task was also analysed to further examine the nature of the visual-perceptual strategies and the relative emphasis placed upon particular pattern features. The experts demonstrated superior performance across the recall and decision-making tasks [see also Gorman, A. D., Abernethy, B., & Farrow, D. (2012). Classical pattern recall tests and the prospective nature of expert performance. The Quarterly Journal of Experimental Psychology, 65, 1151-1160; Gorman, A. D., Abernethy, B., & Farrow, D. (2013a). Is the relationship between pattern recall and decision-making influenced by anticipatory recall? The Quarterly Journal of Experimental Psychology, 66, 2219-2236], but a number of significant differences in the visual search data highlighted disparities in the processing strategies, suggesting that recall skill may utilize different underlying visual-perceptual processes than those required for accurate decision-making performance in the natural setting. Performance on the recall task was characterized by a proximal-to-distal order of entry of the pattern elements, with participants tending to enter the players located closest to the ball carrier earlier than those located more distal to the ball carrier. The results provide further evidence of the underlying perceptual processes employed by experts when extracting visual information from complex and dynamic patterns.

  7. DriveID: safety innovation through individuation.

    PubMed

    Sawyer, Ben; Teo, Grace; Mouloua, Mustapha

    2012-01-01

    The driving task is highly complex and places considerable perceptual, physical and cognitive demands on the driver. As driving is fundamentally an information processing activity, distracted or impaired drivers have diminished safety margins compared with non-distracted drivers (Hancock and Parasuraman, 1992; TRB 1998 a & b). This competition for sensory and decision-making capacities can lead to failures that cost lives. Some groups, teen and elderly drivers for example, have patterns of systematically poor perceptual, physical and cognitive performance while driving. Although there are technologies developed to aid these different drivers, these systems are often misused and underutilized. The DriveID project aims to design and develop a passive, automated face identification system capable of robustly identifying the driver of the vehicle, retrieving a stored profile, and intelligently prescribing specific accident prevention systems and driving environment customizations.

  8. Memory systems, processes, and tasks: taxonomic clarification via factor analysis.

    PubMed

    Bruss, Peter J; Mitchell, David B

    2009-01-01

    The nature of various memory systems was examined using factor analysis. We reanalyzed data from 11 memory tasks previously reported in Mitchell and Bruss (2003). Four well-defined factors emerged, closely resembling episodic and semantic memory and conceptual and perceptual implicit memory, in line with both memory systems and transfer-appropriate processing accounts. To explore taxonomic issues, we ran separate analyses on the implicit tasks. Using a cross-format manipulation (pictures vs. words), we identified 3 prototypical tasks. Word fragment completion and picture fragment identification tasks were "factor pure," tapping perceptual processes uniquely. Category exemplar generation revealed its conceptual nature, yielding both cross-format priming and a picture superiority effect. In contrast, word stem completion and picture naming were more complex, revealing attributes of both processes.

  9. Perceptual and Physiological Responses to Jackson Pollock's Fractals

    PubMed Central

    Taylor, Richard P.; Spehar, Branka; Van Donkelaar, Paul; Hagerhall, Caroline M.

    2011-01-01

    Fractals have been very successful in quantifying the visual complexity exhibited by many natural patterns, and have captured the imagination of scientists and artists alike. Our research has shown that the poured patterns of the American abstract painter Jackson Pollock are also fractal. This discovery raises an intriguing possibility – are the visual characteristics of fractals responsible for the long-term appeal of Pollock's work? To address this question, we have conducted 10 years of scientific investigation of human response to fractals and here we present, for the first time, a review of this research that examines the inter-relationship between the various results. The investigations include eye tracking, visual preference, skin conductance, and EEG measurement techniques. We discuss the artistic implications of the positive perceptual and physiological responses to fractal patterns. PMID:21734876

  10. Subjective evaluation with FAA criteria: A multidimensional scaling approach. [ground track control management

    NASA Technical Reports Server (NTRS)

    Kreifeldt, J. G.; Parkin, L.; Wempe, T. E.; Huff, E. F.

    1975-01-01

    Perceived orderliness in the ground tracks of five A/C during their simulated flights was studied. Dynamically developing ground tracks for five A/C from 21 separate runs were reproduced from computer storage and displayed on CRTs to professional pilots and controllers for their evaluations and preferences under several criteria. The ground tracks were developed in 20 seconds, as opposed to the 5 minutes of simulated flight, using speedup techniques for display. Metric and nonmetric multidimensional scaling techniques are being used to analyze the subjective responses in an effort to: (1) determine the meaningfulness of basing decisions on such complex subjective criteria; (2) compare pilot/controller perceptual spaces; (3) determine the dimensionality of the subjects' perceptual spaces; and thereby (4) determine objective measures suitable for comparing alternative traffic management simulations.

  11. Depth image enhancement using perceptual texture priors

    NASA Astrophysics Data System (ADS)

    Bang, Duhyeon; Shim, Hyunjung

    2015-03-01

    A depth camera is widely used in various applications because it provides a depth image of the scene in real time. However, due to limited power consumption, depth cameras exhibit severe noise and cannot provide high-quality 3D data. Although a smoothness prior is often employed to suppress depth noise, it discards geometric details, degrading distance resolution and hindering realism in 3D content. In this paper, we propose a perceptual depth image enhancement technique that automatically recovers the depth details of various textures, using a statistical framework inspired by the human mechanism of perceiving surface details through texture priors. We construct a database composed of high-quality normals. Based on recent studies in human visual perception (HVP), we select pattern density as the primary feature for classifying textures. Based on the classification results, we match and substitute the noisy input normals with high-quality normals from the database. As a result, our method provides a high-quality depth image that preserves surface details. We expect our work to be effective in enhancing the details of depth images from 3D sensors and in providing a high-fidelity virtual reality experience.

  12. When cognition kicks in: working memory and speech understanding in noise.

    PubMed

    Rönnberg, Jerker; Rudner, Mary; Lunner, Thomas; Zekveld, Adriana A

    2010-01-01

    Perceptual load and cognitive load can be separately manipulated and dissociated in their effects on speech understanding in noise. The Ease of Language Understanding model assumes a theoretical position where perceptual task characteristics interact with the individual's implicit capacities to extract the phonological elements of speech. Phonological precision and speed of lexical access are important determinants for listening in adverse conditions. If there are mismatches between the phonological elements perceived and phonological representations in long-term memory, explicit working memory (WM)-related capacities will be continually invoked to reconstruct and infer the contents of the ongoing discourse. Whether this induces a high cognitive load or not will in turn depend on the individual's storage and processing capacities in WM. Data suggest that modulated noise maskers may serve as triggers for speech maskers and therefore induce a WM, explicit mode of processing. Individuals with high WM capacity benefit more than low WM-capacity individuals from fast amplitude compression at low or negative input speech-to-noise ratios. The general conclusion is that there is an overarching interaction between the focal purpose of processing in the primary listening task and the extent to which a secondary, distracting task taps into these processes.

  13. Relative saliency in change signals affects perceptual comparison and decision processes in change detection.

    PubMed

    Yang, Cheng-Ta

    2011-12-01

    Change detection requires perceptual comparison and decision processes on different features of multiattribute objects. How relative salience between two feature-changes influences the processes has not been addressed. This study used the systems factorial technology to investigate the processes when detecting changes in a Gabor patch with visual inputs from orientation and spatial frequency channels. Two feature-changes were equally salient in Experiment 1, but a frequency-change was more salient than an orientation-change in Experiment 2. Results showed that all four observers adopted parallel self-terminating processing with limited- to unlimited-capacity processing in Experiment 1. In Experiment 2, one observer used parallel self-terminating processing with unlimited-capacity processing, and the others adopted serial self-terminating processing with limited- to unlimited-capacity processing to detect changes. Postexperimental interview revealed that subjective utility of feature information underlay the adoption of a decision strategy. These results highlight that observers alter decision strategies in change detection depending on the relative saliency in change signals, with relative saliency being determined by both physical salience and subjective weight of feature information. When relative salience exists, individual differences in the process characteristics emerge.

  14. Perceptuo-motor compatibility governs multisensory integration in bimanual coordination dynamics.

    PubMed

    Zelic, Gregory; Mottet, Denis; Lagarde, Julien

    2016-02-01

    The brain has the remarkable ability to bind together inputs from different sensory origin into a coherent percept. Behavioral benefits can result from such ability, e.g., a person typically responds faster and more accurately to cross-modal stimuli than to unimodal stimuli. To date, it is, however, largely unknown whether such multisensory benefits, shown for discrete reactive behaviors, generalize to the continuous coordination of movements. The present study addressed multisensory integration from the perspective of bimanual coordination dynamics, where the perceptual activity no longer triggers a single response but continuously guides the motor action. The task consisted in coordinating anti-symmetrically the continuous flexion-extension of the index fingers, while synchronizing with an external pacer. Three different configurations of metronome were tested, for which we examined whether a cross-modal pacing (audio-tactile beats) improved the stability of the coordination in comparison with unimodal pacing condition (auditory or tactile beats). We found a more stable bimanual coordination for cross-modal pacing, but only when the metronome configuration directly matched the anti-symmetric coordination pattern. We conclude that multisensory integration can benefit the continuous coordination of movements; however, this is constrained by whether the perceptual and motor activities match in space and time.

  15. Subthalamic nucleus stimulation impairs emotional conflict adaptation in Parkinson's disease.

    PubMed

    Irmen, Friederike; Huebl, Julius; Schroll, Henning; Brücke, Christof; Schneider, Gerd-Helge; Hamker, Fred H; Kühn, Andrea A

    2017-10-01

    The subthalamic nucleus (STN) occupies a strategic position in the motor network, slowing down responses in situations with conflicting perceptual input. Recent evidence suggests a role of the STN in emotion processing through strong connections with emotion recognition structures. As deep brain stimulation (DBS) of the STN in patients with Parkinson's disease (PD) inhibits monitoring of perceptual and value-based conflict, STN DBS may also interfere with emotional conflict processing. To assess this possibility, we used an emotional Stroop paradigm. Subjects categorized face stimuli according to their emotional expression while ignoring emotionally congruent or incongruent superimposed word labels. Eleven PD patients ON and OFF STN DBS and eleven age-matched healthy subjects performed the task. We found conflict-induced response slowing in healthy controls and in PD patients OFF DBS, but not ON DBS, suggesting that STN DBS decreases adaptation to within-trial conflict. OFF DBS, patients showed more conflict-induced slowing for negative conflict stimuli, an effect that was diminished by STN DBS. Computational modelling of STN influence on conflict adaptation indicated that DBS interferes via increased baseline activity. © The Author (2017). Published by Oxford University Press.

  16. Differential cognitive and perceptual correlates of print reading versus braille reading.

    PubMed

    Veispak, Anneli; Boets, Bart; Ghesquière, Pol

    2013-01-01

    The relations between reading and auditory, speech, phonological, and tactile spatial processing are investigated in a Dutch-speaking sample of blind braille readers as compared to sighted print readers. Performance is assessed in blind and sighted children and adults. Regarding phonological ability, braille readers perform as well as print readers on phonological awareness, better on verbal short-term memory, and significantly worse on lexical retrieval. The groups do not differ on speech perception or auditory processing. Braille readers, however, have more sensitive fingers than print readers. Investigation of the relations between these cognitive and perceptual skills and reading performance indicates that in the group of braille readers, auditory temporal processing has a longer-lasting and stronger impact not only on phonological abilities, which have to satisfy the high processing demands of the strictly serial language input, but also directly on reading ability itself. Print readers switch between grapho-phonological and lexical reading modes depending on the familiarity of the items. Furthermore, auditory temporal processing and speech perception, which were substantially interrelated with phonological processing, had no direct associations with print reading measures. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. STDP in lateral connections creates category-based perceptual cycles for invariance learning with multiple stimuli.

    PubMed

    Evans, Benjamin D; Stringer, Simon M

    2015-04-01

    Learning to recognise objects and faces is an important and challenging problem tackled by the primate ventral visual system. One major difficulty lies in recognising an object despite profound differences in the retinal images it projects, due to changes in view, scale, position and other identity-preserving transformations. Several models of the ventral visual system have been successful in coping with these issues, but have typically been privileged by exposure to only one object at a time. In natural scenes, however, the challenges of object recognition are typically further compounded by the presence of several objects which should be perceived as distinct entities. In the present work, we explore one possible mechanism by which the visual system may overcome these two difficulties simultaneously, through segmenting unseen (artificial) stimuli using information about their category encoded in plastic lateral connections. We demonstrate that these experience-guided lateral interactions robustly organise input representations into perceptual cycles, allowing feed-forward connections trained with spike-timing-dependent plasticity to form independent, translation-invariant output representations. We present these simulations as a functional explanation for the role of plasticity in the lateral connectivity of visual cortex.

  18. Suppressive mechanisms in visual motion processing: from perception to intelligence

    PubMed Central

    Tadin, Duje

    2015-01-01

    Perception operates on an immense amount of incoming information that greatly exceeds the brain's processing capacity. Because of this fundamental limitation, the ability to suppress irrelevant information is a key determinant of perceptual efficiency. Here, I will review a series of studies investigating suppressive mechanisms in visual motion processing, namely perceptual suppression of large, background-like motions. These spatial suppression mechanisms are adaptive, operating only when sensory inputs are sufficiently robust to guarantee visibility. Converging correlational and causal evidence links these behavioral results with inhibitory center-surround mechanisms, namely those in cortical area MT. Spatial suppression is abnormally weak in several special populations, including the elderly and those with schizophrenia—a deficit that is evidenced by better-than-normal direction discriminations of large moving stimuli. Theoretical work shows that this abnormal weakening of spatial suppression should result in motion segregation deficits, but direct behavioral support of this hypothesis is lacking. Finally, I will argue that the ability to suppress information is a fundamental neural process that applies not only to perception but also to cognition in general. Supporting this argument, I will discuss recent research that shows individual differences in spatial suppression of motion signals strongly predict individual variations in IQ scores. PMID:26299386

  19. Common neural systems associated with the recognition of famous faces and names: An event-related fMRI study

    PubMed Central

    Nielson, Kristy A.; Seidenberg, Michael; Woodard, John L.; Durgerian, Sally; Zhang, Qi; Gross, William L.; Gander, Amelia; Guidotti, Leslie M.; Antuono, Piero; Rao, Stephen M.

    2010-01-01

    Person recognition can be accomplished through several modalities (face, name, voice). Lesion, neurophysiology and neuroimaging studies have been conducted in an attempt to determine the similarities and differences in the neural networks associated with person identity via different modality inputs. The current study used event-related functional-MRI in 17 healthy participants to directly compare activation in response to randomly presented famous and non-famous names and faces (25 stimuli in each of the four categories). Findings indicated distinct areas of activation that differed for faces and names in regions typically associated with pre-semantic perceptual processes. In contrast, overlapping brain regions were activated in areas associated with the retrieval of biographical knowledge and associated social affective features. Specifically, activation for famous faces was primarily right lateralized and famous names were left lateralized. However, for both stimuli, similar areas of bilateral activity were observed in the early phases of perceptual processing. Activation for fame, irrespective of stimulus modality, activated an extensive left hemisphere network, with bilateral activity observed in the hippocampi, posterior cingulate, and middle temporal gyri. Findings are discussed within the framework of recent proposals concerning the neural network of person identification. PMID:20167415

  20. Family medicine outpatient encounters are more complex than those of cardiology and psychiatry.

    PubMed

    Katerndahl, David; Wood, Robert; Jaén, Carlos Roberto

    2011-01-01

    BACKGROUND: Comparison studies suggest that the guideline-concordant care provided for specific medical conditions is less optimal in primary care than in cardiology and psychiatry settings. The purpose of this study is to estimate the relative complexity of patient encounters in general/family practice, cardiology, and psychiatry settings. METHODS: A secondary analysis of the 2000 National Ambulatory Medical Care Survey data for ambulatory patients seen in general/family practice, cardiology, and psychiatry settings was performed. The complexity for each variable was estimated as the quantity weighted by variability and diversity. RESULTS: There is minimal difference in the unadjusted input and total encounter complexity of general/family practice and cardiology; psychiatry's input is less complex. Cardiology encounters involved more input quantitatively, but the diversity of general/family practice input eliminated the difference. Cardiology also involved more complex output. However, when the duration of visit is factored in, the complexity of care provided per hour in general/family practice is 33% greater than in cardiology and 5 times greater than in psychiatry. CONCLUSIONS: Care during family physician visits is more complex per hour than care during visits to cardiologists or psychiatrists. This may account for a lower rate of completion of process items measured for quality of care.

  1. NEUROPSYCHOLOGICAL REMEDIATION OF HYPERACTIVE CHILDREN

    PubMed Central

    Agarwal, Neena; Rao, Shobini L.

    1997-01-01

    Hyperkinesis is associated with deficits of attention (poor allocation of attention resources, susceptibility to interference and perseveration); vigilance and perceptual sensitivity. Three boys aged 7-8 years with simple hyperkinesis were given cognitive tasks to improve the above functions in daily one hour sessions for a month. The children improved significantly in the above functions and behaviour. Three other children aged 5-8 years with simple hyperkinesis who were on medication improved only slightly in their behaviour during this period. Behavioural intervention and parental counselling were additional inputs to the children in both groups. Neuropsychological remediation combined with parental counselling and behavioural intervention shows promise in treating hyperactive children. PMID:21584098

  2. All words are not created equal: Expectations about word length guide infant statistical learning

    PubMed Central

    Lew-Williams, Casey; Saffran, Jenny R.

    2011-01-01

    Infants have been described as ‘statistical learners’ capable of extracting structure (such as words) from patterned input (such as language). Here, we investigated whether prior knowledge influences how infants track transitional probabilities in word segmentation tasks. Are infants biased by prior experience when engaging in sequential statistical learning? In a laboratory simulation of learning across time, we exposed 9- and 10-month-old infants to a list of either bisyllabic or trisyllabic nonsense words, followed by a pause-free speech stream composed of a different set of bisyllabic or trisyllabic nonsense words. Listening times revealed successful segmentation of words from fluent speech only when words were uniformly bisyllabic or trisyllabic throughout both phases of the experiment. Hearing trisyllabic words during the pre-exposure phase derailed infants’ abilities to segment speech into bisyllabic words, and vice versa. We conclude that prior knowledge about word length equips infants with perceptual expectations that facilitate efficient processing of subsequent language input. PMID:22088408
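
    The transitional-probability statistic at the heart of such segmentation tasks is straightforward to compute. The sketch below is a minimal illustration, not the study's procedure; the toy syllable stream and the function name are invented for the example.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Return P(B | A) = count(A->B) / count(A) for adjacent syllables."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Toy stream built from two bisyllabic "words" (go-la and tu-pi):
stream = "go la tu pi go la go la tu pi tu pi go la".split()
tps = transitional_probabilities(stream)
```

    In this toy stream the within-word transitions (go->la, tu->pi) have probability 1.0 while between-word transitions are lower; infants are thought to exploit exactly this contrast to posit word boundaries.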

  3. Humans treat unreliable filled-in percepts as more real than veridical ones

    PubMed Central

    Ehinger, Benedikt V; Häusser, Katja; Ossandón, José P; König, Peter

    2017-01-01

    Humans often evaluate sensory signals according to their reliability for optimal decision-making. However, how do we evaluate percepts generated in the absence of direct input that are, therefore, completely unreliable? Here, we utilize the phenomenon of filling-in occurring at the physiological blind-spots to compare partially inferred and veridical percepts. Subjects chose between stimuli that elicit filling-in, and perceptually equivalent ones presented outside the blind-spots, looking for a Gabor stimulus without a small orthogonal inset. In ambiguous conditions, when the stimuli were physically identical and the inset was absent in both, subjects behaved opposite to optimal, preferring the blind-spot stimulus as the better example of a collinear stimulus, even though no relevant veridical information was available. Thus, a percept that is partially inferred is paradoxically considered more reliable than a percept based on external input. In other words: Humans treat filled-in inferred percepts as more real than veridical ones. DOI: http://dx.doi.org/10.7554/eLife.21761.001 PMID:28506359

  4. Does hearing two dialects at different times help infants learn dialect-specific rules?

    PubMed Central

    Gonzales, Kalim; Gerken, LouAnn; Gómez, Rebecca L.

    2015-01-01

    Infants might be better at teasing apart dialects with different language rules when hearing the dialects at different times, since language learners do not always combine input heard at different times. However, no previous research has independently varied the temporal distribution of conflicting language input. Twelve-month-olds heard two artificial language streams representing different dialects—a “pure stream” whose sentences adhered to abstract grammar rules like aX bY, and a “mixed stream” wherein any a- or b-word could precede any X- or Y-word. Infants were then tested for generalization of the pure stream’s rules to novel sentences. Supporting our hypothesis, infants showed generalization when the two streams’ sentences alternated in minutes-long intervals without any perceptually salient change across streams (Experiment 2), but not when all sentences from these same streams were randomly interleaved (Experiment 3). Results are interpreted in light of temporal context effects in word learning. PMID:25880342

  5. Temporal Context in Speech Processing and Attentional Stream Selection: A Behavioral and Neural Perspective

    PubMed Central

    Zion Golumbic, Elana M.; Poeppel, David; Schroeder, Charles E.

    2012-01-01

    The human capacity for processing speech is remarkable, especially given that information in speech unfolds over multiple time scales concurrently. Similarly notable is our ability to filter out extraneous sounds and focus our attention on one conversation, epitomized by the ‘Cocktail Party’ effect. Yet, the neural mechanisms underlying on-line speech decoding and attentional stream selection are not well understood. We review findings from behavioral and neurophysiological investigations that underscore the importance of the temporal structure of speech for achieving these perceptual feats. We discuss the hypothesis that entrainment of ambient neuronal oscillations to speech’s temporal structure, across multiple time scales, serves to facilitate its decoding and underlies the selection of an attended speech stream over other competing input. In this regard, speech decoding and attentional stream selection are examples of ‘active sensing’, emphasizing an interaction between proactive and predictive top-down modulation of neuronal dynamics and bottom-up sensory input. PMID:22285024

  6. Stimulus and response conflict processing during perceptual decision making.

    PubMed

    Wendelken, Carter; Ditterich, Jochen; Bunge, Silvia A; Carter, Cameron S

    2009-12-01

    Encoding and dealing with conflicting information is essential for successful decision making in a complex environment. In the present fMRI study, stimulus conflict and response conflict are contrasted in the context of a perceptual decision-making dot-motion discrimination task. Stimulus conflict was manipulated by varying dot-motion coherence along task-relevant and task-irrelevant dimensions. Response conflict was manipulated by varying whether or not competing stimulus dimensions provided evidence for the same or different responses. The right inferior frontal gyrus was involved specifically in the resolution of stimulus conflict, whereas the dorsal anterior cingulate cortex was shown to be sensitive to response conflict. Additionally, two regions that have been linked to perceptual decision making with dot-motion stimuli in monkey physiology studies were differentially engaged by stimulus conflict and response conflict. The middle temporal area, previously linked to processing of motion, was strongly affected by the presence of stimulus conflict. On the other hand, the superior parietal lobe, previously associated with accumulation of evidence for a response, was affected by the presence of response conflict. These results shed light on the neural mechanisms that support decision making in the presence of conflict, a cognitive operation fundamental to both basic survival and high-level cognition.

  7. Color filter array design based on a human visual model

    NASA Astrophysics Data System (ADS)

    Parmar, Manu; Reeves, Stanley J.

    2004-05-01

    To reduce cost and complexity associated with registering multiple color sensors, most consumer digital color cameras employ a single sensor. A mosaic of color filters is overlaid on a sensor array such that only one color channel is sampled per pixel location. The missing color values must be reconstructed from available data before the image is displayed. The quality of the reconstructed image depends fundamentally on the array pattern and the reconstruction technique. We present a design method for color filter array patterns that use red, green, and blue color channels in an RGB array. A model of the human visual response for luminance and opponent chrominance channels is used to characterize the perceptual error between a fully sampled and a reconstructed sparsely-sampled image. Demosaicking is accomplished using Wiener reconstruction. To ensure that the error criterion reflects perceptual effects, reconstruction is done in a perceptually uniform color space. A sequential backward selection algorithm is used to optimize the error criterion to obtain the sampling arrangement. Two different types of array patterns are designed: non-periodic and periodic arrays. The resulting array patterns outperform commonly used color filter arrays in terms of the error criterion.
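
    The sequential backward selection step named in the abstract can be sketched generically as follows. This is a hedged illustration: the error function below is a simple stand-in, not the paper's perceptual error criterion, and all names are hypothetical.

```python
def sequential_backward_selection(candidates, error_fn, keep):
    """Greedily remove one candidate at a time, always dropping the element
    whose removal yields the lowest error, until `keep` elements remain."""
    selected = list(candidates)
    while len(selected) > keep:
        best_err, best_drop = None, None
        for item in selected:
            trial = [s for s in selected if s != item]
            err = error_fn(trial)
            if best_err is None or err < best_err:
                best_err, best_drop = err, item
        selected.remove(best_drop)
    return selected

# Toy criterion: prefer evenly spaced sample positions on a line.
def spread_error(subset):
    xs = sorted(subset)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    return max(gaps) - min(gaps) if len(gaps) > 1 else 0.0

best = sequential_backward_selection([0, 1, 2, 3, 5, 8, 9, 10], spread_error, keep=4)
```

    In the paper's setting the candidates would be filter positions in the array and the error function would be the perceptual reconstruction error; the greedy structure is the same.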

  8. Display device-adapted video quality-of-experience assessment

    NASA Astrophysics Data System (ADS)

    Rehman, Abdul; Zeng, Kai; Wang, Zhou

    2015-03-01

    Today's viewers consume video content from a variety of connected devices, including smart phones, tablets, notebooks, TVs, and PCs. This imposes significant challenges for managing video traffic efficiently to ensure an acceptable quality-of-experience (QoE) for the end users as the perceptual quality of video content strongly depends on the properties of the display device and the viewing conditions. State-of-the-art full-reference objective video quality assessment algorithms do not take into account the combined impact of display device properties, viewing conditions, and video resolution while performing video quality assessment. We performed a subjective study in order to understand the impact of aforementioned factors on perceptual video QoE. We also propose a full reference video QoE measure, named SSIMplus, that provides real-time prediction of the perceptual quality of a video based on human visual system behaviors, video content characteristics (such as spatial and temporal complexity, and video resolution), display device properties (such as screen size, resolution, and brightness), and viewing conditions (such as viewing distance and angle). Experimental results have shown that the proposed algorithm outperforms state-of-the-art video quality measures in terms of accuracy and speed.

  9. An unexplained three-dimensional percept emerging from a bundle of lines.

    PubMed

    Altschuler, Eric L; Huang, Abigail E; Kim, Hee J; Battaglini, Luca; Roncato, Sergio

    2017-10-01

    Perceptual grouping has been extensively studied, but some areas are still unexplored; in particular, the figural organizations that emerge when bundles of intersecting lines are drawn. Here, we will describe some figural organizations that emerge after the superimposition of bundles of lines forming the profile of regular triangular waves. By manipulating the lines' jaggedness and junction geometry (regular or irregular X junction) we could generate the following organizations: (a) a grid, or a figural configuration in which both the lines and closed contours are perceived, (b) a figure-ground organization composed of figures separated by portions of the background, and (c) a corrugated surface appearing as a multifaceted polyhedral shell crossed by ridges and valleys. An experiment was conducted aimed at testing the role of the good-continuation and closure Gestalt factors. Good continuation prevails when the lines are straight or close to straightness, but its role is questionable in the appearance of a corrugated surface. This perceptual organization occurs despite the violation of the good-continuation rule and consists of a structure of such complexity as to challenge algorithms of computer vision and stimulate a deeper understanding of the perceptual interpretation of groups of lines.

  10. Expertise facilitates the transfer of anticipation skill across domains.

    PubMed

    Rosalie, Simon M; Müller, Sean

    2014-02-01

    It is unclear whether perceptual-motor skill transfer is based upon similarity between the learning and transfer domains per identical elements theory, or facilitated by an understanding of underlying principles in accordance with general principle theory. Here, the predictions of identical elements theory, general principle theory, and aspects of a recently proposed model for the transfer of perceptual-motor skill with respect to expertise in the learning and transfer domains are examined. The capabilities of expert karate athletes, near-expert karate athletes, and novices to anticipate and respond to stimulus skills derived from taekwondo and Australian football were investigated in ecologically valid contexts using an in situ temporal occlusion paradigm and complex whole-body perceptual-motor skills. Results indicated that the karate experts and near-experts are as capable of using visual information to anticipate and guide motor skill responses as domain experts and near-experts in the taekwondo transfer domain, but only karate experts could perform like domain experts in the Australian football transfer domain. Findings suggest that transfer of anticipation skill is based upon expertise and an understanding of principles but may be supplemented by similarities that exist between the stimulus and response elements of the learning and transfer domains.

  11. Considerations for the future development of virtual technology as a rehabilitation tool

    PubMed Central

    Kenyon, Robert V; Leigh, Jason; Keshner, Emily A

    2004-01-01

    Background Virtual environments (VE) are a powerful tool for various forms of rehabilitation. Coupling VE with high-speed networking [Tele-Immersion] that approaches speeds of 100 Gb/sec can greatly expand its influence in rehabilitation. Accordingly, these new networks will permit various peripherals attached to computers on this network to be connected and to act as fast as if connected to a local PC. This innovation may soon allow the development of previously unheard of networked rehabilitation systems. Rapid advances in this technology need to be coupled with an understanding of how human behavior is affected when immersed in the VE. Methods This paper will discuss various forms of VE that are currently available for rehabilitation. The characteristics of these new networks will be explained, and we will examine how such networks might be used to extend the rehabilitation clinic to remote areas. In addition, we will present data from an immersive dynamic virtual environment united with motion of a posture platform to record biomechanical and physiological responses to combined visual, vestibular, and proprioceptive inputs. A 6 degree-of-freedom force plate provides measurements of moments exerted on the base of support. Kinematic data from the head, trunk, and lower limb were collected using 3-D video motion analysis. Results Our data suggest that when there is a confluence of meaningful inputs, neither visual, vestibular, nor proprioceptive inputs are suppressed in healthy adults; the postural response is modulated by all existing sensory signals in a non-additive fashion. Individual perception of the sensory structure appears to be a significant component of the response to these protocols and underlies much of the observed response variability. Conclusion The ability to provide new technology for rehabilitation services is emerging as an important option for clinicians and patients.
The use of data mining software would help analyze the incoming data to provide both the patient and the therapist with evaluation of the current treatment and modifications needed for future therapies. Quantification of individual perceptual styles in the VE will support development of individualized treatment programs. The virtual environment can be a valuable tool for therapeutic interventions that require adaptation to complex, multimodal environments. PMID:15679951

  12. Set as an Instance of a Real-World Visual-Cognitive Task

    ERIC Educational Resources Information Center

    Nyamsuren, Enkhbold; Taatgen, Niels A.

    2013-01-01

    Complex problem solving is often an integration of perceptual processing and deliberate planning. But what balances these two processes, and how do novices differ from experts? We investigate the interplay between these two in the game of SET. This article investigates how people combine bottom-up visual processes and top-down planning to succeed…

  13. Intact Visual Discrimination of Complex and Feature-Ambiguous Stimuli in the Absence of Perirhinal Cortex

    ERIC Educational Resources Information Center

    Squire, Larry R.; Levy, Daniel A.; Shrager, Yael

    2005-01-01

    The perirhinal cortex is known to be important for memory, but there has recently been interest in the possibility that it might also be involved in visual perceptual functions. In four experiments, we assessed visual discrimination ability and visual discrimination learning in severely amnesic patients with large medial temporal lobe lesions that…

  14. Formation of Partially and Fully Elaborated Generalized Equivalence Classes

    ERIC Educational Resources Information Center

    Fields, Lanny; Moss, Patricia

    2008-01-01

    Most complex categories observed in real-world settings consist of perceptually disparate stimuli, such as a picture of a person's face, the person's name as written, and the same name as heard, as well as dimensional variants of some or all of these stimuli. The stimuli function as members of a single partially or fully elaborated generalized…

  15. Slow Perceptual Processing at the Core of Developmental Dyslexia: A Parameter-Based Assessment of Visual Attention

    ERIC Educational Resources Information Center

    Stenneken, Prisca; Egetemeir, Johanna; Schulte-Korne, Gerd; Muller, Hermann J.; Schneider, Werner X.; Finke, Kathrin

    2011-01-01

    The cognitive causes as well as the neurological and genetic basis of developmental dyslexia, a complex disorder of written language acquisition, are intensely discussed with regard to multiple-deficit models. Accumulating evidence has revealed dyslexics' impairments in a variety of tasks requiring visual attention. The heterogeneity of these…

  16. Project Success for the SLD Child, Motor-Perception Activities.

    ERIC Educational Resources Information Center

    Wayne - Carroll Public Schools, Wayne, NE.

    Presented is a curriculum guide for a perceptual motor program which was developed by Project Success (Nebraska) through a Title III grant for language learning disabled elementary level students in kindergarten through grade 3. The program is said to be arranged in a hierarchy of skills ranging from simple to complex and to be written so that the…

  17. Understanding Perceptual Differences; An Exploration of Neurological-Perceptual Roots of Learning Disabilities with Suggestions for Diagnosis and Treatment.

    ERIC Educational Resources Information Center

    Monroe, George E.

    In exploring the bases of learning disabilities, the following areas are considered: a working definition of perceptual handicaps; the relationship of perceptual handicaps to IQ; diagnosing perceptual handicaps; effective learning experiences for the perceptually handicapped child; and recommendations for developing new curricula. The appendixes…

  18. Representation and disconnection in imaginal neglect.

    PubMed

    Rode, G; Cotton, F; Revol, P; Jacquin-Courtois, S; Rossetti, Y; Bartolomeo, P

    2010-08-01

    Patients with neglect fail to detect, orient, or respond to stimuli from a spatially confined region, usually on their left side. Often, the presence of perceptual input increases left omissions, while sensory deprivation decreases them, possibly by removing attention-catching right-sided stimuli (Bartolomeo, 2007). However, such an influence of visual deprivation on representational neglect was not observed in patients while they were imagining a map of France (Rode et al., 2007). Therefore, these patients with imaginal neglect either failed to generate the left side of mental images (Bisiach & Luzzatti, 1978), or suffered from a co-occurrence of deficits in automatic (bottom-up) and voluntary (top-down) orienting of attention. However, in Rode et al.'s experiment, visual input was not directly relevant to the task; moreover, distraction from visual input might primarily manifest itself when representation guides somatomotor actions, beyond those involved in the generation and mental exploration of an internal map (Thomas, 1999). To explore these possibilities, we asked a patient with right hemisphere damage, R.D., to explore visual and imagined versions of a map of France in three conditions: (1) 'imagine the map in your mind' (imaginal); (2) 'describe a real map' (visual); and (3) 'list the names of French towns' (propositional). For the imaginal and visual conditions, verbal and manual pointing responses were collected; the task was also given before and after mental rotation of the map by 180 degrees. R.D. mentioned more towns on the right side of the map in the imaginal and visual conditions, but showed no representational deficit in the propositional condition. 
The rightward inner exploration bias in the imaginal and visual conditions was similar in magnitude and was not influenced by mental rotation or response type (verbal responses or manual pointing to locations on a map), thus suggesting that the representational deficit was robust and independent of perceptual input in R.D. Structural and diffusion MRI demonstrated damage to several white matter tracts in the right hemisphere and to the splenium of corpus callosum. A second right-brain damaged patient (P.P.), who showed signs of visual but not imaginal neglect, had damage to the same intra-hemispheric tracts, but the callosal connections were spared. Imaginal neglect in R.D. may result from fronto-parietal dysfunction impairing orientation towards left-sided items and posterior callosal disconnection preventing the symmetrical processing of spatial information from long-term memory. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  19. Computational model for perception of objects and motions.

    PubMed

    Yang, WenLu; Zhang, LiQing; Ma, LiBo

    2008-06-01

    Perception of objects and motions in the visual scene is one of the basic problems in the visual system. There exist 'What' and 'Where' pathways in the superior visual cortex, starting from the simple cells in the primary visual cortex. The former is able to perceive objects such as forms, color, and texture, and the latter perceives 'where', for example, velocity and direction of spatial movement of objects. This paper explores brain-like computational architectures of visual information processing. We propose a visual perceptual model and computational mechanism for training the perceptual model. The computational model is a three-layer network. The first layer is the input layer which is used to receive the stimuli from natural environments. The second layer is designed for representing the internal neural information. The connections between the first layer and the second layer, called the receptive fields of neurons, are self-adaptively learned based on the principle of sparse neural representation. To this end, we introduce Kullback-Leibler divergence as the measure of independence between neural responses and derive the learning algorithm based on minimizing the cost function. The proposed algorithm is applied to train the basis functions, namely receptive fields, which are localized, oriented, and bandpassed. The resultant receptive fields of neurons in the second layer have characteristics resembling those of simple cells in the primary visual cortex. Based on these basis functions, we further construct the third layer for perception of what and where in the superior visual cortex. The proposed model is able to perceive objects and their motions with a high accuracy and strong robustness against additive noise. Computer simulation results in the final section show the feasibility of the proposed perceptual model and high efficiency of the learning algorithm.
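
    A minimal sparse-coding update in the spirit of this model can be sketched as follows. This is an assumption-laden illustration: it optimizes a plain reconstruction-plus-sparsity objective rather than the paper's KL-divergence-based cost, and all names and parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_code_step(X, B, lam=0.1, lr=0.01):
    """One alternating update: infer sparse codes A, then adjust basis B.
    X: (n_patches, dim) input patches; B: (n_basis, dim) basis functions."""
    # Inference: ridge solution for ||X - A B||^2 + lam ||A||^2 ...
    A = X @ B.T @ np.linalg.inv(B @ B.T + lam * np.eye(B.shape[0]))
    # ... followed by soft-thresholding to enforce sparsity.
    A = np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)
    # Learning: gradient step on the reconstruction error, then renormalize
    # each basis row so the basis norms stay bounded.
    B += lr * A.T @ (X - A @ B)
    B /= np.linalg.norm(B, axis=1, keepdims=True) + 1e-12
    return A, B

X = rng.normal(size=(200, 16))   # stand-in for whitened image patches
B = rng.normal(size=(8, 16))
for _ in range(50):
    A, B = sparse_code_step(X, B)
```

    Trained on whitened natural-image patches rather than random noise, updates of this kind are known to yield localized, oriented, bandpass basis functions, which is the property the abstract reports for its second-layer receptive fields.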

  20. Neural responses in the macaque v1 to bar stimuli with various lengths presented on the blind spot.

    PubMed

    Matsumoto, Masayuki; Komatsu, Hidehiko

    2005-05-01

    Although there is no retinal input within the blind spot, it is filled with the same visual attributes as its surround. Earlier studies showed that neural responses are evoked at the retinotopic representation of the blind spot in the primary visual cortex (V1) when perceptual filling-in of a surface or completion of a bar occurs. To determine whether these neural responses correlate with perception, we recorded from V1 neurons whose receptive fields overlapped the blind spot. Bar stimuli of various lengths were presented at the blind spots of monkeys while they performed a fixation task. One end of the bar was fixed at a position outside the blind spot, and the position of the other end was varied. Perceived bar length was measured using a similar set of bar stimuli in human subjects. As long as one end of the bar was inside the blind spot, the perceived bar length remained constant, and when the bar exceeded the blind spot, perceptual completion occurred, and the perceived bar length increased substantially. Some V1 neurons of the monkey exhibited a significant increase in their activity when the bar exceeded the blind spot, even though the amount of retinal stimulation increased only slightly. These response increases coincided with perceptual completion observed in human subjects, were much larger than would be expected from simple spatial summation, and could not be explained by contextual modulation. We conclude that the completed bar appearing on the part of the receptive field embedded within the blind spot gave rise to the observed increase in neuronal activity.
