Sample records for motion perception task

  1. Impaired visual recognition of biological motion in schizophrenia.

    PubMed

    Kim, Jejoong; Doop, Mikisha L; Blake, Randolph; Park, Sohee

    2005-09-15

    Motion perception deficits have been suggested to be an important feature of schizophrenia but the behavioral consequences of such deficits are unknown. Biological motion refers to the movements generated by living beings. The human visual system rapidly and effortlessly detects and extracts socially relevant information from biological motion. A deficit in biological motion perception may have significant consequences for detecting and interpreting social information. Schizophrenia patients and matched healthy controls were tested on two visual tasks: recognition of human activity portrayed in point-light animations (biological motion task) and a perceptual control task involving detection of a grouped figure against the background noise (global-form task). Both tasks required detection of a global form against background noise but only the biological motion task required the extraction of motion-related information. Schizophrenia patients performed as well as the controls in the global-form task, but were significantly impaired on the biological motion task. In addition, deficits in biological motion perception correlated with impaired social functioning as measured by the Zigler social competence scale [Zigler, E., Levine, J. (1981). Premorbid competence in schizophrenia: what is being measured? Journal of Consulting and Clinical Psychology, 49, 96-105.]. The deficit in biological motion processing, which may be related to the previously documented deficit in global motion processing, could contribute to abnormal social functioning in schizophrenia.

  2. Video-Based Method of Quantifying Performance and Instrument Motion During Simulated Phonosurgery

    PubMed Central

    Conroy, Ellen; Surender, Ketan; Geng, Zhixian; Chen, Ting; Dailey, Seth; Jiang, Jack

    2015-01-01

    Objectives/Hypothesis: To investigate the use of the Video-Based Phonomicrosurgery Instrument Tracking System to collect instrument position data during simulated phonomicrosurgery and calculate motion metrics using these data. We used this system to determine if novice subject motion metrics improved over 1 week of training. Study Design: Prospective cohort study. Methods: Ten subjects performed simulated surgical tasks once per day for 5 days. Instrument position data were collected and used to compute motion metrics (path length, depth perception, and motion smoothness). Data were analyzed to determine if motion metrics improved with practice time. Task outcome was also determined each day, and relationships between task outcome and motion metrics were used to evaluate the validity of motion metrics as indicators of surgical performance. Results: Significant decreases over time were observed for path length (P <.001), depth perception (P <.001), and task outcome (P <.001). No significant change was observed for motion smoothness. Significant relationships were observed between task outcome and path length (P <.001), depth perception (P <.001), and motion smoothness (P <.001). Conclusions: Our system can estimate instrument trajectory and provide quantitative descriptions of surgical performance. It may be useful for evaluating phonomicrosurgery performance. Path length and depth perception may be particularly useful indicators. PMID:24737286
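
    The motion metrics listed above (path length, depth perception, motion smoothness) are computed from sampled 3D instrument positions. The Python sketch below illustrates one plausible set of definitions, assuming path length as summed frame-to-frame displacement, depth perception as accumulated motion along the depth (z) axis, and smoothness as mean squared jerk; these assumed formulas are for orientation only and are not taken from the study.

    ```python
    import numpy as np

    def motion_metrics(positions, dt):
        """Illustrative metrics from an (N, 3) array of instrument positions
        sampled every dt seconds. These definitions are assumptions, not the
        exact formulas used by the tracking system described above."""
        positions = np.asarray(positions, dtype=float)
        steps = np.diff(positions, axis=0)                  # frame-to-frame displacement
        path_length = np.linalg.norm(steps, axis=1).sum()   # total distance travelled
        depth_perception = np.abs(steps[:, 2]).sum()        # motion along the depth (z) axis
        velocity = steps / dt
        jerk = np.diff(velocity, n=2, axis=0) / dt**2       # third derivative of position
        smoothness = (np.linalg.norm(jerk, axis=1) ** 2).mean()  # lower = smoother motion
        return path_length, depth_perception, smoothness

    # Example with fabricated data: 5 s of positions sampled at 100 Hz.
    rng = np.random.default_rng(0)
    trajectory = np.cumsum(rng.normal(0.0, 1e-3, size=(500, 3)), axis=0)
    print(motion_metrics(trajectory, dt=0.01))
    ```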

  3. Deficient motion-defined and texture-defined figure-ground segregation in amblyopic children.

    PubMed

    Wang, Jane; Ho, Cindy S; Giaschi, Deborah E

    2007-01-01

    Motion-defined form deficits in the fellow eye and the amblyopic eye of children with amblyopia implicate possible direction-selective motion processing or static figure-ground segregation deficits. Deficient motion-defined form perception in the fellow eye of amblyopic children may not be fully accounted for by a general motion processing deficit. This study investigates the contribution of figure-ground segregation deficits to the motion-defined form perception deficits in amblyopia. Performances of 6 amblyopic children (5 anisometropic, 1 anisostrabismic) and 32 control children with normal vision were assessed on motion-defined form, texture-defined form, and global motion tasks. Performance on motion-defined and texture-defined form tasks was significantly worse in amblyopic children than in control children. Performance on global motion tasks was not significantly different between the 2 groups. Faulty figure-ground segregation mechanisms are likely responsible for the observed motion-defined form perception deficits in amblyopia.

  4. Behavioural evidence for distinct mechanisms related to global and biological motion perception.

    PubMed

    Miller, Louisa; Agnew, Hannah C; Pilz, Karin S

    2018-01-01

    The perception of human motion is a vital ability in our daily lives. Human movement recognition is often studied using point-light stimuli in which dots represent the joints of a moving person. Depending on task and stimulus, the local motion of the single dots and the global form of the stimulus can be used to discriminate point-light stimuli. Previous studies often measured motion coherence for global motion perception and contrasted it with performance in biological motion perception to assess whether difficulties in biological motion processing are related to more general difficulties with motion processing. However, it is so far unknown how performance in global motion tasks relates to the ability to use local motion or global form to discriminate point-light stimuli. Here, we investigated this relationship in more detail. In Experiment 1, we measured participants' ability to discriminate the facing direction of point-light stimuli that contained primarily local motion, global form, or both. In Experiment 2, we embedded point-light stimuli in noise to assess whether previously found relationships in task performance are related to the ability to detect signal in noise. In both experiments, we also assessed motion coherence thresholds from random-dot kinematograms. We found relationships between performances for the different biological motion stimuli, but performance for global and biological motion perception was unrelated. These results are in accordance with previous neuroimaging studies that highlighted distinct areas for global and biological motion perception in the dorsal pathway, and indicate that results regarding the relationship between global motion perception and biological motion perception need to be interpreted with caution. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. Motion perception tasks as potential correlates to driving difficulty in the elderly

    NASA Astrophysics Data System (ADS)

    Raghuram, A.; Lakshminarayanan, V.

    2006-09-01

    Changes in demographics indicate that the population older than 65 is on the rise because of the aging of the ‘baby boom’ generation. This aging trend and driving-related accident statistics reveal the need for procedures and tests that would assess the driving ability of older adults and predict whether they would be safe or unsafe drivers. The literature shows that an attention-based test called the useful field of view (UFOV) was a more significant predictor of accident rates than any other visual function test. The present study evaluates a qualitative trend in using motion perception tasks as potential visual-perceptual correlates for screening elderly drivers who might have difficulty driving. Data were collected from 15 older subjects with a mean age of 71. Motion perception tasks included speed discrimination with radial and lamellar motion, time to collision using prediction motion, and estimation of heading direction. A motion index score was calculated that was indicative of performance on all of the above-mentioned motion tasks. Visual attention was assessed using the UFOV. A driving habit questionnaire was also administered for a self-report of driving difficulties and accident rates. A qualitative trend based on frequency distributions shows that thresholds on the motion perception tasks are successful in identifying subjects who reported having had difficulty in certain aspects of driving and having had accidents. The correlation between UFOV and motion index scores was not significant, indicating that the two paradigms probably tap different aspects of visual information processing that are crucial to driving behaviour. UFOV and motion perception tasks together may be a better predictor for identifying at-risk or safe drivers than either one alone.

  6. Schematic and realistic biological motion identification in children with high-functioning autism spectrum disorder

    PubMed Central

    Wright, Kristyn; Kelley, Elizabeth; Poulin-Dubois, Diane

    2014-01-01

    Research investigating biological motion perception in children with ASD has revealed conflicting findings concerning whether impairments in biological motion perception exist. The current study investigated how children with high-functioning ASD (HF-ASD) performed on two tasks of biological motion identification: a novel schematic motion identification task and a point-light biological motion identification task. Twenty-two HF-ASD children were matched with 21 TD children on gender, non-verbal mental age, and chronological age (M = 6.72 years). On both tasks, HF-ASD children performed with accuracy similar to that of TD children. Across groups, children performed better on animate than on inanimate trials of both tasks. These findings suggest that HF-ASD children's identification of both realistic and schematic biological motion is unimpaired. PMID:25395988

  7. Comparison of two Simon tasks: neuronal correlates of conflict resolution based on coherent motion perception.

    PubMed

    Wittfoth, Matthias; Buck, Daniela; Fahle, Manfred; Herrmann, Manfred

    2006-08-15

    The present study aimed at characterizing the neural correlates of conflict resolution in two variations of the Simon effect. We introduced two different Simon tasks where subjects had to identify shapes on the basis of form-from-motion perception (FFMo) within a randomly moving dot field, while (1) motion direction (motion-based Simon task) or (2) stimulus location (location-based Simon task) had to be ignored. Behavioral data revealed that both types of Simon tasks induced highly significant interference effects. Using event-related fMRI, we could demonstrate that both tasks share a common cluster of activated brain regions during conflict resolution (pre-supplementary motor area (pre-SMA), superior parietal lobule (SPL), and cuneus) but also show task-specific activation patterns (left superior temporal cortex in the motion-based, and the left fusiform gyrus in the location-based Simon task). Although motion-based and location-based Simon tasks are conceptually very similar (Type 3 stimulus-response ensembles according to the taxonomy of [Kornblum, S., Stevens, G. (2002). Sequential effects of dimensional overlap: findings and issues. In: Prinz, W., Hommel, B. (Eds.), Common mechanism in perception and action. Oxford University Press, Oxford, pp. 9-54]), conflict resolution in both tasks results in the activation of different task-specific regions, probably related to the different sources of task-irrelevant information. Furthermore, the present data give evidence that those task-specific regions are most likely to detect the relationship between task-relevant and task-irrelevant information.

  8. Motion direction discrimination training reduces perceived motion repulsion.

    PubMed

    Jia, Ke; Li, Sheng

    2017-04-01

    Participants often exaggerate the perceived angular separation between two simultaneously presented motion stimuli, which is referred to as motion repulsion. The overestimation helps participants differentiate between the two superimposed motion directions, yet it causes the impairment of direction perception. Since direction perception can be refined through perceptual training, we here attempted to investigate whether the training of a direction discrimination task changes the amount of motion repulsion. Our results showed a direction-specific learning effect, which was accompanied by a reduced amount of motion repulsion both for the trained and the untrained directions. The reduction of the motion repulsion disappeared when the participants were trained on a luminance discrimination task (control experiment 1) or a speed discrimination task (control experiment 2), ruling out any possible interpretation in terms of adaptation or training-induced attentional bias. Furthermore, training with a direction discrimination task along a direction 150° away from both directions in the transparent stimulus (control experiment 3) also had little effect on the amount of motion repulsion, ruling out the contribution of task learning. The changed motion repulsion observed in the main experiment was consistent with the prediction of the recurrent model of perceptual learning. Therefore, our findings demonstrate that training in direction discrimination can benefit the precise direction perception of the transparent stimulus and provide new evidence for the recurrent model of perceptual learning.

  9. A Role for Mouse Primary Visual Cortex in Motion Perception.

    PubMed

    Marques, Tiago; Summers, Mathew T; Fioreze, Gabriela; Fridman, Marina; Dias, Rodrigo F; Feller, Marla B; Petreanu, Leopoldo

    2018-06-04

    Visual motion is an ethologically important stimulus throughout the animal kingdom. In primates, motion perception relies on specific higher-order cortical regions. Although mouse primary visual cortex (V1) and higher-order visual areas show direction-selective (DS) responses, their role in motion perception remains unknown. Here, we tested whether V1 is involved in motion perception in mice. We developed a head-fixed discrimination task in which mice must report their perceived direction of motion from random dot kinematograms (RDKs). After training, mice made around 90% correct choices for stimuli with high coherence and performed significantly above chance for 16% coherent RDKs. Accuracy increased with both stimulus duration and visual field coverage of the stimulus, suggesting that mice in this task integrate motion information in time and space. Retinal recordings showed that thalamically projecting On-Off DS ganglion cells display DS responses when stimulated with RDKs. Two-photon calcium imaging revealed that neurons in layer (L) 2/3 of V1 display strong DS tuning in response to this stimulus. Thus, RDKs engage motion-sensitive retinal circuits as well as downstream visual cortical areas. Contralateral V1 activity played a key role in this motion direction discrimination task because its reversible inactivation with muscimol led to a significant reduction in performance. Neurometric-psychometric comparisons showed that an ideal observer could solve the task with the information encoded in DS L2/3 neurons. Motion discrimination of RDKs presents a powerful behavioral tool for dissecting the role of retino-forebrain circuits in motion processing. Copyright © 2018 Elsevier Ltd. All rights reserved.
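
    As a rough illustration of the random dot kinematogram (RDK) stimuli referenced above, the sketch below advances a field of dots in which a chosen fraction (the coherence) moves in a common direction while the remaining dots move in random directions. Dot count, speed, aperture, and frame count are arbitrary assumptions rather than the parameters used in the study.

    ```python
    import numpy as np

    def rdk_step(xy, coherence, direction_deg, speed, rng):
        """Advance an (N, 2) array of dot positions by one frame.

        A `coherence` fraction of dots moves in `direction_deg`; the rest move
        in random directions at the same speed. All parameters are illustrative.
        """
        n = len(xy)
        n_signal = int(round(coherence * n))
        signal = rng.permutation(n) < n_signal        # which dots carry the signal this frame
        angles = np.where(signal,
                          np.deg2rad(direction_deg),
                          rng.uniform(0.0, 2.0 * np.pi, size=n))
        xy = xy + speed * np.column_stack([np.cos(angles), np.sin(angles)])
        return np.mod(xy, 1.0)                        # wrap dots within a unit-square aperture

    rng = np.random.default_rng(0)
    dots = rng.uniform(0.0, 1.0, size=(100, 2))
    for _ in range(60):                               # 60 frames of 16% coherent rightward motion
        dots = rdk_step(dots, coherence=0.16, direction_deg=0.0, speed=0.01, rng=rng)
    ```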

  10. High-level, but not low-level, motion perception is impaired in patients with schizophrenia.

    PubMed

    Kandil, Farid I; Pedersen, Anya; Wehnes, Jana; Ohrmann, Patricia

    2013-01-01

    Smooth pursuit eye movements are compromised in patients with schizophrenia and their first-degree relatives. Although research has demonstrated that the motor components of smooth pursuit eye movements are intact, motion perception has been shown to be impaired. In particular, studies have consistently revealed deficits in performance on tasks specific to the high-order motion area V5 (middle temporal area, MT) in patients with schizophrenia. In contrast, data from low-level motion detectors in the primary visual cortex (V1) have been inconsistent. To differentiate between low-level and high-level visual motion processing, we applied a temporal-order judgment task for motion events and a motion-defined figure-ground segregation task in patients with schizophrenia and healthy controls. Successful judgments in both tasks rely on the same low-level motion detectors in V1; however, the first task is further processed in the higher-order motion area MT in the magnocellular (dorsal) pathway, whereas the second task requires subsequent computations in the parvocellular (ventral) pathway in visual area V4 and the inferotemporal cortex (IT). These latter structures are thought to be intact in schizophrenia. Patients with schizophrenia revealed significantly impaired temporal resolution on the motion-based temporal-order judgment task but only mild impairment in the motion-based segregation task. These results imply that low-level motion detection in V1 is not, or is only slightly, compromised; furthermore, our data constrain the locus of the well-known deficit in motion detection to areas beyond the primary visual cortex.

  11. Perception of Biological Motion in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Freitag, Christine M.; Konrad, Carsten; Haberlen, Melanie; Kleser, Christina; von Gontard, Alexander; Reith, Wolfgang; Troje, Nikolaus F.; Krick, Christoph

    2008-01-01

    In individuals with autism or autism spectrum disorder (ASD), conflicting results have been reported regarding the processing of biological motion tasks. As biological motion perception and recognition might be related to impaired imitation, gross motor skills, and autism-specific psychopathology in individuals with ASD, we performed a functional…

  12. The Coordination Dynamics of Observational Learning: Relative Motion Direction and Relative Phase as Informational Content Linking Action-Perception to Action-Production.

    PubMed

    Buchanan, John J

    2016-01-01

    The primary goal of this chapter is to merge the visual perception perspective of observational learning and the coordination dynamics theory of pattern formation in perception and action. Emphasis is placed on identifying movement features that constrain and inform action-perception and action-production processes. Two sources of visual information are examined, relative motion direction and relative phase. The visual perception perspective states that the topological features of relative motion between limbs and joints remain invariant across an actor's motion and are therefore available for pickup by an observer. Relative phase has been put forth as an informational variable that links perception to action within the coordination dynamics theory. A primary assumption of the coordination dynamics approach is that environmental information is meaningful only in terms of the behavior it modifies. Across a series of single-limb and bimanual tasks, it is shown that the relative motion and relative phase between limbs and joints are picked up through visual processes and support observational learning of motor skills. Moreover, internal estimations of motor skill proficiency and competency are linked to the informational content found in relative motion and relative phase. Thus, the chapter links action to perception and vice versa and also links cognitive evaluations to the coordination dynamics that support action-perception and action-production processes.

  13. The role of human ventral visual cortex in motion perception

    PubMed Central

    Saygin, Ayse P.; Lorenzi, Lauren J.; Egan, Ryan; Rees, Geraint; Behrmann, Marlene

    2013-01-01

    Visual motion perception is fundamental to many aspects of visual perception. Visual motion perception has long been associated with the dorsal (parietal) pathway and the involvement of the ventral ‘form’ (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impaired motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and complex structure-from-motion, for a wide range of motion speeds, all centrally displayed, was assessed in five patients with a circumscribed lesion to either the right or left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, for both slow and for fast speeds, and this held true independent of the integrity of areas MT/V5, V3A or parietal regions. In contrast with the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary even for the perception of non-form motion. PMID:23983030

  14. Perception of Self-Motion and Regulation of Walking Speed in Young-Old Adults.

    PubMed

    Lalonde-Parsi, Marie-Jasmine; Lamontagne, Anouk

    2015-07-01

    Whether a reduced perception of self-motion contributes to poor walking speed adaptations in older adults is unknown. In this study, speed discrimination thresholds (perceptual task) and walking speed adaptations (walking task) were compared between young (19-27 years) and young-old individuals (63-74 years), and the relationship between the performance on the two tasks was examined. Participants were evaluated while viewing a virtual corridor in a helmet-mounted display. Speed discrimination thresholds were determined using a staircase procedure. Walking speed modulation was assessed on a self-paced treadmill while exposed to different self-motion speeds ranging from 0.25 to 2 times the participants' comfortable speed. For each speed, participants were instructed to match the self-motion speed described by the moving corridor. On the walking task, participants displayed smaller walking speed errors at comfortable walking speeds compared with slower or faster speeds. The young-old adults presented larger speed discrimination thresholds (perceptual experiment) and larger walking speed errors (walking experiment) compared with young adults. Larger walking speed errors were associated with higher discrimination thresholds. The enhanced performance on the walking task at comfortable speed suggests that intersensory calibration processes are influenced by experience, and hence optimized for frequently encountered conditions. The altered performance of the young-old adults on the perceptual and walking tasks, as well as the relationship observed between the two tasks, suggests that a poor perception of visual motion information may contribute to the poor walking speed adaptations that arise with aging.
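
    The speed discrimination thresholds above were obtained with a staircase procedure; the sketch below shows a generic two-down/one-up staircase of the kind commonly used for such thresholds, run against a simulated observer. Step sizes, reversal counts, and the observer model are illustrative assumptions, not the study's protocol.

    ```python
    import random

    def run_staircase(respond, start=0.30, step=0.05, min_step=0.01, n_reversals=8):
        """Generic 2-down/1-up staircase, converging near 70.7% correct.

        `respond(delta)` returns True for a correct response at speed
        difference `delta`. All parameters here are illustrative.
        """
        delta, correct_in_row, direction = start, 0, 0
        reversals = []
        while len(reversals) < n_reversals:
            if respond(delta):
                correct_in_row += 1
                if correct_in_row == 2:                  # two correct in a row -> make it harder
                    correct_in_row = 0
                    if direction == +1:                  # direction change = reversal
                        reversals.append(delta)
                        step = max(step / 2, min_step)
                    direction = -1
                    delta = max(delta - step, 0.0)
            else:                                        # any error -> make it easier
                correct_in_row = 0
                if direction == -1:
                    reversals.append(delta)
                    step = max(step / 2, min_step)
                direction = +1
                delta += step
        return sum(reversals[-6:]) / len(reversals[-6:])  # threshold: mean of last reversals

    # Simulated observer whose accuracy grows with the speed difference delta.
    print(run_staircase(lambda d: random.random() < 0.5 + min(0.5, 2.0 * d)))
    ```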

  15. Does language guide event perception? Evidence from eye movements

    PubMed Central

    Papafragou, Anna; Hulbert, Justin; Trueswell, John

    2008-01-01

    Languages differ in how they encode motion. When describing bounded motion, English speakers typically use verbs that convey information about manner (e.g., slide, skip, walk) rather than path (e.g., approach, ascend), whereas Greek speakers do the opposite. We investigated whether this strong cross-language difference influences how people allocate attention during motion perception. We compared eye movements from Greek and English speakers as they viewed motion events while (a) preparing verbal descriptions, or (b) memorizing the events. During the verbal description task, speakers’ eyes rapidly focused on the event components typically encoded in their native language, generating significant cross-language differences even during the first second of motion onset. However, when freely inspecting ongoing events, as in the memorization task, people allocated attention similarly regardless of the language they speak. Differences between language groups arose only after the motion stopped, such that participants spontaneously studied those aspects of the scene that their language does not routinely encode in verbs. These findings offer a novel perspective on the relation between language and perceptual/cognitive processes. They indicate that attention allocation during event perception is not affected by the perceiver’s native language; effects of language arise only when linguistic forms are recruited to achieve the task, such as when committing facts to memory. PMID:18395705

  16. People can understand descriptions of motion without activating visual motion brain regions

    PubMed Central

    Dravida, Swethasri; Saxe, Rebecca; Bedny, Marina

    2013-01-01

    What is the relationship between our perceptual and linguistic neural representations of the same event? We approached this question by asking whether visual perception of motion and understanding linguistic depictions of motion rely on the same neural architecture. The same group of participants took part in two language tasks and one visual task. In task 1, participants made semantic similarity judgments with high motion (e.g., “to bounce”) and low motion (e.g., “to look”) words. In task 2, participants made plausibility judgments for passages describing movement (“A centaur hurled a spear … ”) or cognitive events (“A gentleman loved cheese …”). Task 3 was a visual motion localizer in which participants viewed animations of point-light walkers, randomly moving dots, and stationary dots changing in luminance. Based on the visual motion localizer we identified classic visual motion areas of the temporal (MT/MST and STS) and parietal cortex (inferior and superior parietal lobules). We find that these visual cortical areas are largely distinct from neural responses to linguistic depictions of motion. Motion words did not activate any part of the visual motion system. Motion passages produced a small response in the right superior parietal lobule, but none of the temporal motion regions. These results suggest that (1) as compared to words, rich language stimuli such as passages are more likely to evoke mental imagery and more likely to affect perceptual circuits and (2) effects of language on the visual system are more likely in secondary perceptual areas as compared to early sensory areas. We conclude that language and visual perception constitute distinct but interacting systems. PMID:24009592

  17. Neural correlates of coherent and biological motion perception in autism.

    PubMed

    Koldewyn, Kami; Whitney, David; Rivera, Susan M

    2011-09-01

    Recent evidence suggests those with autism may be generally impaired in visual motion perception. To examine this, we investigated both coherent and biological motion processing in adolescents with autism employing both psychophysical and fMRI methods. Those with autism performed as well as matched controls during coherent motion perception but had significantly higher thresholds for biological motion perception. The autism group showed reduced posterior Superior Temporal Sulcus (pSTS), parietal and frontal activity during a biological motion task while showing similar levels of activity in MT+/V5 during both coherent and biological motion trials. Activity in MT+/V5 was predictive of individual coherent motion thresholds in both groups. Activity in dorsolateral prefrontal cortex (DLPFC) and pSTS was predictive of biological motion thresholds in control participants but not in those with autism. Notably, however, activity in DLPFC was negatively related to autism symptom severity. These results suggest that impairments in higher-order social or attentional networks may underlie visual motion deficits observed in autism. © 2011 Blackwell Publishing Ltd.

  18. Neural correlates of coherent and biological motion perception in autism

    PubMed Central

    Koldewyn, Kami; Whitney, David; Rivera, Susan M.

    2011-01-01

    Recent evidence suggests those with autism may be generally impaired in visual motion perception. To examine this, we investigated both coherent and biological motion processing in adolescents with autism employing both psychophysical and fMRI methods. Those with autism performed as well as matched controls during coherent motion perception but had significantly higher thresholds for biological motion perception. The autism group showed reduced posterior Superior Temporal Sulcus (pSTS), parietal and frontal activity during a biological motion task while showing similar levels of activity in MT+/V5 during both coherent and biological motion trials. Activity in MT+/V5 was predictive of individual coherent motion thresholds in both groups. Activity in dorsolateral prefrontal cortex (DLPFC) and pSTS was predictive of biological motion thresholds in control participants but not in those with autism. Notably, however, activity in DLPFC was negatively related to autism symptom severity. These results suggest that impairments in higher-order social or attentional networks may underlie visual motion deficits observed in autism. PMID:21884323

  19. Spatial perception predicts laparoscopic skills on virtual reality laparoscopy simulator.

    PubMed

    Hassan, I; Gerdes, B; Koller, M; Dick, B; Hellwig, D; Rothmund, M; Zielke, A

    2007-06-01

    This study evaluates the influence of visual-spatial perception on the laparoscopic performance of novices with a virtual reality simulator (LapSim(R)). Twenty-four novices completed standardized tests of visual-spatial perception (Lameris Toegepaste Natuurwetenschappelijk Onderzoek [TNO] Test(R) and Stumpf-Fay Cube Perspectives Test(R)), and laparoscopic skills were assessed objectively while the novices performed 1-h practice sessions on the LapSim(R) comprising coordination, cutting, and clip application tasks. Outcome variables included time to complete the tasks, economy of motion, and total error scores. The degree of visual-spatial perception correlated significantly with laparoscopic performance scores on the LapSim(R). Participants with a high degree of spatial perception (Group A) performed the tasks faster than those (Group B) who had a low degree of spatial perception (p = 0.001). Individuals with a high degree of spatial perception also scored better for economy of motion (p = 0.021), tissue damage (p = 0.009), and total error (p = 0.007). Among novices, visual-spatial perception is associated with manual skills performed on a virtual reality simulator. This result may be important for educators developing adequate training programs that can be individually adapted.

  20. Perception of Biological Motion in Schizophrenia and Healthy Individuals: A Behavioral and fMRI Study

    PubMed Central

    Kim, Jejoong; Park, Sohee; Blake, Randolph

    2011-01-01

    Background: Anomalous visual perception is a common feature of schizophrenia plausibly associated with impaired social cognition that, in turn, could affect social behavior. Past research suggests impairment in biological motion perception in schizophrenia. Behavioral and functional magnetic resonance imaging (fMRI) experiments were conducted to verify the existence of this impairment, to clarify its perceptual basis, and to identify accompanying neural concomitants of those deficits. Methodology/Findings: In Experiment 1, we measured ability to detect biological motion portrayed by point-light animations embedded within masking noise. Experiment 2 measured discrimination accuracy for pairs of point-light biological motion sequences differing in the degree of perturbation of the kinematics portrayed in those sequences. Experiment 3 measured BOLD signals using event-related fMRI during a biological motion categorization task. Compared to healthy individuals, schizophrenia patients performed significantly worse on both the detection (Experiment 1) and discrimination (Experiment 2) tasks. Consistent with the behavioral results, the fMRI study revealed that healthy individuals exhibited strong activation to biological motion, but not to scrambled motion in the posterior portion of the superior temporal sulcus (STSp). Interestingly, strong STSp activation was also observed for scrambled or partially scrambled motion when the healthy participants perceived it as normal biological motion. On the other hand, STSp activation in schizophrenia patients was not selective to biological or scrambled motion. Conclusion: Schizophrenia is accompanied by difficulties discriminating biological from non-biological motion, and associated with those difficulties are altered patterns of neural responses within brain area STSp. The perceptual deficits exhibited by schizophrenia patients may be an exaggerated manifestation of neural events within STSp associated with perceptual errors made by healthy observers on these same tasks. The present findings fit within the context of theories of delusion involving perceptual and cognitive processes. PMID:21625492

  1. Local and global aspects of biological motion perception in children born at very low birth weight

    PubMed Central

    Williamson, K. E.; Jakobson, L. S.; Saunders, D. R.; Troje, N. F.

    2015-01-01

    Biological motion perception can be assessed using a variety of tasks. In the present study, 8- to 11-year-old children born prematurely at very low birth weight (<1500 g) and matched, full-term controls completed tasks that required the extraction of local motion cues, the ability to perceptually group these cues to extract information about body structure, and the ability to carry out higher order processes required for action recognition and person identification. Preterm children exhibited difficulties in all 4 aspects of biological motion perception. However, intercorrelations between test scores were weak in both full-term and preterm children—a finding that supports the view that these processes are relatively independent. Preterm children also displayed more autistic-like traits than full-term peers. In preterm (but not full-term) children, these traits were negatively correlated with performance in the task requiring structure-from-motion processing (r(30) = −.36, p < .05), but positively correlated with the ability to extract identity (r(30) = .45, p < .05). These findings extend previous reports of vulnerability in systems involved in processing dynamic cues in preterm children and suggest that a core deficit in social perception/cognition may contribute to the development of social and behavioral difficulties even in members of this population who are functioning within the normal range intellectually. The results could inform the development of screening, diagnostic, and intervention tools. PMID:25103588

  2. Impaired Perception of Biological Motion in Parkinson’s Disease

    PubMed Central

    Jaywant, Abhishek; Shiffrar, Maggie; Roy, Serge; Cronin-Golomb, Alice

    2016-01-01

    Objective: We examined biological motion perception in Parkinson’s disease (PD). Biological motion perception is related to one’s own motor function and depends on the integrity of brain areas affected in PD, including posterior superior temporal sulcus. If deficits in biological motion perception exist, they may be specific to perceiving natural/fast walking patterns that individuals with PD can no longer perform, and may correlate with disease-related motor dysfunction. Method: 26 non-demented individuals with PD and 24 control participants viewed videos of point-light walkers and scrambled versions that served as foils, and indicated whether each video depicted a human walking. Point-light walkers varied by gait type (natural, parkinsonian) and speed (0.5, 1.0, 1.5 m/s). Participants also completed control tasks (object motion, coherent motion perception), a contrast sensitivity assessment, and a walking assessment. Results: The PD group demonstrated significantly less sensitivity to biological motion than the control group (p<.001, Cohen’s d=1.22), regardless of stimulus gait type or speed, with a less substantial deficit in object motion perception (p=.02, Cohen’s d=.68). There was no group difference in coherent motion perception. Although individuals with PD had slower walking speed and shorter stride length than control participants, gait parameters did not correlate with biological motion perception. Contrast sensitivity and coherent motion perception also did not correlate with biological motion perception. Conclusion: PD leads to a deficit in perceiving biological motion, which is independent of gait dysfunction and low-level vision changes, and may therefore arise from difficulty perceptually integrating form and motion cues in posterior superior temporal sulcus. PMID:26949927

  3. Relationships of a Circular Singer Arm Gesture to Acoustical and Perceptual Measures of Singing: A Motion Capture Study

    ERIC Educational Resources Information Center

    Brunkan, Melissa C.

    2016-01-01

    The purpose of this study was to validate previous research that suggests using movement in conjunction with singing tasks can affect intonation and perception of the task. Singers (N = 49) were video and audio recorded, using a motion capture system, while singing a phrase from a familiar song, first with no motion, and then while doing a low,…

  4. Central Inhibition Ability Modulates Attention-Induced Motion Blindness

    ERIC Educational Resources Information Center

    Milders, Maarten; Hay, Julia; Sahraie, Arash; Niedeggen, Michael

    2004-01-01

    Impaired motion perception can be induced in normal observers in a rapid serial visual presentation task. Essential for this effect is the presence of motion distractors prior to the motion target, and we proposed that this attention-induced motion blindness results from high-level inhibition produced by the distractors. To investigate this, we…

  5. Self-motion perception in autism is compromised by visual noise but integrated optimally across multiple senses

    PubMed Central

    Zaidel, Adam; Goin-Kochel, Robin P.; Angelaki, Dora E.

    2015-01-01

    Perceptual processing in autism spectrum disorder (ASD) is marked by superior low-level task performance and inferior complex-task performance. This observation has led to theories of defective integration in ASD of local parts into a global percept. Despite mixed experimental results, this notion maintains widespread influence and has also motivated recent theories of defective multisensory integration in ASD. Impaired ASD performance in tasks involving classic random dot visual motion stimuli, corrupted by noise as a means to manipulate task difficulty, is frequently interpreted to support this notion of global integration deficits. By manipulating task difficulty independently of visual stimulus noise, here we test the hypothesis that heightened sensitivity to noise, rather than integration deficits, may characterize ASD. We found that although perception of visual motion through a cloud of dots was unimpaired without noise, the addition of stimulus noise significantly affected adolescents with ASD, more than controls. Strikingly, individuals with ASD demonstrated intact multisensory (visual–vestibular) integration, even in the presence of noise. Additionally, when vestibular motion was paired with pure visual noise, individuals with ASD demonstrated a different strategy than controls, marked by reduced flexibility. This result could be simulated by using attenuated (less reliable) and inflexible (not experience-dependent) Bayesian priors in ASD. These findings question widespread theories of impaired global and multisensory integration in ASD. Rather, they implicate increased sensitivity to sensory noise and less use of prior knowledge in ASD, suggesting increased reliance on incoming sensory information. PMID:25941373
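
    Optimal multisensory integration of the kind tested above is usually formalized as reliability-weighted averaging of the visual and vestibular estimates. The sketch below shows that standard maximum-likelihood formulation with made-up numbers; it is not the authors' model, but it conveys why adding visual noise shifts weight toward the vestibular cue.

    ```python
    import numpy as np

    def optimal_combination(mu_vis, sigma_vis, mu_vest, sigma_vest):
        """Reliability-weighted (maximum-likelihood) cue combination.

        Each cue is weighted by its reliability 1/sigma^2; the combined estimate
        has lower variance than either cue alone. Numbers below are made up.
        """
        r_vis, r_vest = 1.0 / sigma_vis**2, 1.0 / sigma_vest**2
        w_vis = r_vis / (r_vis + r_vest)
        mu_comb = w_vis * mu_vis + (1.0 - w_vis) * mu_vest
        sigma_comb = np.sqrt(1.0 / (r_vis + r_vest))
        return mu_comb, sigma_comb

    # Adding visual noise (larger sigma_vis) shifts the weight toward the vestibular cue.
    print(optimal_combination(mu_vis=5.0, sigma_vis=2.0, mu_vest=0.0, sigma_vest=2.0))
    print(optimal_combination(mu_vis=5.0, sigma_vis=6.0, mu_vest=0.0, sigma_vest=2.0))
    ```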

  6. Motion coherence affects human perception and pursuit similarly.

    PubMed

    Beutter, B R; Stone, L S

    2000-01-01

    Pursuit and perception both require accurate information about the motion of objects. Recovering the motion of objects by integrating the motion of their components is a difficult visual task. Successful integration produces coherent global object motion, while a failure to integrate leaves the incoherent local motions of the components unlinked. We compared the ability of perception and pursuit to perform motion integration by measuring direction judgments and the concomitant eye-movement responses to line-figure parallelograms moving behind stationary rectangular apertures. The apertures were constructed such that only the line segments corresponding to the parallelogram's sides were visible; thus, recovering global motion required the integration of the local segment motion. We investigated several potential motion-integration rules by using stimuli with different object, vector-average, and line-segment terminator-motion directions. We used an oculometric decision rule to directly compare direction discrimination for pursuit and perception. For visible apertures, the percept was a coherent object, and both the pursuit and perceptual performance were close to the object-motion prediction. For invisible apertures, the percept was incoherently moving segments, and both the pursuit and perceptual performance were close to the terminator-motion prediction. Furthermore, both psychometric and oculometric direction thresholds were much higher for invisible apertures than for visible apertures. We constructed a model in which both perception and pursuit are driven by a shared motion-processing stage, with perception having an additional input from an independent static-processing stage. Model simulations were consistent with our perceptual and oculomotor data. Based on these results, we propose the use of pursuit as an objective and continuous measure of perceptual coherence. Our results support the view that pursuit and perception share a common motion-integration stage, perhaps within areas MT or MST.
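
    The oculometric decision rule described above treats each trial's pursuit response as a binary directional choice and fits it with the same psychometric model used for the perceptual reports, so that thresholds can be compared directly. The sketch below shows a generic version of that comparison using a cumulative-Gaussian fit and simulated data; it is an assumed reconstruction, not the authors' analysis code.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def cum_gauss(x, mu, sigma):
        """Cumulative-Gaussian psychometric function: P('rightward' response)."""
        return norm.cdf(x, loc=mu, scale=sigma)

    def direction_threshold(stim_dirs, responses):
        """Fit P(rightward) vs. stimulus direction; sigma serves as the threshold."""
        (mu, sigma), _ = curve_fit(cum_gauss, stim_dirs, responses, p0=[0.0, 5.0])
        return sigma

    # Simulated perceptual reports and pursuit-derived ("oculometric") choices for
    # the same stimulus directions (degrees away from a reference direction).
    rng = np.random.default_rng(1)
    dirs = np.repeat(np.linspace(-10.0, 10.0, 9), 40)
    perceptual = (rng.normal(dirs, 3.0) > 0).astype(float)   # perceptual noise sd = 3 deg
    oculometric = (rng.normal(dirs, 4.0) > 0).astype(float)  # pursuit noise sd = 4 deg
    print(direction_threshold(dirs, perceptual), direction_threshold(dirs, oculometric))
    ```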

  7. Motion coherence affects human perception and pursuit similarly

    NASA Technical Reports Server (NTRS)

    Beutter, B. R.; Stone, L. S.

    2000-01-01

    Pursuit and perception both require accurate information about the motion of objects. Recovering the motion of objects by integrating the motion of their components is a difficult visual task. Successful integration produces coherent global object motion, while a failure to integrate leaves the incoherent local motions of the components unlinked. We compared the ability of perception and pursuit to perform motion integration by measuring direction judgments and the concomitant eye-movement responses to line-figure parallelograms moving behind stationary rectangular apertures. The apertures were constructed such that only the line segments corresponding to the parallelogram's sides were visible; thus, recovering global motion required the integration of the local segment motion. We investigated several potential motion-integration rules by using stimuli with different object, vector-average, and line-segment terminator-motion directions. We used an oculometric decision rule to directly compare direction discrimination for pursuit and perception. For visible apertures, the percept was a coherent object, and both the pursuit and perceptual performance were close to the object-motion prediction. For invisible apertures, the percept was incoherently moving segments, and both the pursuit and perceptual performance were close to the terminator-motion prediction. Furthermore, both psychometric and oculometric direction thresholds were much higher for invisible apertures than for visible apertures. We constructed a model in which both perception and pursuit are driven by a shared motion-processing stage, with perception having an additional input from an independent static-processing stage. Model simulations were consistent with our perceptual and oculomotor data. Based on these results, we propose the use of pursuit as an objective and continuous measure of perceptual coherence. Our results support the view that pursuit and perception share a common motion-integration stage, perhaps within areas MT or MST.

  8. Spared Ability to Perceive Direction of Locomotor Heading and Scene-Relative Object Movement Despite Inability to Perceive Relative Motion

    PubMed Central

    Vaina, Lucia M.; Buonanno, Ferdinando; Rushton, Simon K.

    2014-01-01

    Background: All contemporary models of perception of locomotor heading from optic flow (the characteristic patterns of retinal motion that result from self-movement) begin with relative motion. It would therefore be expected that an impairment in the perception of relative motion should impact the ability to judge heading and other 3D motion tasks. Material/Methods: We report two patients with occipital lobe lesions whom we tested on a battery of motion tasks. Patients were impaired on all tests that involved in-plane relative motion (motion discontinuity, form from differences in motion direction or speed). Despite this, they retained the ability to judge their direction of heading relative to a target. A potential confound is that observers can derive information about heading from scale changes, bypassing the need to use optic flow. We therefore ran further experiments in which we isolated optic flow and scale change. Results: Patients’ performance was in the normal range on both tests. The finding that the ability to perceive heading can be retained despite an impairment in the ability to judge relative motion questions the assumption that heading perception proceeds from initial processing of relative motion. Furthermore, on a collision detection task, SS and SR’s performance was significantly better for simulated forward movement of the observer in the 3D scene than for the static observer. This suggests that, in spite of severe deficits in relative motion in the frontoparallel (xy) plane, information from self-motion helped in identifying objects moving along an intercepting 3D relative-motion trajectory. Conclusions: This result suggests the potential use of a flow-parsing strategy to detect the trajectory of moving objects in a 3D world when the observer is moving forward. These results have implications for developing rehabilitation strategies for deficits in visually guided navigation. PMID:25183375

  9. Eye Movements in Darkness Modulate Self-Motion Perception.

    PubMed

    Clemens, Ivar Adrianus H; Selen, Luc P J; Pomante, Antonella; MacNeilage, Paul R; Medendorp, W Pieter

    2017-01-01

    During self-motion, humans typically move the eyes to maintain fixation on the stationary environment around them. These eye movements could in principle be used to estimate self-motion, but their impact on perception is unknown. We had participants judge self-motion during different eye-movement conditions in the absence of full-field optic flow. In a two-alternative forced choice task, participants indicated whether the second of two successive passive lateral whole-body translations was longer or shorter than the first. This task was used in two experiments. In the first (n = 8), eye movements were constrained differently in the two translation intervals by presenting either a world-fixed or body-fixed fixation point or no fixation point at all (allowing free gaze). Results show that perceived translations were shorter with a body-fixed than a world-fixed fixation point. A linear model indicated that eye-movement signals received a weight of ∼25% for the self-motion percept. This model was independently validated in the trials without a fixation point (free gaze). In the second experiment (n = 10), gaze was free during both translation intervals. Results show that the translation with the larger eye-movement excursion was judged more often to be larger than chance, based on an oculomotor choice probability analysis. We conclude that eye-movement signals influence self-motion perception, even in the absence of visual stimulation.

  10. Eye Movements in Darkness Modulate Self-Motion Perception

    PubMed Central

    Pomante, Antonella

    2017-01-01

    Abstract During self-motion, humans typically move the eyes to maintain fixation on the stationary environment around them. These eye movements could in principle be used to estimate self-motion, but their impact on perception is unknown. We had participants judge self-motion during different eye-movement conditions in the absence of full-field optic flow. In a two-alternative forced choice task, participants indicated whether the second of two successive passive lateral whole-body translations was longer or shorter than the first. This task was used in two experiments. In the first (n = 8), eye movements were constrained differently in the two translation intervals by presenting either a world-fixed or body-fixed fixation point or no fixation point at all (allowing free gaze). Results show that perceived translations were shorter with a body-fixed than a world-fixed fixation point. A linear model indicated that eye-movement signals received a weight of ∼25% for the self-motion percept. This model was independently validated in the trials without a fixation point (free gaze). In the second experiment (n = 10), gaze was free during both translation intervals. Results show that the translation with the larger eye-movement excursion was judged more often to be larger than chance, based on an oculomotor choice probability analysis. We conclude that eye-movement signals influence self-motion perception, even in the absence of visual stimulation. PMID:28144623

  11. Deciding what to see: the role of intention and attention in the perception of apparent motion.

    PubMed

    Kohler, Axel; Haddad, Leila; Singer, Wolf; Muckli, Lars

    2008-03-01

    Apparent motion is an illusory perception of movement that can be induced by alternating presentations of static objects. Already in Wertheimer's early investigation of the phenomenon [Wertheimer, M. (1912). Experimentelle Studien über das Sehen von Bewegung. Zeitschrift für Psychologie, 61, 161-265], he mentions that voluntary attention can influence the way in which an ambiguous apparent motion display is perceived. But until now, few studies have investigated how strong the modulation of apparent motion through attention can be under different stimulus and task conditions. We used bistable motion quartets of two different sizes, where the perception of vertical and horizontal motion is equally likely. Eleven observers participated in two experiments. In Experiment 1, participants were instructed to either (a) hold the current movement direction as long as possible, (b) passively view the stimulus, or (c) switch the movement directions as quickly as possible. With the respective instructions, observers could almost double phase durations in (a) and more than halve durations in (c) relative to the passive condition. This modulation effect was stronger for the large quartets. In Experiment 2, observers' attention was diverted from the stimulus by a detection task at fixation while they still had to report their conscious perception. This manipulation prolonged dominance durations by up to 100%. The experiments reveal a high susceptibility of ambiguous apparent motion to attentional modulation. We discuss how feature- and space-based attention mechanisms might contribute to those effects.

  12. Global processing in amblyopia: a review

    PubMed Central

    Hamm, Lisa M.; Black, Joanna; Dai, Shuan; Thompson, Benjamin

    2014-01-01

    Amblyopia is a neurodevelopmental disorder of the visual system that is associated with disrupted binocular vision during early childhood. There is evidence that the effects of amblyopia extend beyond the primary visual cortex to regions of the dorsal and ventral extra-striate visual cortex involved in visual integration. Here, we review the current literature on global processing deficits in observers with either strabismic, anisometropic, or deprivation amblyopia. A range of global processing tasks have been used to investigate the extent of the cortical deficit in amblyopia including: global motion perception, global form perception, face perception, and biological motion. These tasks appear to be differentially affected by amblyopia. In general, observers with unilateral amblyopia appear to show deficits for local spatial processing and global tasks that require the segregation of signal from noise. In bilateral cases, the global processing deficits are exaggerated, and appear to extend to specialized perceptual systems such as those involved in face processing. PMID:24987383

  13. Slushy weightings for the optimal pilot model. [considering visual tracking task

    NASA Technical Reports Server (NTRS)

    Dillow, J. D.; Picha, D. G.; Anderson, R. O.

    1975-01-01

    A pilot model is described which accounts for the effect of motion cues in a well-defined visual tracking task. The effects of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Second, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results, and in general good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics and large improvements in tracking performance with the addition of motion cues for difficult task dynamics.
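
    For orientation, the quadratic cost function referred to above typically takes the standard optimal-control pilot-model form sketched below; the specific weighting matrices (and the "slushy" modifications to them that the paper proposes) are not reproduced here, so treat this as the generic template only.

    ```latex
    J = E\left\{ \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T}
          \left( \mathbf{y}^{\mathsf{T}} Q \, \mathbf{y}
               + \mathbf{u}^{\mathsf{T}} R \, \mathbf{u}
               + \dot{\mathbf{u}}^{\mathsf{T}} G \, \dot{\mathbf{u}} \right) \mathrm{d}t \right\}
    ```

    In the usual formulation, y collects the displayed and perceived task variables and u is the pilot's control input; the modification described above then corresponds to adjusting the weightings in this cost to reflect the pilot's perception of which variables matter for the task.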

  14. Motion Perception and Manual Control Performance During Passive Tilt and Translation Following Space Flight

    NASA Technical Reports Server (NTRS)

    Clement, Gilles; Wood, Scott J.

    2010-01-01

    This joint ESA-NASA study is examining changes in motion perception following Space Shuttle flights and the operational implications of post-flight tilt-translation ambiguity for manual control performance. Vibrotactile feedback of tilt orientation is also being evaluated as a countermeasure to improve performance during a closed-loop nulling task. METHODS. Data has been collected on 5 astronaut subjects during 3 preflight sessions and during the first 8 days after Shuttle landings. Variable radius centrifugation (216 deg/s) combined with body translation (12-22 cm, peak-to-peak) is utilized to elicit roll-tilt perception (equivalent to 20 deg, peak-to-peak). A forward-backward moving sled (24-390 cm, peak-to-peak) with or without chair tilting in pitch is utilized to elicit pitch tilt perception (equivalent to 20 deg, peak-to-peak). These combinations are elicited at 0.15, 0.3, and 0.6 Hz for evaluating the effect of motion frequency on tilt-translation ambiguity. In both devices, a closed-loop nulling task is also performed during pseudorandom motion with and without vibrotactile feedback of tilt. All tests are performed in complete darkness. PRELIMINARY RESULTS. Data collection is currently ongoing. Results to date suggest there is a trend for translation motion perception to be increased at the low and medium frequencies on landing day compared to pre-flight. Manual control performance is improved with vibrotactile feedback. DISCUSSION. The results of this study indicate that post-flight recovery of motion perception and manual control performance is complete within 8 days following short-duration space missions. Vibrotactile feedback of tilt improves manual control performance both before and after flight.

  15. Quantification of reaction time and time perception during Space Shuttle operations

    NASA Technical Reports Server (NTRS)

    Ratino, D. A.; Repperger, D. W.; Goodyear, C.; Potor, G.; Rodriguez, L. E.

    1988-01-01

    A microprocessor-based test battery containing simple reaction time, choice reaction time, and time perception tasks was flown aboard a 1985 Space Shuttle flight. Data were obtained from four crew members. Individual subject means indicate a correlation between change in reaction time during the flight and the presence of space motion sickness symptoms. The time perception task results indicate that the shortest duration task time (2 s) is progressively overestimated as the mission proceeds and is statistically significant when comparing preflight and postflight baselines. The tasks that required longer periods of time to estimate (8, 12, and 16 s) are less affected.

  16. ZAG-Otolith: Modification of Otolith-Ocular Reflexes, Motion Perception and Manual Control during Variable Radius Centrifugation Following Space Flight

    NASA Technical Reports Server (NTRS)

    Wood, S. J.; Clarke, A. H.; Rupert, A. H.; Harm, D. L.; Clement, G. R.

    2009-01-01

    Two joint ESA-NASA studies are examining changes in otolith-ocular reflexes and motion perception following short-duration space flights, and the operational implications of post-flight tilt-translation ambiguity for manual control performance. Vibrotactile feedback of tilt orientation is also being evaluated as a countermeasure to improve performance during a closed-loop nulling task. METHODS. Data are currently being collected from astronaut subjects during 3 preflight sessions and during the first 8 days after Shuttle landings. Variable radius centrifugation is utilized to elicit otolith reflexes in the lateral plane without concordant roll canal cues. Unilateral centrifugation (400 deg/s, 3.5 cm radius) stimulates one otolith positioned off-axis while the opposite side is centered over the axis of rotation. During this paradigm, roll-tilt perception is measured using a subjective visual vertical task and ocular counter-rolling is obtained using binocular video-oculography. During a second paradigm (216 deg/s, <20 cm radius), the effects of stimulus frequency (0.15-0.6 Hz) on eye movements and motion perception are examined. A closed-loop nulling task is also performed with and without vibrotactile display feedback of chair radial position. PRELIMINARY RESULTS. Data collection is currently ongoing. Results to date suggest there is a trend for perceived tilt and translation amplitudes to be increased at the low and medium frequencies on landing day compared to pre-flight. Manual control performance is improved with vibrotactile feedback. DISCUSSION. One result of this study will be to characterize the variability (gain, asymmetry) in both otolith-ocular responses and motion perception during variable radius centrifugation, and measure the time course of post-flight recovery. This study will also address how adaptive changes in otolith-mediated reflexes correspond to one's ability to perform closed-loop nulling tasks following G-transitions, and whether manual control performance can be improved with vibrotactile feedback of orientation.
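    For a sense of the otolith stimulus delivered by the unilateral centrifugation paradigm, the lateral gravito-inertial acceleration at the off-axis ear can be estimated as below. The rotation rate and radius follow the abstract; the calculation itself is only an illustrative approximation.

    ```python
    import math

    def interaural_gia_g(omega_deg_s: float = 400.0, radius_m: float = 0.035,
                         g: float = 9.81) -> float:
        """Lateral gravito-inertial acceleration (in g units) applied to the off-axis
        otolith during unilateral centrifugation; numbers follow the abstract, but the
        calculation is only an illustration of the stimulus magnitude."""
        omega = math.radians(omega_deg_s)
        return omega ** 2 * radius_m / g

    # ~0.17 g lateral stimulus, i.e. roughly a 10 deg equivalent roll tilt
    print(round(interaural_gia_g(), 2))
    ```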

  17. Modification of Otolith-Ocular Reflexes, Motion Perception and Manual Control During Variable Radius Centrifugation Following Space Flight

    NASA Technical Reports Server (NTRS)

    Wood, Scott J.; Clarke, A. H.; Rupert, A. H.; Harm, D. L.; Clement, G. R.

    2009-01-01

    Two joint ESA-NASA studies are examining changes in otolith-ocular reflexes and motion perception following short-duration space flights, and the operational implications of post-flight tilt-translation ambiguity for manual control performance. Vibrotactile feedback of tilt orientation is also being evaluated as a countermeasure to improve performance during a closed-loop nulling task. Data are currently being collected from astronaut subjects during 3 preflight sessions and during the first 8 days after Shuttle landings. Variable radius centrifugation is utilized to elicit otolith reflexes in the lateral plane without concordant roll canal cues. Unilateral centrifugation (400 deg/s, 3.5 cm radius) stimulates one otolith positioned off-axis while the opposite side is centered over the axis of rotation. During this paradigm, roll-tilt perception is measured using a subjective visual vertical task and ocular counter-rolling is obtained using binocular video-oculography. During a second paradigm (216 deg/s, less than 20 cm radius), the effects of stimulus frequency (0.15-0.6 Hz) on eye movements and motion perception are examined. A closed-loop nulling task is also performed with and without vibrotactile display feedback of chair radial position. Data collection is currently ongoing. Results to date suggest there is a trend for perceived tilt and translation amplitudes to be increased at the low and medium frequencies on landing day compared to pre-flight. Manual control performance is improved with vibrotactile feedback. One result of this study will be to characterize the variability (gain, asymmetry) in both otolith-ocular responses and motion perception during variable radius centrifugation, and measure the time course of post-flight recovery. This study will also address how adaptive changes in otolith-mediated reflexes correspond to one's ability to perform closed-loop nulling tasks following G-transitions, and whether manual control performance can be improved with vibrotactile feedback of orientation.

  18. The 50s cliff: a decline in perceptuo-motor learning, not a deficit in visual motion perception.

    PubMed

    Ren, Jie; Huang, Shaochen; Zhang, Jiancheng; Zhu, Qin; Wilson, Andrew D; Snapp-Childs, Winona; Bingham, Geoffrey P

    2015-01-01

    Previously, we measured perceptuo-motor learning rates across the lifespan and found a sudden drop in learning rates between ages 50 and 60, called the "50s cliff." The task was a unimanual visual rhythmic coordination task in which participants used a joystick to oscillate one dot in a display in coordination with another dot oscillated by a computer. Participants learned to produce a coordination with a 90° relative phase relation between the dots. Learning rates for participants over 60 were half those of younger participants. Given existing evidence for visual motion perception deficits in people over 60 and the role of visual motion perception in the coordination task, it remained unclear whether the 50s cliff reflected the onset of this deficit or a genuine decline in perceptuo-motor learning. The current work addressed this question. Two groups of 12 participants in each of four age ranges (20s, 50s, 60s, 70s) learned to perform a bimanual coordination of 90° relative phase. One group trained with only haptic information and the other group with both haptic and visual information about relative phase. Both groups were tested in both information conditions at baseline and post-test. If the 50s cliff were caused by an age-dependent deficit in visual motion perception, then older participants in the visual group should have exhibited less learning than those in the haptic group, which should not exhibit the 50s cliff, and older participants in both groups should have performed less well when tested with visual information. Neither of these expectations was confirmed by the results, so we concluded that the 50s cliff reflects a genuine decline in perceptuo-motor learning with aging, not the onset of a deficit in visual motion perception.
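    The dependent measure in such coordination tasks is typically the relative phase between the two oscillating elements. The sketch below shows one common way to estimate it from position and velocity; it is offered as background and is not necessarily the authors' analysis pipeline.

    ```python
    import numpy as np

    def mean_relative_phase_deg(x1: np.ndarray, x2: np.ndarray, dt: float) -> float:
        """Continuous relative phase between two oscillating signals, estimated from
        normalised position-velocity phase angles. A common coordination metric;
        not necessarily the authors' exact analysis."""
        def phase(x):
            v = np.gradient(x, dt)
            return np.unwrap(np.arctan2(v / np.abs(v).max(), x / np.abs(x).max()))
        return float(np.degrees(np.mean(phase(x1) - phase(x2))))

    # two 1 Hz oscillations a quarter-cycle apart: relative phase magnitude ~90 deg
    t = np.arange(0.0, 10.0, 0.01)
    rel = mean_relative_phase_deg(np.sin(2 * np.pi * t), np.sin(2 * np.pi * t - np.pi / 2), 0.01)
    print(round(abs(rel)))   # ~90
    ```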

  19. Self-motion Perception Training: Thresholds Improve in the Light but not in the Dark

    PubMed Central

    Hartmann, Matthias; Furrer, Sarah; Herzog, Michael H.; Merfeld, Daniel M.; Mast, Fred W.

    2014-01-01

    We investigated perceptual learning in self-motion perception. Blindfolded participants were displaced leftward or rightward by means of a motion platform, and asked to indicate the direction of motion. A total of eleven participants underwent 3360 practice trials, distributed over twelve days (Experiment 1) or six days (Experiment 2). We found no improvement in motion discrimination in either experiment. These results are surprising since perceptual learning has been demonstrated for visual, auditory, and somatosensory discrimination. Improvements in the same task were found when visual input was provided (Experiment 3). The multisensory nature of vestibular information is discussed as a possible explanation of the absence of perceptual learning in darkness. PMID:23392475

  20. Posture-based processing in visual short-term memory for actions.

    PubMed

    Vicary, Staci A; Stevens, Catherine J

    2014-01-01

    Visual perception of human action involves both form and motion processing, which may rely on partially dissociable neural networks. If form and motion are dissociable during visual perception, then they may also be dissociable during their retention in visual short-term memory (VSTM). To elicit form-plus-motion and form-only processing of dance-like actions, individual action frames can be presented in the correct or incorrect order. The former appears coherent and should elicit action perception, engaging both form and motion pathways, whereas the latter appears incoherent and should elicit posture perception, engaging form pathways alone. It was hypothesized that, if form and motion are dissociable in VSTM, then recognition of static body posture should be better after viewing incoherent than after viewing coherent actions. However, as VSTM is capacity limited, posture-based encoding of actions may be ineffective with increased number of items or frames. Using a behavioural change detection task, recognition of a single test posture was significantly more likely after studying incoherent than after studying coherent stimuli. However, this effect only occurred for spans of two (but not three) items and for stimuli with five (but not nine) frames. As in perception, posture and motion are dissociable in VSTM.
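    For readers unfamiliar with single-probe change detection, a commonly used capacity summary is Cowan's K, sketched below. It is included only as context; the study's own dependent measures may have been defined differently.

    ```python
    def cowans_k(hit_rate: float, false_alarm_rate: float, set_size: int) -> float:
        """Cowan's K capacity estimate for single-probe change detection:
        K = set size * (hits - false alarms). A standard VSTM summary measure,
        offered here only as background."""
        return set_size * (hit_rate - false_alarm_rate)

    print(round(cowans_k(0.80, 0.20, 2), 2))   # ~1.2 items effectively retained from a 2-item span
    ```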

  1. Examining the Effect of Age on Visual-Vestibular Self-Motion Perception Using a Driving Paradigm.

    PubMed

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L

    2017-05-01

    Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual-vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual-vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.
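    The performance measures named above can be summarized with simple descriptive statistics, as in the sketch below. The definitions are illustrative assumptions; the study's exact operationalizations are not given in the abstract.

    ```python
    import numpy as np

    def driving_metrics(speed_m_s: np.ndarray, lateral_pos_m: np.ndarray) -> dict:
        """Summary measures of the kind reported above: mean speed, speed variability,
        and variability of lateral position. Illustrative definitions only."""
        return {
            "mean_speed": float(np.mean(speed_m_s)),
            "speed_variability": float(np.std(speed_m_s, ddof=1)),       # SD of speed
            "lateral_position_sd": float(np.std(lateral_pos_m, ddof=1)),
        }
    ```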

  2. Motion perception without Nystagmus--a novel manifestation of cerebellar stroke.

    PubMed

    Shaikh, Aasef G

    2014-01-01

    Motion perception and the vestibulo-ocular reflex (VOR) serve distinct functions. The VOR keeps gaze steady on the target of interest, whereas vestibular perception serves a number of tasks, including awareness of self-motion and orientation in space. The VOR and motion perception may abide by the same neurophysiological principles, but distinct anatomical correlates have been proposed for each. In patients with cerebellar stroke in the distribution of the medial division of the posterior inferior cerebellar artery, we asked whether a specific location of the focal lesion in the vestibulocerebellum could cause impaired perception of motion but normal eye movements. Thirteen patients were studied; 5 consistently perceived spinning of the surrounding environment (vertigo), but their eye movements were normal. This group was called the "disease model." The remaining 8 patients were also symptomatic for vertigo, but they had spontaneous nystagmus. The latter group was called the "disease control." Magnetic resonance imaging in both groups consistently revealed a focal cerebellar infarct affecting the posterior cerebellar vermis (lobule IX). In the "disease model" group, only part of lobule IX was affected. In the "disease control" group, however, lobule IX was completely involved. This study identified a novel presentation of cerebellar stroke in which only motion perception was affected and objective neurologic signs were absent. Copyright © 2014 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  3. Sex differences in the development of brain mechanisms for processing biological motion.

    PubMed

    Anderson, L C; Bolling, D Z; Schelinski, S; Coffman, M C; Pelphrey, K A; Kaiser, M D

    2013-12-01

    Disorders related to social functioning including autism and schizophrenia differ drastically in incidence and severity between males and females. Little is known about the neural systems underlying these sex-linked differences in risk and resiliency. Using functional magnetic resonance imaging and a task involving the visual perception of point-light displays of coherent and scrambled biological motion, we discovered sex differences in the development of neural systems for basic social perception. In adults, we identified enhanced activity during coherent biological motion perception in females relative to males in a network of brain regions previously implicated in social perception including amygdala, medial temporal gyrus, and temporal pole. These sex differences were less pronounced in our sample of school-age youth. We hypothesize that the robust neural circuitry supporting social perception in females, which diverges from males beginning in childhood, may underlie sex differences in disorders related to social processing. © 2013 Elsevier Inc. All rights reserved.

  4. Sex Differences in the Development of Brain Mechanisms for Processing Biological Motion

    PubMed Central

    Anderson, L.C.; Bolling, D.Z.; Schelinski, S.; Coffman, M.C.; Pelphrey, K.A.; Kaiser, M.D.

    2013-01-01

    Disorders related to social functioning including autism and schizophrenia differ drastically in incidence and severity between males and females. Little is known about the neural systems underlying these sex-linked differences in risk and resiliency. Using functional magnetic resonance imaging and a task involving the visual perception of point-light displays of coherent and scrambled biological motion, we discovered sex differences in the development of neural systems for basic social perception. In adults, we identified enhanced activity during coherent biological motion perception in females relative to males in a network of brain regions previously implicated in social perception including amygdala, medial temporal gyrus, and temporal pole. These sex differences were less pronounced in our sample of school-age youth. We hypothesize that the robust neural circuitry supporting social perception in females, which diverges from males beginning in childhood, may underlie sex differences in disorders related to social processing. PMID:23876243

  5. Training in Contrast Detection Improves Motion Perception of Sinewave Gratings in Amblyopia

    PubMed Central

    Hou, Fang; Huang, Chang-bing; Tao, Liming; Feng, Lixia; Zhou, Yifeng; Lu, Zhong-Lin

    2011-01-01

    Purpose. One critical concern about using perceptual learning to treat amblyopia is whether training with one particular stimulus and task generalizes to other stimuli and tasks. In the spatial domain, it has been found that the bandwidth of contrast sensitivity improvement is much broader in amblyopes than in normals. Because previous studies suggested the local motion deficits in amblyopia are explained by the spatial vision deficits, the hypothesis for this study was that training in the spatial domain could benefit motion perception of sinewave gratings. Methods. Nine adult amblyopes (mean age, 22.1 ± 5.6 years) were trained in a contrast detection task in the amblyopic eye for 10 days. Visual acuity, spatial contrast sensitivity functions, and temporal modulation transfer functions (MTF) for sinewave motion detection and discrimination were measured for each eye before and after training. Eight adult amblyopes (mean age, 22.6 ± 6.7 years) served as control subjects. Results. In the amblyopic eye, training improved (1) contrast sensitivity by 6.6 dB (or 113.8%) across spatial frequencies, with a bandwidth of 4.4 octaves; (2) sensitivity of motion detection and discrimination by 3.2 dB (or 44.5%) and 3.7 dB (or 53.1%) across temporal frequencies, with bandwidths of 3.9 and 3.1 octaves, respectively; (3) visual acuity by 3.2 dB (or 44.5%). The fellow eye also showed a small amount of improvement in contrast sensitivities and no significant change in motion perception. Control subjects who received no training demonstrated no obvious improvement in any measure. Conclusions. The results demonstrate substantial plasticity in the amblyopic visual system, and provide additional empirical support for perceptual learning as a potential treatment for amblyopia. PMID:21693615
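    The decibel and percentage figures quoted above are two expressions of the same sensitivity ratio, with gains expressed as 20*log10 of the ratio. The short conversion below reproduces the paired values in the abstract.

    ```python
    def db_to_percent_improvement(db: float) -> float:
        """Convert a sensitivity gain in decibels (20*log10 ratio) to a percentage
        improvement, matching the paired values quoted in the abstract."""
        return (10 ** (db / 20) - 1) * 100

    for db in (6.6, 3.2, 3.7):
        print(f"{db} dB -> {db_to_percent_improvement(db):.1f}%")   # 113.8%, 44.5%, 53.1%
    ```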

  6. Color Improves Speed of Processing But Not Perception in a Motion Illusion

    PubMed Central

    Perry, Carolyn J.; Fallah, Mazyar

    2012-01-01

    When two superimposed surfaces of dots move in different directions, the perceived directions are shifted away from each other. This perceptual illusion has been termed direction repulsion and is thought to be due to mutual inhibition between the representations of the two directions. It has further been shown that a speed difference between the two surfaces attenuates direction repulsion. As speed and direction are both necessary components of representing motion, the reduction in direction repulsion can be attributed to the additional motion information strengthening the representations of the two directions and thus reducing the mutual inhibition. We tested whether bottom-up attention and top-down task demands, in the form of color differences between the two surfaces, would also enhance motion processing, reducing direction repulsion. We found that the addition of color differences did not improve direction discrimination or reduce direction repulsion. However, we did find that adding a color difference improved performance on the task. We hypothesized that the performance differences were due to the limited presentation time of the stimuli. We tested this in a follow-up experiment where we varied the time of presentation to determine the duration needed to successfully perform the task with and without the color difference. As we expected, color segmentation reduced the amount of time needed to process and encode both directions of motion. Thus we find a dissociation between the effects of attention on the speed of processing and conscious perception of direction. We propose four potential mechanisms wherein color speeds figure-ground segmentation of an object, attentional switching between objects, direction discrimination and/or the accumulation of motion information for decision-making, without affecting conscious perception of the direction. Potential neural bases are also explored. PMID:22479255

  7. Color improves speed of processing but not perception in a motion illusion.

    PubMed

    Perry, Carolyn J; Fallah, Mazyar

    2012-01-01

    When two superimposed surfaces of dots move in different directions, the perceived directions are shifted away from each other. This perceptual illusion has been termed direction repulsion and is thought to be due to mutual inhibition between the representations of the two directions. It has further been shown that a speed difference between the two surfaces attenuates direction repulsion. As speed and direction are both necessary components of representing motion, the reduction in direction repulsion can be attributed to the additional motion information strengthening the representations of the two directions and thus reducing the mutual inhibition. We tested whether bottom-up attention and top-down task demands, in the form of color differences between the two surfaces, would also enhance motion processing, reducing direction repulsion. We found that the addition of color differences did not improve direction discrimination or reduce direction repulsion. However, we did find that adding a color difference improved performance on the task. We hypothesized that the performance differences were due to the limited presentation time of the stimuli. We tested this in a follow-up experiment where we varied the time of presentation to determine the duration needed to successfully perform the task with and without the color difference. As we expected, color segmentation reduced the amount of time needed to process and encode both directions of motion. Thus we find a dissociation between the effects of attention on the speed of processing and conscious perception of direction. We propose four potential mechanisms wherein color speeds figure-ground segmentation of an object, attentional switching between objects, direction discrimination and/or the accumulation of motion information for decision-making, without affecting conscious perception of the direction. Potential neural bases are also explored.

  8. Visual processing and social cognition in schizophrenia: relationships among eye movements, biological motion perception, and empathy.

    PubMed

    Matsumoto, Yukiko; Takahashi, Hideyuki; Murai, Toshiya; Takahashi, Hidehiko

    2015-01-01

    Schizophrenia patients have impairments at several levels of cognition, including visual attention (eye movements), perception, and social cognition. However, it remains unclear how lower-level cognitive deficits influence higher-level cognition. To elucidate the hierarchical path linking deficient cognitions, we focused on biological motion perception, which is involved in both the early stage of visual perception (attention) and higher social cognition, and is impaired in schizophrenia. Seventeen schizophrenia patients and 18 healthy controls participated in the study. Using point-light walker stimuli, we examined eye movements during biological motion perception in schizophrenia. We assessed relationships among eye movements, biological motion perception, and empathy. In the biological motion detection task, schizophrenia patients showed lower accuracy and fixated longer than healthy controls. In contrast to controls, patients who exhibited longer fixation durations and fewer fixations demonstrated higher accuracy. Additionally, in the patient group, the correlations between accuracy and the affective empathy index, and between the eye-movement index and the affective empathy index, were significant. The altered gaze patterns in patients indicate that top-down attention compensates for impaired bottom-up attention. Furthermore, aberrant eye movements might lead to deficits in biological motion perception and ultimately link to social cognitive impairments. The current findings merit further investigation for understanding the mechanism of social cognitive training and its development. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  9. Age Differences in Visual-Auditory Self-Motion Perception during a Simulated Driving Task

    PubMed Central

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L.

    2016-01-01

    Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion. PMID:27199829

  10. Age-related changes in perception of movement in driving scenes.

    PubMed

    Lacherez, Philippe; Turner, Laura; Lester, Robert; Burns, Zoe; Wood, Joanne M

    2014-07-01

    Age-related changes in motion sensitivity have been found to relate to reductions in various indices of driving performance and safety. The aim of this study was to investigate the basis of this relationship in terms of determining which aspects of motion perception are most relevant to driving. Participants included 61 regular drivers (age range 22-87 years). Visual performance was measured binocularly. Measures included visual acuity, contrast sensitivity and motion sensitivity assessed using four different approaches: (1) threshold minimum drift rate for a drifting Gabor patch, (2) Dmin from a random dot display, (3) threshold coherence from a random dot display, and (4) threshold drift rate for a second-order (contrast modulated) sinusoidal grating. Participants then completed the Hazard Perception Test (HPT) in which they were required to identify moving hazards in videos of real driving scenes, and also a Direction of Heading task (DOH) in which they identified deviations from normal lane keeping in brief videos of driving filmed from the interior of a vehicle. In bivariate correlation analyses, all motion sensitivity measures significantly declined with age. Motion coherence thresholds and the minimum drift rate threshold for the first-order stimulus (Gabor patch) both significantly predicted HPT performance even after controlling for age, visual acuity and contrast sensitivity. Bootstrap mediation analysis showed that individual differences in DOH accuracy partly explained these relationships, such that individuals with poorer motion sensitivity on the coherence and Gabor tests showed decreased ability to perceive deviations in motion in the driving videos, which related in turn to their ability to detect the moving hazards. The ability to detect subtle movements in the driving environment (as determined by the DOH task) may be an important contributor to effective hazard perception, and is associated with age and with an individual's performance on tests of motion sensitivity. The locus of the processing deficits appears to lie in first-order, rather than second-order motion pathways. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.
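    The mediation analysis referred to above is commonly implemented as a percentile bootstrap of the indirect effect. The sketch below is a generic stand-in under that assumption, with hypothetical variable roles (X = motion sensitivity, M = DOH accuracy, Y = hazard perception); it is not the authors' code.

    ```python
    import numpy as np

    def bootstrap_indirect_effect(x, m, y, n_boot=5000, seed=0):
        """Percentile-bootstrap confidence interval for the indirect effect a*b in a
        simple X -> M -> Y mediation. Generic illustration only."""
        rng = np.random.default_rng(seed)
        x, m, y = map(np.asarray, (x, m, y))
        n = len(x)
        effects = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)                      # resample cases with replacement
            xb, mb, yb = x[idx], m[idx], y[idx]
            a = np.polyfit(xb, mb, 1)[0]                     # X -> M slope
            b = np.linalg.lstsq(np.c_[mb, xb, np.ones(n)], yb, rcond=None)[0][0]  # M -> Y controlling for X
            effects.append(a * b)
        return np.percentile(effects, [2.5, 97.5])           # 95% CI for the indirect effect
    ```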

  11. Time perception of visual motion is tuned by the motor representation of human actions

    PubMed Central

    Gavazzi, Gioele; Bisio, Ambra; Pozzo, Thierry

    2013-01-01

    Several studies have shown that the observation of a rapidly moving stimulus dilates our perception of time. However, this effect appears to be at odds with the fact that our interactions both with the environment and with each other are temporally accurate. This work exploits this paradox to investigate whether the temporal accuracy of visual motion perception relies on motor representations of actions. To this aim, the stimulus was a dot moving at different velocities, with kinematics that either did or did not belong to the human motor repertoire. Participants had to replicate its duration in two tasks differing in the underlying motor plan. Results show that, independently of the task's motor plan, temporal accuracy and precision depend on the correspondence between the stimulus's kinematics and the observer's motor competencies. Our data suggest that the timing mechanism for visual motion exploits a temporal visuomotor representation tuned by the motor knowledge of human actions. PMID:23378903

  12. Spatial and temporal processing in healthy aging: implications for perceptions of driving skills.

    PubMed

    Conlon, Elizabeth; Herkes, Kathleen

    2008-07-01

    Sensitivity to the attributes of a stimulus (form or motion) and accuracy when detecting rapidly presented stimulus information were measured in older (N = 36) and younger (N = 37) groups. Before and after practice, the older group was significantly less sensitive to global motion (but not to form) and less accurate on a rapid sequencing task when detecting the individual elements presented in long but not short sequences. These effect sizes yielded statistical power ranging between 0.5 and 1.00 across analyses. The reduced sensitivity of older individuals to temporal but not spatial stimuli adds support to previous findings of a selective age-related deficit in temporal processing. Older women were significantly less sensitive than older men, younger men, and younger women on the global motion task. Gender effects were evident when, in response to global motion stimuli, complex extraction and integration processes needed to be undertaken rapidly. Significant moderate correlations were found between age, global motion sensitivity, and reports of perceptions of other vehicles and road signs when driving. These associations suggest that reduced motion sensitivity may produce functional difficulties for older adults when judging speeds or estimating gaps in traffic while driving.

  13. Effects of simulator motion and visual characteristics on rotorcraft handling qualities evaluations

    NASA Technical Reports Server (NTRS)

    Mitchell, David G.; Hart, Daniel C.

    1993-01-01

    The pilot's perceptions of aircraft handling qualities are influenced by a combination of the aircraft dynamics, the task, and the environment under which the evaluation is performed. When the evaluation is performed in a ground-based simulator, the characteristics of the simulation facility also come into play. Two studies were conducted on NASA Ames Research Center's Vertical Motion Simulator to determine the effects of simulator characteristics on perceived handling qualities. Most evaluations were conducted with a baseline set of rotorcraft dynamics, using a simple transfer-function model of an uncoupled helicopter, under different conditions of visual time delays and motion command washout filters. Differences in pilot opinion were found as the visual and motion parameters were changed, reflecting a change in the pilots' perceptions of handling qualities rather than changes in the aircraft model itself. The results indicate a need for tailoring the motion washout dynamics to suit the task. Visual-delay data are inconclusive but suggest that it may be better to allow some time delay in the visual path to minimize the mismatch between visual and motion cues, rather than eliminate the visual delay entirely through lead compensation.
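    The washout filters described above are second-order high-pass filters whose gain and natural frequency were the experimental variables. A minimal sketch of such a filter is given below; the damping ratio is an assumed value, since it is not reported in the abstract.

    ```python
    from scipy import signal

    def washout_filter(gain: float, wn_rad_s: float, zeta: float = 0.7):
        """Second-order high-pass motion washout filter of the general form
        H(s) = K s^2 / (s^2 + 2*zeta*wn*s + wn^2). Gain and natural frequency were
        the experimental variables; the damping ratio here is an assumed value."""
        num = [gain, 0.0, 0.0]
        den = [1.0, 2.0 * zeta * wn_rad_s, wn_rad_s ** 2]
        return signal.TransferFunction(num, den)

    # e.g. attenuate sustained commands below ~0.5 rad/s while passing high-frequency onsets
    print(washout_filter(0.6, 0.5))
    ```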

  14. An Adaptive Neural Mechanism for Acoustic Motion Perception with Varying Sparsity

    PubMed Central

    Shaikh, Danish; Manoonpong, Poramate

    2017-01-01

    Biological motion-sensitive neural circuits are quite adept at perceiving the relative motion of a relevant stimulus. Motion perception is a fundamental ability in neural sensory processing and crucial in target tracking tasks. Tracking a stimulus entails the ability to perceive its motion, i.e., extracting information about its direction and velocity. Here we focus on auditory motion perception of sound stimuli, which is poorly understood as compared to its visual counterpart. In earlier work we developed a bio-inspired neural learning mechanism for acoustic motion perception. The mechanism extracts directional information via a model of the peripheral auditory system of lizards. The mechanism uses only this directional information obtained via specific motor behaviour to learn the angular velocity of unoccluded sound stimuli in motion. In nature, however, the stimulus being tracked may be occluded by artefacts in the environment, such as an escaping prey momentarily disappearing behind a cover of trees. This article extends the earlier work by presenting a comparative investigation of auditory motion perception for unoccluded and occluded tonal sound stimuli with a frequency of 2.2 kHz in both simulation and practice. Three instances of each stimulus are employed, differing in their movement velocities: 0.5°/time step, 1.0°/time step, and 1.5°/time step. To validate the approach in practice, we implement the proposed neural mechanism on a wheeled mobile robot and evaluate its performance in auditory tracking. PMID:28337137

  15. How Do Changes in Speed Affect the Perception of Duration?

    ERIC Educational Resources Information Center

    Matthews, William J.

    2011-01-01

    Six experiments investigated how changes in stimulus speed influence subjective duration. Participants saw rotating or translating shapes in three conditions: constant speed, accelerating motion, and decelerating motion. The distance moved and average speed were the same in all three conditions. In temporal judgment tasks, the constant-speed…

  16. Spatial Attention and Audiovisual Interactions in Apparent Motion

    ERIC Educational Resources Information Center

    Sanabria, Daniel; Soto-Faraco, Salvador; Spence, Charles

    2007-01-01

    In this study, the authors combined the cross-modal dynamic capture task (involving the horizontal apparent movement of visual and auditory stimuli) with spatial cuing in the vertical dimension to investigate the role of spatial attention in cross-modal interactions during motion perception. Spatial attention was manipulated endogenously, either…

  17. Effects of spatial cues on color-change detection in humans

    PubMed Central

    Herman, James P.; Bogadhi, Amarender R.; Krauzlis, Richard J.

    2015-01-01

    Studies of covert spatial attention have largely used motion, orientation, and contrast stimuli as these features are fundamental components of vision. The feature dimension of color is also fundamental to visual perception, particularly for catarrhine primates, and yet very little is known about the effects of spatial attention on color perception. Here we present results using novel dynamic color stimuli in both discrimination and color-change detection tasks. We find that our stimuli yield comparable discrimination thresholds to those obtained with static stimuli. Further, we find that an informative spatial cue improves performance and speeds response time in a color-change detection task compared with an uncued condition, similar to what has been demonstrated for motion, orientation, and contrast stimuli. Our results demonstrate the use of dynamic color stimuli for an established psychophysical task and show that color stimuli are well suited to the study of spatial attention. PMID:26047359

  18. Interaction of Perceptual Grouping and Crossmodal Temporal Capture in Tactile Apparent-Motion

    PubMed Central

    Chen, Lihan; Shi, Zhuanghua; Müller, Hermann J.

    2011-01-01

    Previous studies have shown that in tasks requiring participants to report the direction of apparent motion, task-irrelevant mono-beeps can “capture” visual motion perception when the beeps occur temporally close to the visual stimuli. However, the contributions of the relative timing of multimodal events and the event structure, modulating uni- and/or crossmodal perceptual grouping, remain unclear. To examine this question and extend the investigation to the tactile modality, the current experiments presented tactile two-tap apparent-motion streams, with an SOA of 400 ms between successive, left-/right-hand middle-finger taps, accompanied by task-irrelevant, non-spatial auditory stimuli. The streams were shown for 90 seconds, and participants' task was to continuously report the perceived (left- or rightward) direction of tactile motion. In Experiment 1, each tactile stimulus was paired with an auditory beep, though odd-numbered taps were paired with an asynchronous beep, with audiotactile SOAs ranging from −75 ms to 75 ms. Perceived direction of tactile motion varied systematically with audiotactile SOA, indicative of a temporal-capture effect. In Experiment 2, two audiotactile SOAs—one short (75 ms), one long (325 ms)—were compared. The long-SOA condition preserved the crossmodal event structure (so the temporal-capture dynamics should have been similar to that in Experiment 1), but both beeps now occurred temporally close to the taps on one side (even-numbered taps). The two SOAs were found to produce opposite modulations of apparent motion, indicative of an influence of crossmodal grouping. In Experiment 3, only odd-numbered, but not even-numbered, taps were paired with auditory beeps. This abolished the temporal-capture effect and, instead, a dominant percept of apparent motion from the audiotactile side to the tactile-only side was observed independently of the SOA variation. These findings suggest that asymmetric crossmodal grouping leads to an attentional modulation of apparent motion, which inhibits crossmodal temporal-capture effects. PMID:21383834

  19. Fast transfer of crossmodal time interval training.

    PubMed

    Chen, Lihan; Zhou, Xiaolin

    2014-06-01

    Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.
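    As background on the Ternus display used as the implicit measure, the percept typically switches from element motion to group motion as the inter-stimulus interval grows. The toy classifier below uses a 50 ms boundary, which is a typical value from the literature rather than a figure taken from this study.

    ```python
    def ternus_percept(isi_ms: float, threshold_ms: float = 50.0) -> str:
        """Toy classifier for the Ternus display: short inter-stimulus intervals tend to
        yield 'element motion', longer ones 'group motion'. The 50 ms boundary is a
        typical literature value, not a parameter from this study."""
        return "element motion" if isi_ms < threshold_ms else "group motion"

    print(ternus_percept(30), "|", ternus_percept(120))
    ```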

  20. How long did it last? You would better ask a human

    PubMed Central

    Lacquaniti, Francesco; Carrozzo, Mauro; d’Avella, Andrea; La Scaleia, Barbara; Moscatelli, Alessandro; Zago, Myrka

    2014-01-01

    In the future, human-like robots will live among people to provide company and help carrying out tasks in cooperation with humans. These interactions require that robots understand not only human actions, but also the way in which we perceive the world. Human perception heavily relies on the time dimension, especially when it comes to processing visual motion. Critically, human time perception for dynamic events is often inaccurate. Robots interacting with humans may want to see the world and tell time the way humans do: if so, they must incorporate human-like fallacy. Observers asked to judge the duration of brief scenes are prone to errors: perceived duration often does not match the physical duration of the event. Several kinds of temporal distortions have been described in the specialized literature. Here we review the topic with a special emphasis on our work dealing with time perception of animate actors versus inanimate actors. This work shows the existence of specialized time bases for different categories of targets. The time base used by the human brain to process visual motion appears to be calibrated against the specific predictions regarding the motion of human figures in case of animate motion, while it can be calibrated against the predictions of motion of passive objects in case of inanimate motion. Human perception of time appears to be strictly linked with the mechanisms used to control movements. Thus, neural time can be entrained by external cues in a similar manner for both perceptual judgments of elapsed time and in motor control tasks. One possible strategy could be to implement in humanoids a unique architecture for dealing with time, which would apply the same specialized mechanisms to both perception and action, similarly to humans. This shared implementation might render the humanoids more acceptable to humans, thus facilitating reciprocal interactions. PMID:24478694

  1. How long did it last? You would better ask a human.

    PubMed

    Lacquaniti, Francesco; Carrozzo, Mauro; d'Avella, Andrea; La Scaleia, Barbara; Moscatelli, Alessandro; Zago, Myrka

    2014-01-01

    In the future, human-like robots will live among people to provide company and help carrying out tasks in cooperation with humans. These interactions require that robots understand not only human actions, but also the way in which we perceive the world. Human perception heavily relies on the time dimension, especially when it comes to processing visual motion. Critically, human time perception for dynamic events is often inaccurate. Robots interacting with humans may want to see the world and tell time the way humans do: if so, they must incorporate human-like fallacy. Observers asked to judge the duration of brief scenes are prone to errors: perceived duration often does not match the physical duration of the event. Several kinds of temporal distortions have been described in the specialized literature. Here we review the topic with a special emphasis on our work dealing with time perception of animate actors versus inanimate actors. This work shows the existence of specialized time bases for different categories of targets. The time base used by the human brain to process visual motion appears to be calibrated against the specific predictions regarding the motion of human figures in case of animate motion, while it can be calibrated against the predictions of motion of passive objects in case of inanimate motion. Human perception of time appears to be strictly linked with the mechanisms used to control movements. Thus, neural time can be entrained by external cues in a similar manner for both perceptual judgments of elapsed time and in motor control tasks. One possible strategy could be to implement in humanoids a unique architecture for dealing with time, which would apply the same specialized mechanisms to both perception and action, similarly to humans. This shared implementation might render the humanoids more acceptable to humans, thus facilitating reciprocal interactions.

  2. Psilocybin impairs high-level but not low-level motion perception.

    PubMed

    Carter, Olivia L; Pettigrew, John D; Burr, David C; Alais, David; Hasler, Felix; Vollenweider, Franz X

    2004-08-26

    The hallucinogenic serotonin-1A/2A agonist psilocybin is known for its ability to induce illusions of motion in otherwise stationary objects or textured surfaces. This study investigated the effect of psilocybin on local and global motion processing in nine human volunteers. Using a forced-choice direction-of-motion discrimination task, we show that psilocybin selectively impairs coherence sensitivity for random dot patterns, likely mediated by high-level global motion detectors, but not contrast sensitivity for drifting gratings, believed to be mediated by low-level detectors. These results are in line with those observed within schizophrenic populations and are discussed with respect to the proposition that psilocybin may provide a model to investigate clinical psychosis and the pharmacological underpinnings of visual perception in normal populations.
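    The random-dot coherence stimuli referred to above are generated by assigning a fixed fraction of dots to a common signal direction on each frame. The sketch below is a minimal frame update under that scheme; dot lifetimes, aperture wrapping, and the study's actual display parameters are omitted.

    ```python
    import numpy as np

    def rdk_step(xy: np.ndarray, coherence: float, direction_rad: float,
                 step: float = 0.02, rng=None) -> np.ndarray:
        """One frame update of a random-dot kinematogram: a 'coherence' fraction of
        dots steps in the signal direction, the rest step in random directions."""
        rng = rng or np.random.default_rng(0)
        n = len(xy)
        is_signal = rng.random(n) < coherence
        angles = np.where(is_signal, direction_rad, rng.uniform(0.0, 2.0 * np.pi, n))
        return xy + step * np.c_[np.cos(angles), np.sin(angles)]

    dots = np.random.default_rng(1).uniform(-1.0, 1.0, (100, 2))
    dots = rdk_step(dots, coherence=0.2, direction_rad=0.0)   # 20% coherent rightward motion
    ```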

  3. A Theoretical and Experimental Analysis of the Outside World Perception Process

    NASA Technical Reports Server (NTRS)

    Wewerinke, P. H.

    1978-01-01

    The outside scene is often an important source of information in manual control tasks; important examples are car driving and aircraft control. This paper deals with modelling the visual scene perception process on the basis of linear perspective geometry and relative motion cues. Model predictions, based on psychophysical threshold data from baseline experiments and from the literature, are compared with experimental data for a variety of visual approach tasks. Both the performance and workload results show that the model provides a meaningful description of the outside world perception process, with useful predictive capability.
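    The linear perspective geometry underlying the model is the standard pin-hole projection, sketched below as a generic textbook relation; the paper's specific model equations are not reproduced in the abstract.

    ```python
    def perspective_project(x: float, y: float, z: float, focal: float = 1.0):
        """Pin-hole perspective projection of a world point onto the image plane,
        the geometric basis of the visual-scene cues discussed above. A generic
        textbook relation, not the paper's own formulation."""
        if z <= 0:
            raise ValueError("point must be in front of the observer")
        return focal * x / z, focal * y / z

    # a point twice as far away projects to half the image-plane offset
    print(perspective_project(1.0, 0.5, 10.0), perspective_project(1.0, 0.5, 20.0))
    ```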

  4. Action Video Games Improve Direction Discrimination of Parafoveal Translational Global Motion but Not Reaction Times.

    PubMed

    Pavan, Andrea; Boyce, Matthew; Ghin, Filippo

    2016-10-01

    Playing action video games enhances visual motion perception. However, there is psychophysical evidence that action video games do not improve motion sensitivity for translational global motion patterns presented in the fovea. This study investigates global motion perception in action video game players and compares their performance to that of non-action video game players and non-video game players. Stimuli were random dot kinematograms presented in the parafovea. Observers discriminated the motion direction of a target random dot kinematogram presented in one of the four visual quadrants. Action video game players showed lower motion coherence thresholds than the other groups. However, when the task was performed at threshold, we did not find differences between groups in terms of distributions of reaction times. These results suggest that action video games improve visual motion sensitivity in the near periphery of the visual field, rather than response speed. © The Author(s) 2016.
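    Motion coherence thresholds of the kind reported above are usually estimated with an adaptive staircase. The sketch below implements a generic 3-down/1-up rule as an illustration; the actual threshold procedure used in the study is not described in the abstract.

    ```python
    def three_down_one_up(start, correct_seq, step=0.02, floor=0.01, ceiling=1.0):
        """Generic 3-down/1-up staircase over motion coherence, converging near 79%
        correct. Illustrative only; not the study's procedure."""
        level, run = start, 0
        levels = []
        for correct in correct_seq:
            levels.append(level)
            if correct:
                run += 1
                if run == 3:                                  # three in a row correct -> harder
                    level, run = max(floor, level - step), 0
            else:                                             # any error -> easier
                level, run = min(ceiling, level + step), 0
        return levels

    # track coherence over a short simulated run of correct/incorrect responses
    print(three_down_one_up(0.5, [True, True, True, False, True, True, True, True]))
    ```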

  5. Probing links between action perception and action production in Parkinson's disease using Fitts' law.

    PubMed

    Sakurada, Takeshi; Knoblich, Guenther; Sebanz, Natalie; Muramatsu, Shin-Ichi; Hirai, Masahiro

    2018-03-01

    Information on how subcortical brain structures encode the signals required to execute actions or to evaluate others' actions remains scant. To clarify this link, Fitts'-law tasks for perception and execution were tested in patients with Parkinson's disease (PD). For the perception task, participants were shown apparent-motion displays of a person moving their arm between two identical targets and reported whether they judged that the person could realistically move at the perceived speed without missing the targets. For the motor task, participants were required to touch the two targets as quickly and accurately as possible, similarly to the person observed in the perception task. In both tasks, the PD group exhibited, or attributed to others, significantly slower performance than the control group. However, in both groups, the relationships of perception and execution with task difficulty were exactly those predicted by Fitts' law. This suggests that, despite dysfunction of the subcortical region, motor simulation abilities reflected compensatory mechanisms in the PD group. Moreover, we found that patients with PD had difficulty switching their strategy for estimating others' actions when asked to do so. Copyright © 2018 Elsevier Ltd. All rights reserved.
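    Fitts' law itself predicts movement time from an index of difficulty, MT = a + b * log2(2D/W). The sketch below evaluates this relation with arbitrary illustrative intercept and slope values, not parameters estimated in the study.

    ```python
    import math

    def fitts_mt(distance: float, width: float, a: float = 0.1, b: float = 0.15) -> float:
        """Predicted movement time from Fitts' law, MT = a + b * ID, with
        ID = log2(2D/W). The intercept a and slope b here are arbitrary
        illustrative values."""
        index_of_difficulty = math.log2(2 * distance / width)
        return a + b * index_of_difficulty

    # halving the target width adds one ID unit, i.e. b seconds of movement time
    print(round(fitts_mt(0.24, 0.03), 3), round(fitts_mt(0.24, 0.015), 3))   # 0.7, 0.85
    ```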

  6. The Responsiveness of Biological Motion Processing Areas to Selective Attention Towards Goals

    PubMed Central

    Herrington, John; Nymberg, Charlotte; Faja, Susan; Price, Elinora; Schultz, Robert

    2012-01-01

    A growing literature indicates that visual cortex areas viewed as primarily responsive to exogenous stimuli are susceptible to top-down modulation by selective attention. The present study examines whether brain areas involved in biological motion perception are among these areas – particularly with respect to selective attention towards human movement goals. Fifteen participants completed a point-light biological motion study following a two-by-two factorial design, with one factor representing an exogenous manipulation of human movement goals (goal-directed versus random movement), and the other an endogenous manipulation (a goal identification task versus an ancillary color-change task). Both manipulations yielded increased activation in the human homologue of motion-sensitive area MT+ (hMT+) as well as the extrastriate body area (EBA). The endogenous manipulation was associated with increased right posterior superior temporal sulcus (STS) activation, whereas the exogenous manipulation was associated with increased activation in left posterior STS. Selective attention towards goals activated a portion of left hMT+/EBA only during the perception of purposeful movement, consistent with emerging theories associating this area with the matching of visual motion input to known goal-directed actions. The overall pattern of results indicates that attention towards the goals of human movement activates biological motion areas. Ultimately, selective attention may explain why some studies examining biological motion show activation in hMT+ and EBA, even when using control stimuli with comparable motion properties. PMID:22796987

  7. Time-Perception Network and Default Mode Network Are Associated with Temporal Prediction in a Periodic Motion Task

    PubMed Central

    Carvalho, Fabiana M.; Chaim, Khallil T.; Sanchez, Tiago A.; de Araujo, Draulio B.

    2016-01-01

    The updating of prospective internal models is necessary to accurately predict future observations. Uncertainty-driven internal model updating has been studied using a variety of perceptual paradigms, which have revealed engagement of frontal and parietal areas. In a distinct literature, studies on temporal expectations have also characterized a time-perception network, which relies on temporal orienting of attention. However, the updating of prospective internal models is highly dependent on temporal attention, since temporal attention must be reoriented according to the current environmental demands. In this study, we used functional magnetic resonance imaging (fMRI) to evaluate to what extent the continuous manipulation of temporal prediction would recruit update-related areas and time-perception network areas. We developed an exogenous temporal task that combines rhythm cueing and time-to-contact principles to generate implicit temporal expectation. Two patterns of motion were created: periodic (simple harmonic oscillation) and non-periodic (harmonic oscillation with variable acceleration). We found that non-periodic motion engaged the exogenous temporal orienting network, which includes the ventral premotor and inferior parietal cortices, and the cerebellum, as well as the presupplementary motor area, which has previously been implicated in internal model updating, and the motion-sensitive area MT+. Interestingly, we found a right-hemisphere preponderance suggesting the engagement of explicit timing mechanisms. We also show that the periodic motion condition, when compared to the non-periodic motion, activated a particular subset of the default-mode network (DMN) midline areas, including the left dorsomedial prefrontal cortex (DMPFC), anterior cingulate cortex (ACC), and bilateral posterior cingulate cortex/precuneus (PCC/PC). This suggests that the DMN plays a role in processing contextually expected information and supports recent evidence that the DMN may reflect the validation of prospective internal models and predictive control. Taken together, our findings suggest that continuous manipulation of temporal predictions engages representations of temporal prediction as well as task-independent updating of internal models. PMID:27313526
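    The two motion conditions can be illustrated with simple one-dimensional trajectories: a simple harmonic oscillation versus an oscillation whose instantaneous frequency drifts, giving variable acceleration. The sketch below uses assumed parameters, not the study's stimulus values.

    ```python
    import numpy as np

    def motion_profiles(duration_s: float = 10.0, f_hz: float = 0.5, dt: float = 0.01):
        """Two 1-D trajectories in the spirit of the conditions described above:
        a simple harmonic oscillation (periodic) and a harmonic oscillation whose
        instantaneous frequency drifts (non-periodic, variable acceleration).
        Parameters are illustrative assumptions."""
        t = np.arange(0.0, duration_s, dt)
        periodic = np.sin(2 * np.pi * f_hz * t)
        # slowly modulate the phase so acceleration varies irregularly over time
        phase = 2 * np.pi * f_hz * t + 0.8 * np.sin(2 * np.pi * 0.13 * t)
        non_periodic = np.sin(phase)
        return t, periodic, non_periodic
    ```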

  8. Evaluation of simulation motion fidelity criteria in the vertical and directional axes

    NASA Technical Reports Server (NTRS)

    Schroeder, Jeffery A.

    1993-01-01

    An evaluation of existing motion fidelity criteria was conducted on the NASA Ames Vertical Motion Simulator. Experienced test pilots flew single-axis repositioning tasks in both the vertical and the directional axes. Using a first-order approximation of a hovering helicopter, tasks were flown with variations only in the filters that attenuate the commands to the simulator motion system. These filters had second-order high-pass characteristics, and the variations were made in the filter gain and natural frequency. The variations spanned motion response characteristics from nearly full math-model motion to fixed-base. Between configurations, pilots recalibrated their motion response perception by flying the task with full motion. Pilots subjectively rated the motion fidelity of subsequent configurations relative to this full motion case, which was considered the standard for comparison. The results suggested that the existing vertical-axis criterion was accurate for combinations of gain and natural frequency changes. However, if only the gain or the natural frequency was changed, the rated motion fidelity was better than the criterion predicted. In the vertical axis, the objective and subjective results indicated that a larger gain reduction was tolerated than the existing criterion allowed. The limited data collected in the yaw axis revealed that pilots had difficulty in distinguishing among the variations in the pure yaw motion cues.
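    Motion-fidelity criteria of this kind are often expressed in terms of the washout filter's gain and phase distortion at a reference frequency near 1 rad/s. The sketch below computes that distortion for a second-order high-pass washout filter; the filter form and damping ratio are assumptions, and the criterion boundaries themselves are not reproduced here.

    ```python
    import numpy as np

    def washout_distortion(gain: float, wn: float, zeta: float = 0.7, w_ref: float = 1.0):
        """Magnitude and phase (deg) of a second-order high-pass washout filter
        H(s) = K s^2 / (s^2 + 2*zeta*wn*s + wn^2) at a reference frequency in rad/s.
        Assumed filter form and damping ratio; illustrative only."""
        s = 1j * w_ref
        h = gain * s**2 / (s**2 + 2 * zeta * wn * s + wn**2)
        return abs(h), float(np.degrees(np.angle(h)))

    print(washout_distortion(1.0, 0.5))   # ~0.97 gain and ~43 deg phase lead at 1 rad/s
    ```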

  9. Perception Measurement in Clinical Trials of Schizophrenia: Promising Paradigms From CNTRICS

    PubMed Central

    Green, Michael F.; Butler, Pamela D.; Chen, Yue; Geyer, Mark A.; Silverstein, Steven; Wynn, Jonathan K.; Yoon, Jong H.; Zemon, Vance

    2009-01-01

    The third meeting of the Cognitive Neuroscience Treatment Research to Improve Cognition in Schizophrenia (CNTRICS) focused on selecting promising measures for each of the cognitive constructs selected in the first CNTRICS meeting. In the domain of perception, the 2 constructs of interest were gain control and visual integration. CNTRICS received 5 task nominations for gain control and three task nominations for visual integration. The breakout group for perception evaluated the degree to which each of these tasks met prespecified criteria. For gain control, the breakout group for perception believed that 2 of the tasks (prepulse inhibition of startle and mismatch negativity) were already mature and in the process of being incorporated into multisite clinical trials. However, the breakout group recommended that steady-state visual-evoked potentials be combined with contrast sensitivity to magnocellular vs parvocellular biased stimuli and that this combined task and the contrast-contrast effect task be recommended for translation for use in clinical trial contexts in schizophrenia research. For visual integration, the breakout group recommended the Contour Integration and Coherent Motion tasks for translation for use in clinical trials. This manuscript describes the ways in which each of these tasks met the criteria used by the breakout group to evaluate and recommend tasks for further development. PMID:19023123

  10. Translation and articulation in biological motion perception.

    PubMed

    Masselink, Jana; Lappe, Markus

    2015-08-01

    Recent models of biological motion processing focus on the articulational aspect of human walking, investigated with point-light figures walking in place. However, in real human walking, the change in the position of the limbs relative to each other (referred to as articulation) results in a change of body location in space over time (referred to as translation). In order to examine the role of this translational component in the perception of biological motion, we designed three psychophysical experiments of facing (leftward/rightward) and articulation discrimination (forward/backward and leftward/rightward) of a point-light walker viewed from the side, varying translation direction (relative to articulation direction), the amount of local image motion, and trial duration. In a further set of forward/backward and leftward/rightward articulation tasks, we additionally tested the influence of translational speed, including catch trials without articulation. We found a perceptual bias in translation direction in all three discrimination tasks. In the case of facing discrimination, the bias was limited to short stimulus presentations. Our results suggest an interaction of articulation analysis with the processing of translational motion, leading to best articulation discrimination when translational direction and speed match the articulation. Moreover, we conclude that the global motion of the center-of-mass of the dot pattern is more relevant to the processing of translation than the local motion of the dots. Our findings highlight that translation is a relevant cue that should be integrated in models of human motion detection.
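    The distinction drawn above can be expressed directly on the stimulus coordinates: articulation lives in the walker-centred limb trajectories, while translation adds a body displacement through space. The hypothetical helper below adds such a translational component to an articulating point-light sequence.

    ```python
    import numpy as np

    def add_translation(frames: np.ndarray, speed: float, dt: float) -> np.ndarray:
        """Add a horizontal translational component to an articulating point-light
        sequence. `frames` is (n_frames, n_points, 2) of walker-centred coordinates
        (articulation only); the returned array also shifts the body through space
        (articulation + translation). Hypothetical helper for illustration."""
        n_frames = frames.shape[0]
        offsets = speed * dt * np.arange(n_frames)        # cumulative x-offset per frame
        out = frames.copy()
        out[:, :, 0] += offsets[:, None]                  # broadcast offset over all points
        return out
    ```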

  11. The responsiveness of biological motion processing areas to selective attention towards goals.

    PubMed

    Herrington, John; Nymberg, Charlotte; Faja, Susan; Price, Elinora; Schultz, Robert

    2012-10-15

    A growing literature indicates that visual cortex areas viewed as primarily responsive to exogenous stimuli are susceptible to top-down modulation by selective attention. The present study examines whether brain areas involved in biological motion perception are among these areas, particularly with respect to selective attention towards human movement goals. Fifteen participants completed a point-light biological motion study following a two-by-two factorial design, with one factor representing an exogenous manipulation of human movement goals (goal-directed versus random movement), and the other an endogenous manipulation (a goal identification task versus an ancillary color-change task). Both manipulations yielded increased activation in the human homologue of motion-sensitive area MT+ (hMT+) as well as the extrastriate body area (EBA). The endogenous manipulation was associated with increased right posterior superior temporal sulcus (STS) activation, whereas the exogenous manipulation was associated with increased activation in left posterior STS. Selective attention towards goals activated a portion of left hMT+/EBA only during the perception of purposeful movement, consistent with emerging theories associating this area with the matching of visual motion input to known goal-directed actions. The overall pattern of results indicates that attention towards the goals of human movement activates biological motion areas. Ultimately, selective attention may explain why some studies examining biological motion show activation in hMT+ and EBA, even when using control stimuli with comparable motion properties. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. Audio–visual interactions for motion perception in depth modulate activity in visual area V3A

    PubMed Central

    Ogawa, Akitoshi; Macaluso, Emiliano

    2013-01-01

    Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414

  13. Auditory perception of a human walker.

    PubMed

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise them as a person walking by: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity to three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment, participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  14. Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion

    PubMed Central

    Fajen, Brett R.; Matthis, Jonathan S.

    2013-01-01

    Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983
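
    The reasoning in this abstract rests on a simple additive decomposition: the retinal motion of an object during self-motion is the sum of a self-motion (flow) component and an object-motion component, so the object-motion component can be recovered by subtracting an estimate of the self-motion flow. The sketch below is a minimal illustration of that idea (sometimes called flow parsing), not the authors' model; all velocity vectors and the gain value are invented.

      # Minimal illustration of recovering object motion by subtracting predicted
      # self-motion flow. Vectors are hypothetical 2-D image velocities in deg/s.
      import numpy as np

      retinal_flow_at_object = np.array([3.0, 1.0])      # measured motion of the object on the retina
      predicted_self_motion_flow = np.array([2.5, 0.0])  # flow expected from self-motion at that location

      full_subtraction = retinal_flow_at_object - predicted_self_motion_flow
      print("object motion, full compensation:", full_subtraction)

      # The abstract reports effects smaller than full reliance on visual self-motion
      # information; partial compensation can be sketched with a gain g < 1 (assumed).
      g = 0.7
      partial_subtraction = retinal_flow_at_object - g * predicted_self_motion_flow
      print("object motion, partial compensation:", partial_subtraction)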

  15. Reversed stereo depth and motion direction with anti-correlated stimuli.

    PubMed

    Read, J C; Eagle, R A

    2000-01-01

    We used anti-correlated stimuli to compare the correspondence problem in stereo and motion. Subjects performed a two-interval forced-choice disparity/motion direction discrimination task for different displacements. For anti-correlated 1d band-pass noise, we found weak reversed depth and motion. With 2d anti-correlated stimuli, stereo performance was impaired, but the perception of reversed motion was enhanced. We can explain the main features of our data in terms of channels tuned to different spatial frequencies and orientation. We suggest that a key difference between the solution of the correspondence problem by the motion and stereo systems concerns the integration of information at different orientations.

  16. Developmental changes of misconception and misperception of projectiles.

    PubMed

    Kim, In-Kyeong

    2012-12-01

    This study investigated the developmental changes in perceptual and cognitive commonsense physical knowledge. Children 4 to 9 years old (N = 156; 79 boys, 77 girls) participated. Each child was asked to predict the landing positions of balls that rolled down and fell off a virtual ramp and to choose the most natural-looking motion from different projectile motions depicted. The landing position of the most natural-looking projectile was compared with the predicted landing position and also compared with the actual landing position. The results showed that children predicted the ball's landing position closer to the ramp than the actual position. Children also chose the depiction in which the ball fell closer to the ramp than the accurate position, although the error in the prediction task was larger than in the perception task and decreased with age. The results indicated the developmental convergence of explicit reasoning and implicit perception, which suggests a single knowledge system with representational re-description.

  17. Human activity discrimination for maritime application

    NASA Astrophysics Data System (ADS)

    Boettcher, Evelyn; Deaver, Dawne M.; Krapels, Keith

    2008-04-01

    The US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) is investigating how motion affects the target acquisition model (NVThermIP) sensor performance estimates. This paper looks specifically at estimating sensor performance for the task of discriminating human activities on watercraft, and was sponsored by the Office of Naval Research (ONR). Traditionally, sensor models were calibrated using still images. While that approach is sufficient for static targets, video allows one to use motion cues to aid in discerning the type of human activity more quickly and accurately. This, in turn, will affect estimated sensor performance, and these effects are measured in order to calibrate current target acquisition models for this task. The study employed an eleven-alternative forced-choice (11AFC) human perception experiment to measure the task difficulty of discriminating unique human activities on watercraft. A mid-wave infrared camera was used to collect video at night. A description of the construction of this experiment is given, including the data collection, image processing, perception testing, and how contrast was defined for video. These results are applicable to evaluating sensor field performance for Anti-Terrorism and Force Protection (AT/FP) tasks for the U.S. Navy.

  18. Inattention blindness to motion in area MT

    PubMed Central

    Harrison, Ian T.; Weiner, Katherine F.; Ghose, Geoffrey M.

    2013-01-01

    Subjects naturally form and use expectations to solve familiar tasks, but the accuracy of these expectations, and the neuronal mechanisms by which these expectations enhance behavior, are unclear. We trained animals (Macaca mulatta) in a challenging perceptual task in which the likelihood of a very brief pulse of motion was consistently modulated over time and space. Pulse likelihood had dramatic effects on behavior: unexpected pulses were nearly invisible to the animals. To examine the neuronal basis of such inattention blindness, we recorded from single neurons in the middle temporal (MT) area, an area related to motion perception. Fluctuations in how reliably MT neurons both signaled stimulus events and predicted behavioral choices were highly correlated with changes in performance over the course of individual trials. A simple neuronal pooling model reveals that the dramatic behavioral effects of attention in this task can be completely explained by changes in the reliability of a small number of MT neurons. PMID:23658178
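
    To make the pooling idea concrete, here is a toy sketch (an illustration only; the pool size, noise level, and criterion are assumptions, not the authors' fitted model): a motion pulse is "detected" when the average response of a small pool of noisy MT-like neurons exceeds a criterion, and weakening the stimulus-driven signal in the pool, as the recordings suggest happens for unexpected pulses, sharply reduces the hit rate.

      # Toy pooling model: average a small pool of noisy responses and threshold it.
      import numpy as np

      rng = np.random.default_rng(0)

      def hit_rate(signal_gain, n_neurons=10, noise_sd=1.0, criterion=0.5, n_trials=5000):
          responses = signal_gain + noise_sd * rng.standard_normal((n_trials, n_neurons))
          pooled = responses.mean(axis=1)          # simple average across the pool
          return (pooled > criterion).mean()       # proportion of trials detected

      print("expected pulse   (strong signal):", hit_rate(1.0))
      print("unexpected pulse (weak signal):  ", hit_rate(0.3))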

  19. Altered perceptual sensitivity to kinematic invariants in Parkinson's disease.

    PubMed

    Dayan, Eran; Inzelberg, Rivka; Flash, Tamar

    2012-01-01

    Ample evidence exists for coupling between action and perception in neurologically healthy individuals, yet the precise nature of the internal representations shared between these domains remains unclear. One experimentally derived view is that the invariant properties and constraints characterizing movement generation are also manifested during motion perception. One prominent motor invariant is the "two-thirds power law," describing the strong relation between the kinematics of motion and the geometrical features of the path followed by the hand during planar drawing movements. The two-thirds power law not only characterizes various movement generation tasks but also seems to constrain visual perception of motion. The present study aimed to assess whether motor invariants, such as the two-thirds power law, also constrain motion perception in patients with Parkinson's disease (PD). Patients with PD and age-matched controls were asked to observe the movement of a light spot rotating on an elliptical path and to modify its velocity until it appeared to move most uniformly. As in previous reports, controls tended to choose movements close to obeying the two-thirds power law as most uniform. Patients with PD displayed a more variable behavior, choosing, on average, movements closer to but not equal to a constant velocity. Our results thus demonstrate impairments in how the two-thirds power law constrains motion perception in patients with PD, where this relationship between velocity and curvature appears to be preserved but scaled down. Recent hypotheses on the role of the basal ganglia in motor timing may explain these irregularities. Alternatively, these impairments in perception of movement may reflect similar deficits in motor production.
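
    For reference, the two-thirds power law mentioned above is usually written as A = K * C**(2/3), with A the angular velocity and C the curvature of the path, which is equivalent to a tangential velocity v = K * kappa**(-1/3). The snippet below is an illustrative sketch (the ellipse dimensions and gain are assumed values, not the study's stimulus parameters) showing the law-compliant speed profile around an elliptical path like the rotating-spot display used in the experiment.

      # Speed profile obeying the two-thirds power law on an ellipse.
      import numpy as np

      a, b, K = 2.0, 1.0, 1.0                 # semi-axes and velocity gain (assumed)
      theta = np.linspace(0.0, 2.0 * np.pi, 9)

      # Curvature of an ellipse parameterized as (a*cos(theta), b*sin(theta)).
      kappa = (a * b) / (a**2 * np.sin(theta)**2 + b**2 * np.cos(theta)**2) ** 1.5
      v = K * kappa ** (-1.0 / 3.0)           # two-thirds power law tangential speed

      for t, k, s in zip(theta, kappa, v):
          print(f"theta = {t:4.2f}  curvature = {k:5.3f}  speed = {s:5.3f}")

      # Speed is lowest where curvature is highest (the ends of the major axis),
      # which is the velocity profile observers tend to judge as "most uniform".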

  20. Social forces for team coordination in ball possession game

    NASA Astrophysics Data System (ADS)

    Yokoyama, Keiko; Shima, Hiroyuki; Fujii, Keisuke; Tabuchi, Noriyuki; Yamamoto, Yuji

    2018-02-01

    Team coordination is a basic human behavioral trait observed in many real-life communities. To promote teamwork, it is important to cultivate social skills that elicit team coordination. In the present work, we consider which social skills are indispensable for individuals performing a ball possession game in soccer. We develop a simple social force model that describes the synchronized motion of offensive players. Comparing the simulation results with experimental observations, we found that the cooperative social force, a measure of perception skill, plays the most important role in reproducing the harmonized collective motion of experienced players in the task. We further developed an experimental tool that facilitates real players' perception of interpersonal distance, and found that the tool improves novice players' motion as if the cooperative social force were imposed.
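
    As rough orientation only, a "social force" model of the kind named above treats each player's acceleration as a sum of interaction forces. The sketch below is a generic, assumed formulation (a spring-like cooperative term pulling teammates toward a preferred interpersonal distance); it is not the specific model developed in the paper, and all parameters are invented for illustration.

      # Generic cooperative-force sketch: attract teammates that are too far apart,
      # repel those that are too close, relative to a preferred distance.
      import numpy as np

      def cooperative_force(positions, preferred_dist=5.0, k=0.8):
          """Return per-player acceleration from a distance-keeping cooperative force."""
          acc = np.zeros_like(positions)
          n = len(positions)
          for i in range(n):
              for j in range(n):
                  if i == j:
                      continue
                  d = positions[j] - positions[i]
                  dist = np.linalg.norm(d)
                  acc[i] += k * (dist - preferred_dist) * d / dist
          return acc

      players = np.array([[0.0, 0.0], [3.0, 0.0], [1.5, 6.0]])
      print(cooperative_force(players))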

  1. Near-optimal integration of facial form and motion.

    PubMed

    Dobs, Katharina; Ma, Wei Ji; Reddy, Leila

    2017-09-08

    Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it has been well established that humans integrate low-level cues near-optimally, weighting each cue in proportion to its relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
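
    The "optimal" integration referred to above is the standard reliability-weighted cue-combination rule, in which each cue is weighted by its inverse variance. The snippet below states that textbook rule as a hedged sketch; the form and motion estimates and their variances are invented numbers, not the study's data.

      # Reliability-weighted (inverse-variance) cue integration.
      import numpy as np

      def integrate(estimates, variances):
          estimates = np.asarray(estimates, dtype=float)
          reliabilities = 1.0 / np.asarray(variances, dtype=float)
          weights = reliabilities / reliabilities.sum()
          combined = float((weights * estimates).sum())
          combined_variance = 1.0 / reliabilities.sum()
          return combined, combined_variance

      # Hypothetical identity-strength estimates from facial form and facial motion.
      form_estimate, form_variance = 0.8, 0.04
      motion_estimate, motion_variance = 0.5, 0.16
      print(integrate([form_estimate, motion_estimate], [form_variance, motion_variance]))
      # The combined estimate is pulled toward the more reliable cue (form here) and
      # its variance is lower than that of either cue alone.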

  2. The Reactivation of Motion influences Size Categorization in a Visuo-Haptic Illusion.

    PubMed

    Rey, Amandine E; Dabic, Stephanie; Versace, Remy; Navarro, Jordan

    2016-09-01

    People simulate themselves moving when they view a picture, read a sentence, or simulate a situation that involves motion. The simulation of motion has often been studied in conceptual tasks such as language comprehension. However, most of these studies investigated the direct influence of motion simulation on tasks inducing motion. This article investigates whether a motion induced by the reactivation of a dynamic picture can influence a task that did not require motion processing. In a first phase, a dynamic picture and a static picture were systematically presented with a vibrotactile stimulus (high or low frequency). The second phase of the experiment used a priming paradigm in which a vibrotactile stimulus was presented alone and followed by pictures of objects. Participants had to categorize objects as large or small relative to their typical size (simulated size). Results showed that when the target object was preceded by the vibrotactile stimulus previously associated with the dynamic picture, participants perceived all the objects as larger and categorized them more quickly when the objects were typically "large" and more slowly when the objects were typically "small." In light of embodied cognition theories, this bias in participants' perception is assumed to be caused by an induced forward motion generated by the reactivated dynamic picture, which affects simulation of the size of the objects.

  3. The role of eye movements in depth from motion parallax during infancy

    PubMed Central

    Nawrot, Elizabeth; Nawrot, Mark

    2013-01-01

    Motion parallax is a motion-based, monocular depth cue that uses an object's relative motion and velocity as a cue to relative depth. In adults, and in monkeys, a smooth pursuit eye movement signal is used to disambiguate the depth-sign provided by these relative motion cues. The current study investigates infants' perception of depth from motion parallax and the development of two oculomotor functions, smooth pursuit and the ocular following response (OFR) eye movements. Infants 8 to 20 weeks of age were presented with three tasks in a single session: depth from motion parallax, smooth pursuit tracking, and OFR to translation. The development of smooth pursuit was significantly related to age, as was sensitivity to motion parallax. OFR eye movements also corresponded to both age and smooth pursuit gain, with groups of infants demonstrating asymmetric function in both types of eye movements. These results suggest that the development of the eye movement system may play a crucial role in the sensitivity to depth from motion parallax in infancy. Moreover, describing the development of these oculomotor functions in relation to depth perception may aid in the understanding of certain visual dysfunctions. PMID:24353309

  4. Do rhesus monkeys (Macaca mulatta) perceive illusory motion?

    PubMed

    Agrillo, Christian; Gori, Simone; Beran, Michael J

    2015-07-01

    During the last decade, visual illusions have been used repeatedly to understand similarities and differences in visual perception of human and non-human animals. However, nearly all studies have focused only on illusions not related to motion perception, and to date, it is unknown whether non-human primates perceive any kind of motion illusion. In the present study, we investigated whether rhesus monkeys (Macaca mulatta) perceived one of the most popular motion illusions in humans, the Rotating Snake illusion (RSI). To this end, we set up four experiments. In Experiment 1, subjects initially were trained to discriminate static versus dynamic arrays. Once they reached the learning criterion, they underwent probe trials in which we presented the RSI and a control stimulus identical in overall configuration except that the order of the luminance sequence was changed so that humans perceive no apparent motion. The overall performance of monkeys indicated that they spontaneously classified the RSI as a dynamic array. Subsequently, we tested adult humans in the same task with the aim of directly comparing the performance of human and non-human primates (Experiment 2). In Experiment 3, we found that monkeys can be successfully trained to discriminate between the RSI and a control stimulus. Experiment 4 showed that a simple change in luminance sequence in the two arrays could not explain the performance reported in Experiment 3. These results suggest that some rhesus monkeys display a human-like perception of this motion illusion, raising the possibility that the neurocognitive systems underlying motion perception may be similar between human and non-human primates.

  5. Shared motion signals for human perceptual decisions and oculomotor actions

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Krauzlis, Richard J.

    2003-01-01

    A fundamental question in primate neurobiology is to understand to what extent motor behaviors are driven by shared neural signals that also support conscious perception or by independent subconscious neural signals dedicated to motor control. Although it has clearly been established that cortical areas involved in processing visual motion support both perception and smooth pursuit eye movements, it remains unknown whether the same or different sets of neurons within these structures perform these two functions. Examination of the trial-by-trial variation in human perceptual and pursuit responses during a simultaneous psychophysical and oculomotor task reveals that the direction signals for pursuit and perception are not only similar on average but also co-vary on a trial-by-trial basis, even when performance is at or near chance and the decisions are determined largely by neural noise. We conclude that the neural signal encoding the direction of target motion that drives steady-state pursuit and supports concurrent perceptual judgments emanates from a shared ensemble of cortical neurons.

  6. Visual Motion Processing Subserves Faster Visuomotor Reaction in Badminton Players.

    PubMed

    Hülsdünker, Thorben; Strüder, Heiko K; Mierau, Andreas

    2017-06-01

    Athletes participating in ball or racquet sports have to respond to visual stimuli under critical time pressure. Previous studies used visual contrast stimuli to determine visual perception and visuomotor reaction in athletes and nonathletes; however, ball and racquet sports are characterized by motion rather than contrast visual cues. Because visual contrast and motion signals are processed in different cortical regions, this study aimed to determine differences in perception and processing of visual motion between athletes and nonathletes. Twenty-five skilled badminton players and 28 age-matched nonathletic controls participated in this study. Using a 64-channel EEG system, we investigated visual motion perception/processing in the motion-sensitive middle temporal (MT) cortical area in response to radial motion of different velocities. In a simple visuomotor reaction task, visuomotor transformation in Brodmann area 6 (BA6) and BA4 as well as muscular activation (EMG onset) and visuomotor reaction time (VMRT) were investigated. Stimulus- and response-locked potentials were determined to differentiate between perceptual and motor-related processes. As compared with nonathletes, athletes showed earlier EMG onset times (217 vs 178 ms, P < 0.001), accompanied by a faster VMRT (274 vs 243 ms, P < 0.001). Furthermore, athletes showed an earlier stimulus-locked peak activation of MT (200 vs 182 ms, P = 0.002) and BA6 (161 vs 137 ms, P = 0.009). Response-locked peak activation in MT was later in athletes (-7 vs 26 ms, P < 0.001), whereas no group differences were observed in BA6 and BA4. Multiple regression analyses with stimulus- and response-locked cortical potentials predicted EMG onset (r = 0.83) and VMRT (r = 0.77). The athletes' superior visuomotor performance in response to visual motion is primarily related to visual perception and, to a minor degree, to motor-related processes.

  7. The influence of sleep deprivation and oscillating motion on sleepiness, motion sickness, and cognitive and motor performance.

    PubMed

    Kaplan, Janna; Ventura, Joel; Bakshi, Avijit; Pierobon, Alberto; Lackner, James R; DiZio, Paul

    2017-01-01

    Our goal was to determine how sleep deprivation, nauseogenic motion, and a combination of motion and sleep deprivation affect cognitive vigilance, visual-spatial perception, motor learning and retention, and balance. We exposed four groups of subjects to different combinations of normal 8-h sleep or 4-h sleep for two nights, combined with testing under stationary conditions or during 0.28-Hz horizontal linear oscillation. On the two days following controlled sleep, all subjects underwent four test sessions per day that included evaluations of fatigue, motion sickness, vigilance, perceptual discrimination, perceptual learning, motor performance and learning, and balance. Sleep loss and exposure to linear oscillation had additive or multiplicative effects on sleepiness, motion sickness severity, and decreases in vigilance, perceptual discrimination, and learning. Sleep loss also decelerated the rate of adaptation to motion sickness over repeated sessions. Sleep loss degraded the capacity to compensate for novel robotically induced perturbations of reaching movements but did not adversely affect adaptive recovery of accurate reaching. Overall, tasks requiring substantial attention to cognitive and motor demands were degraded more than tasks that were more automatic. Our findings indicate that predicting performance needs to take into account, in addition to sleep loss, the attentional demands and novelty of tasks, the motion environment in which individuals will be performing, and their prior susceptibility to motion sickness during exposure to provocative motion stimulation. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Man-systems evaluation of moving base vehicle simulation motion cues. [human acceleration perception involving visual feedback]

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, M.; Brye, R. G.

    1974-01-01

    A motion cue investigation program is reported that deals with human factors aspects of high fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general-purpose simulator is used to test the contradictory cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The simulator provides varying acceleration levels. The method of forced choice is based on the theory of signal detectability.
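
    For orientation, the signal-detectability framing mentioned above links forced-choice performance to a sensitivity index. The snippet below shows the standard two-alternative forced-choice conversion d' = sqrt(2) * z(Pc) as a hedged illustration; the proportion-correct values are hypothetical and the report's actual analysis is not reproduced here.

      # Convert 2AFC proportion correct to d' using the inverse normal CDF.
      from math import sqrt
      from statistics import NormalDist

      def dprime_2afc(proportion_correct):
          return sqrt(2) * NormalDist().inv_cdf(proportion_correct)

      for pc in (0.60, 0.75, 0.90):
          print(f"Pc = {pc:.2f} -> d' = {dprime_2afc(pc):.2f}")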

  9. As time passes by: Observed motion-speed and psychological time during video playback.

    PubMed

    Nyman, Thomas Jonathan; Karlsson, Eric Per Anders; Antfolk, Jan

    2017-01-01

    Research shows that psychological time (i.e., the subjective experience and assessment of the passage of time) is malleable and that the central nervous system re-calibrates temporal information in accordance with situational factors so that psychological time flows slower or faster. Observed motion-speed (e.g., the visual perception of a rolling ball) is an important situational factor which influences the production of time estimates. The present study examines previous findings showing that observed slow and fast motion-speed during video playback respectively results in over- and underproductions of intervals of time. Here, we investigated through three separate experiments: a) the main effect of observed motion-speed during video playback on a time production task and b) the interactive effect of the frame rate (frames per second; fps) and motion-speed during video playback on a time production task. No main effect of video playback-speed or interactive effect between video playback-speed and frame rate was found on time production.

  10. As time passes by: Observed motion-speed and psychological time during video playback

    PubMed Central

    Karlsson, Eric Per Anders; Antfolk, Jan

    2017-01-01

    Research shows that psychological time (i.e., the subjective experience and assessment of the passage of time) is malleable and that the central nervous system re-calibrates temporal information in accordance with situational factors so that psychological time flows slower or faster. Observed motion-speed (e.g., the visual perception of a rolling ball) is an important situational factor which influences the production of time estimates. The present study examines previous findings showing that observed slow and fast motion-speed during video playback respectively results in over- and underproductions of intervals of time. Here, we investigated through three separate experiments: a) the main effect of observed motion-speed during video playback on a time production task and b) the interactive effect of the frame rate (frames per second; fps) and motion-speed during video playback on a time production task. No main effect of video playback-speed or interactive effect between video playback-speed and frame rate was found on time production. PMID:28614353

  11. Motion facilitates face perception across changes in viewpoint and expression in older adults.

    PubMed

    Maguinness, Corrina; Newell, Fiona N

    2014-12-01

    Faces are inherently dynamic stimuli. However, face perception in younger adults appears to be mediated by the ability to extract structural cues from static images and a benefit of motion is inconsistent. In contrast, static face processing is poorer and more image-dependent in older adults. We therefore compared the role of facial motion in younger and older adults to assess whether motion can enhance perception when static cues are insufficient. In our studies, older and younger adults learned faces presented in motion or in a sequence of static images, containing rigid (viewpoint) or nonrigid (expression) changes. Immediately following learning, participants matched a static test image to the learned face which varied by viewpoint (Experiment 1) or expression (Experiment 2) and was either learned or novel. First, we found an age effect with better face matching performance in younger than in older adults. However, we observed face matching performance improved in the older adult group, across changes in viewpoint and expression, when faces were learned in motion relative to static presentation. There was no benefit for facial (nonrigid) motion when the task involved matching inverted faces (Experiment 3), suggesting that the ability to use dynamic face information for the purpose of recognition reflects motion encoding which is specific to upright faces. Our results suggest that ageing may offer a unique insight into how dynamic cues support face processing, which may not be readily observed in younger adults' performance. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  12. Characterization of a laboratory model of computer mouse use - applications for studying risk factors for musculoskeletal disorders.

    PubMed

    Flodgren, G; Heiden, M; Lyskov, E; Crenshaw, A G

    2007-03-01

    In the present study, we assessed the wrist kinetics (range of motion, mean position, velocity and mean power frequency in radial/ulnar deviation, flexion/extension, and pronation/supination) associated with performing a mouse-operated computerized task involving painting rectangles on a computer screen. Furthermore, we evaluated the effects of the painting task on subjective perception of fatigue and wrist position sense. The results showed that the painting task required constrained wrist movements, and repetitive movements of about the same magnitude as those performed in mouse-operated design tasks. In addition, the painting task induced a perception of muscle fatigue in the upper extremity (Borg CR-scale: 3.5, p<0.001) and caused a reduction in the position sense accuracy of the wrist (error before: 4.6 degrees , error after: 5.6 degrees , p<0.05). This standardized painting task appears suitable for studying relevant risk factors, and therefore it offers a potential for investigating the pathophysiological mechanisms behind musculoskeletal disorders related to computer mouse use.

  13. Pitch body orientation influences the perception of self-motion direction induced by optic flow.

    PubMed

    Bourrelly, A; Vercher, J-L; Bringoux, L

    2010-10-04

    We studied the effect of static pitch body tilts on the perception of self-motion direction induced by a visual stimulus. Subjects were seated in front of a screen on which was projected a 3D cluster of moving dots visually simulating a forward motion of the observer with upward or downward directional biases (relative to a true earth horizontal direction). The subjects were tilted at various angles relative to gravity and were asked to estimate the direction of the perceived motion (nose-up, as during take-off, or nose-down, as during landing). The data showed that body orientation proportionally affected the amount of error in the reported perceived direction (by 40% of body tilt magnitude in a range of +/-20 degrees), and these errors were systematically in the direction of body tilt. As a consequence, the same visual stimulus was interpreted differently depending on body orientation. Although the subjects were required to perform the task in a geocentric reference frame (i.e., relative to a gravity-related direction), they were clearly influenced by egocentric references. These results suggest that the perception of self-motion is not elaborated within an exclusive reference frame (either egocentric or geocentric) but rather results from the combined influence of both. (c) 2010 Elsevier Ireland Ltd. All rights reserved.

  14. Human Guidance Behavior Decomposition and Modeling

    NASA Astrophysics Data System (ADS)

    Feit, Andrew James

    Trained humans are capable of high performance, adaptable, and robust first-person dynamic motion guidance behavior. This behavior is exhibited in a wide variety of activities such as driving, piloting aircraft, skiing, biking, and many others. Human performance in such activities far exceeds the current capability of autonomous systems in terms of adaptability to new tasks, real-time motion planning, robustness, and trading safety for performance. The present work investigates the structure of human dynamic motion guidance that enables these performance qualities. This work uses a first-person experimental framework that presents a driving task to the subject, measuring control inputs, vehicle motion, and operator visual gaze movement. The resulting data is decomposed into subspace segment clusters that form primitive elements of action-perception interactive behavior. Subspace clusters are defined by both agent-environment system dynamic constraints and operator control strategies. A key contribution of this work is to define transitions between subspace cluster segments, or subgoals, as points where the set of active constraints, either system or operator defined, changes. This definition provides necessary conditions to determine transition points for a given task-environment scenario that allow a solution trajectory to be planned from known behavior elements. In addition, human gaze behavior during this task contains predictive behavior elements, indicating that the identified control modes are internally modeled. Based on these ideas, a generative, autonomous guidance framework is introduced that efficiently generates optimal dynamic motion behavior in new tasks. The new subgoal planning algorithm is shown to generate solutions to certain tasks more quickly than existing approaches currently used in robotics.

  15. Three-dimensional high-definition neuroendoscopic surgery: a controlled comparative laboratory study with two-dimensional endoscopy and clinical application.

    PubMed

    Inoue, Daisuke; Yoshimoto, Koji; Uemura, Munenori; Yoshida, Masaki; Ohuchida, Kenoki; Kenmotsu, Hajime; Tomikawa, Morimasa; Sasaki, Tomio; Hashizume, Makoto

    2013-11-01

    The purpose of this research was to investigate, in a comparative study, the usefulness of three-dimensional (3D) endoscopy relative to two-dimensional (2D) endoscopy in neuroendoscopic surgery, and to test its clinical application. Forty-three examinees were divided into three groups according to their endoscopic experience: novice, beginner, or expert. Examinees performed three separate tasks using 3D and 2D endoscopy. A recently developed 3D high-definition (HD) neuroendoscope, 4.7 mm in diameter (Shinko Optical Co., Ltd., Tokyo, Japan), was used. For one of the three tasks, we built a full-sized skull model of acrylic-based plastic using a 3D printer and a patient's thin-slice computed tomography data, and evaluated the execution time and total path length of the tip of the pointer using an optical tracking system. Sixteen patients underwent endoscopic transnasal transsphenoidal pituitary surgery using both 3D and 2D endoscopy. Horizontal motion was evaluated using task 1, and anteroposterior motion was evaluated with task 3. In task 3, execution time and total path length with the 3D system were significantly shorter than with the 2D system in both the novice and beginner groups (p < 0.05), although no significant difference between the 2D and 3D systems was seen in task 1. In both the novice and beginner groups, the advantage of the 3D system was greater for depth perception than for horizontal motion. No difference was seen in the expert group in this regard. The 3D HD endoscope was used for the pituitary surgery and was found to be very useful for identifying the spatial relationship of the carotid arteries and bony structures. The use of a 3D neuroendoscope improved depth perception and task performance. Our results suggest that 3D endoscopes could shorten the learning curve of young neurosurgeons and play an important role in both general surgery and neurosurgery. Georg Thieme Verlag KG Stuttgart · New York.

  16. Perceptual switch rates with ambiguous structure-from-motion figures in bipolar disorder.

    PubMed

    Krug, Kristine; Brunskill, Emma; Scarna, Antonina; Goodwin, Guy M; Parker, Andrew J

    2008-08-22

    Slowing of the rate at which a rivalrous percept switches from one configuration to another has been suggested as a potential trait marker for bipolar disorder. We measured perceptual alternations for a bistable, rotating, structure-from-motion cylinder in bipolar and control participants. In a control task, binocular depth rendered the direction of cylinder rotation unambiguous to monitor participants' performance and attention during the experimental task. A particular direction of rotation was perceptually stable, on average, for 33.5s in participants without psychiatric diagnosis. Euthymic, bipolar participants showed a slightly slower rate of switching between the two percepts (percept duration 42.3s). Under a parametric analysis of the best-fitting model for individual participants, this difference was statistically significant. However, the variability within groups was high, so this difference in average switch rates was not big enough to serve as a trait marker for bipolar disorder. We also found that low-level visual capacities, such as stereo threshold, influence perceptual switch rates. We suggest that there is no single brain location responsible for perceptual switching in all different ambiguous figures and that perceptual switching is generated by the actions of local cortical circuitry.

  17. Real-time tracking using stereo and motion: Visual perception for space robotics

    NASA Technical Reports Server (NTRS)

    Nishihara, H. Keith; Thomas, Hans; Huber, Eric; Reid, C. Ann

    1994-01-01

    The state-of-the-art in computing technology is rapidly attaining the performance necessary to implement many early vision algorithms at real-time rates. This new capability is helping to accelerate progress in vision research by improving our ability to evaluate the performance of algorithms in dynamic environments. In particular, we are becoming much more aware of the relative stability of various visual measurements in the presence of camera motion and system noise. This new processing speed is also allowing us to raise our sights toward accomplishing much higher-level processing tasks, such as figure-ground separation and active object tracking, in real-time. This paper describes a methodology for using early visual measurements to accomplish higher-level tasks; it then presents an overview of the high-speed accelerators developed at Teleos to support early visual measurements. The final section describes the successful deployment of a real-time vision system to provide visual perception for the Extravehicular Activity Helper/Retriever robotic system in tests aboard NASA's KC135 reduced gravity aircraft.

  18. Perception and understanding of intentions and actions: does gender matter?

    PubMed

    Pavlova, Marina

    2009-01-09

    Perception of intentions and dispositions of others through body motion, body language, gestures and actions is of immense importance for a variety of daily-life situations and adaptive social behavior. This ability is of particular value because of the potential discrepancy between verbal and non-verbal communication levels. Recent data shows that some aspects of visual social perception are gender dependent. The present study asks whether and, if so, how the ability for perception and understanding of others' intentions and actions depends on perceivers' gender. With this purpose in mind, a visual event arrangement (EA) task was administered to female and male participants of two groups, adolescents aged 13-16 years and young adults. The main outcome of the study shows no difference in performance on the EA task between female and male participants in both groups. The findings are discussed in terms of gender-related differences in behavioral components and brain mechanisms engaged in visual social perception.

  19. Path perception during rotation: influence of instructions, depth range, and dot density

    NASA Technical Reports Server (NTRS)

    Li, Li; Warren, William H Jr

    2004-01-01

    How do observers perceive their direction of self-motion when traveling on a straight path while their eyes are rotating? Our previous findings suggest that information from retinal flow and extra-retinal information about eye movements are each sufficient to solve this problem for both perception and active control of self-motion [Vision Res. 40 (2000) 3873; Psych. Sci. 13 (2002) 485]. In this paper, using displays depicting translation with simulated eye rotation, we investigated how task variables such as instructions, depth range, and dot density influenced the visual system's reliance on retinal vs. extra-retinal information for path perception during rotation. We found that path errors were small when observers expected to travel on a straight path or with neutral instructions, but errors increased markedly when observers expected to travel on a curved path. Increasing depth range or dot density did not improve path judgments. We conclude that the expectation of the shape of an upcoming path can influence the interpretation of the ambiguous retinal flow. A large depth range and dense motion parallax are not essential for accurate path perception during rotation, but reference objects and a large field of view appear to improve path judgments.

  20. Facilitating Effects of Emotion on the Perception of Biological Motion: Evidence for a Happiness Superiority Effect.

    PubMed

    Lee, Hannah; Kim, Jejoong

    2017-06-01

    It has been reported that visual perception can be influenced not only by the physical features of a stimulus but also by the emotional valence of the stimulus, even without explicit emotion recognition. Some previous studies reported an anger superiority effect while others found a happiness superiority effect during visual perception. It thus remains unclear as to which emotion is more influential. In the present study, we conducted two experiments using biological motion (BM) stimuli to examine whether emotional valence of the stimuli would affect BM perception; and if so, whether a specific type of emotion is associated with a superiority effect. Point-light walkers with three emotion types (anger, happiness, and neutral) were used, and the threshold to detect BM within noise was measured in Experiment 1. Participants showed higher performance in detecting happy walkers compared with the angry and neutral walkers. Follow-up motion velocity analysis revealed that physical difference among the stimuli was not the main factor causing the effect. The results of the emotion recognition task in Experiment 2 also showed a happiness superiority effect, as in Experiment 1. These results show that emotional valence (happiness) of the stimuli can facilitate the processing of BM.

  1. The Neural Correlates of Shoulder Apprehension: A Functional MRI Study

    PubMed Central

    Shitara, Hitoshi; Shimoyama, Daisuke; Sasaki, Tsuyoshi; Hamano, Noritaka; Ichinose, Tsuyoshi; Yamamoto, Atsushi; Kobayashi, Tsutomu; Osawa, Toshihisa; Iizuka, Haku; Hanakawa, Takashi; Tsushima, Yoshito; Takagishi, Kenji

    2015-01-01

    Although shoulder apprehension is an established clinical finding and is important for the prevention of shoulder dislocation, how this subjective perception is evoked remains unclear. We elucidated the functional neuroplasticity associated with apprehension in patients with recurrent anterior shoulder instability (RSI) using functional magnetic resonance imaging (fMRI). Twelve healthy volunteers and 14 patients with right-sided RSI performed a motor imagery task and a passive shoulder motion task. Brain activity was compared between healthy participants and those with RSI and was correlated with the apprehension intensity reported by participants after each task. Compared to healthy volunteers, participants with RSI exhibited decreased brain activity in the motor network, but increased activity in the hippocampus and amygdala. During the passive motion task, participants with RSI exhibited decreased activity in the left premotor and primary motor/somatosensory areas. Furthermore, brain activity was correlated with apprehension intensity in the left amygdala and left thalamus during the motor imagery task (memory-induced), while a correlation between apprehension intensity and brain activity was found in the left prefrontal cortex during the passive motion task (instability-induced). Our findings provide insight into the pathophysiology of RSI by identifying its associated neural alterations. We elucidated that shoulder apprehension was induced by two different factors, namely instability and memory. PMID:26351854

  2. M.I.T./Canadian vestibular experiments on the Spacelab-1 mission: 6. Vestibular reactions to lateral acceleration following ten days of weightlessness

    NASA Technical Reports Server (NTRS)

    Arrott, A. P.; Young, L. R.

    1986-01-01

    Tests of otolith function were performed pre-flight and post-flight on the science crew of the first Spacelab Mission with a rail-mounted linear acceleration sled. Four tests were performed using horizontal lateral (y-axis) acceleration: perception of linear motion, a closed-loop nulling task, dynamic ocular torsion, and lateral eye deviations. The motion perception test measured the time to detect the onset and direction of near-threshold accelerations. Post-flight measures of threshold and velocity constant obtained during the days immediately following the mission showed no consistent pattern of change among the four crewmen compared to their pre-flight baseline other than an increased variability of response. In the closed-loop nulling task, crewmen controlled the motion of the sled and attempted to null a computer-generated random disturbance motion. When performed in the light, no difference in ability was noted between pre-flight and post-flight. In the dark, however, two of the four crewmen exhibited somewhat enhanced performance post-flight. Dynamic ocular torsion was measured in response to sinusoidal lateral acceleration, which produces a gravitoinertial stimulus equivalent to lateral head tilt without rotational movement of the head. Results available for two crewmen suggest a decreased amplitude of sinusoidal ocular torsion when measured on the day of landing (R+0) and an increasing amplitude when measured during the week following the mission.

  3. Ground-based training for the stimulus rearrangement encountered during spaceflight

    NASA Technical Reports Server (NTRS)

    Reschke, M. F.; Parker, D. E.; Harm, D. L.; Michaud, L.

    1988-01-01

    Approximately 65-70% of the crew members now experience motion sickness of some degree during the first 72 h of orbital flight on the Space Shuttle. Lack of congruence among signals from spatial orientation systems leads to sensory conflict, which appears to be the basic cause of space motion sickness. A project to develop training devices and procedures to preadapt astronauts to the stimulus rearrangements of microgravity is currently being pursued. The preflight adaptation trainers (PATs) are intended to: demonstrate sensory phenomena likely to be experienced in flight, allow astronauts to train preflight in an altered sensory environment, alter sensory-motor reflexes, and alleviate or shorten the duration of space motion sickness. Four part-task PATs are anticipated. The trainers are designed to evoke two adaptation processes, sensory compensation and sensory reinterpretation, which are necessary to maintain spatial orientation in a weightless environment. Recent investigations using one of the trainers indicate that self-motion perception of linear translation is enhanced when body tilt is combined with visual surround translation, and that a 270 degrees phase angle relationship between tilt and surround motion produces maximum translation perception.

  4. Peripheral Vision of Youths with Low Vision: Motion Perception, Crowding, and Visual Search

    PubMed Central

    Tadin, Duje; Nyquist, Jeffrey B.; Lusk, Kelly E.; Corn, Anne L.; Lappin, Joseph S.

    2012-01-01

    Purpose. Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. Methods. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10–17) and low vision (n = 24, ages 9–18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. Results. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Conclusions. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function. PMID:22836766

  5. Peripheral vision of youths with low vision: motion perception, crowding, and visual search.

    PubMed

    Tadin, Duje; Nyquist, Jeffrey B; Lusk, Kelly E; Corn, Anne L; Lappin, Joseph S

    2012-08-24

    Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10-17) and low vision (n = 24, ages 9-18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function.

  6. The development of global motion discrimination in school aged children

    PubMed Central

    Bogfjellmo, Lotte-Guri; Bex, Peter J.; Falkenberg, Helle K.

    2014-01-01

    Global motion perception matures during childhood and involves the detection of local directional signals that are integrated across space. We examine the maturation of local directional selectivity and global motion integration with an equivalent noise paradigm applied to direction discrimination. One hundred and three observers (6–17 years) identified the global direction of motion in a 2AFC task. The 8° central stimuli consisted of 100 dots of 10% Michelson contrast moving 2.8°/s or 9.8°/s. Local directional selectivity and global sampling efficiency were estimated from direction discrimination thresholds as a function of external directional noise, speed, and age. Direction discrimination thresholds improved gradually until the age of 14 years (linear regression, p < 0.05) for both speeds. This improvement was associated with a gradual increase in sampling efficiency (linear regression, p < 0.05), with no significant change in internal noise. Direction sensitivity was lower for dots moving at 2.8°/s than at 9.8°/s for all ages (paired t test, p < 0.05) and is mainly due to lower sampling efficiency. Global motion perception improves gradually during development and matures by age 14. There was no change in internal noise after the age of 6, suggesting that local direction selectivity is mature by that age. The improvement in global motion perception is underpinned by a steady increase in the efficiency with which direction signals are pooled, suggesting that global motion pooling processes mature for longer and later than local motion processing. PMID:24569985
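
    The equivalent noise analysis described above rests on the standard linear-amplifier relation between discrimination threshold, internal noise, external noise, and sampling efficiency. The sketch below states that relation with invented parameter values (they are not the study's fits); the study attributes the developmental improvement to a rise in sampling efficiency with internal noise unchanged.

      # Equivalent noise model: threshold = sqrt((sigma_int^2 + sigma_ext^2) / n_eff).
      import numpy as np

      def threshold(sigma_ext, sigma_int, n_eff):
          return np.sqrt((sigma_int**2 + sigma_ext**2) / n_eff)

      sigma_ext = np.array([0.0, 2.0, 8.0, 32.0])   # external direction noise (deg)
      print("lower efficiency  (n_eff = 2):", threshold(sigma_ext, sigma_int=4.0, n_eff=2.0).round(2))
      print("higher efficiency (n_eff = 8):", threshold(sigma_ext, sigma_int=4.0, n_eff=8.0).round(2))
      # Raising efficiency lowers thresholds at every external noise level, as the
      # observed improvement with age would require under unchanged internal noise.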

  7. Altered transfer of visual motion information to parietal association cortex in untreated first-episode psychosis: Implications for pursuit eye tracking

    PubMed Central

    Lencer, Rebekka; Keedy, Sarah K.; Reilly, James L.; McDonough, Bruce E.; Harris, Margret S. H.; Sprenger, Andreas; Sweeney, John A.

    2011-01-01

    Visual motion processing and its use for pursuit eye movement control represent a valuable model for studying the use of sensory input for action planning. In psychotic disorders, alterations of visual motion perception have been suggested to cause pursuit eye tracking deficits. We evaluated this system in functional neuroimaging studies of untreated first-episode schizophrenia (N=24), psychotic bipolar disorder patients (N=13), and healthy controls (N=20). During a passive visual motion processing task, both patient groups showed reduced activation in the posterior parietal projection fields of motion-sensitive extrastriate area V5, but not in V5 itself. This suggests reduced bottom-up transfer of visual motion information from extrastriate cortex to perceptual systems in parietal association cortex. During active pursuit, activation was enhanced in anterior intraparietal sulcus and insula in both patient groups, and in dorsolateral prefrontal cortex and dorsomedial thalamus in schizophrenia patients. This may result from increased demands on sensorimotor systems for pursuit control due to the limited availability of perceptual motion information about target speed and tracking error. Visual motion information transfer deficits to higher-level association cortex may contribute to well-established pursuit tracking abnormalities, and perhaps to a wider array of alterations in perception and action planning in psychotic disorders. PMID:21873035

  8. Biological Motion Task Performance Predicts Superior Temporal Sulcus Activity

    ERIC Educational Resources Information Center

    Herrington, John D.; Nymberg, Charlotte; Schultz, Robert T.

    2011-01-01

    Numerous studies implicate superior temporal sulcus (STS) in the perception of human movement. More recent theories hold that STS is also involved in the "understanding" of human movement. However, almost no studies to date have associated STS function with observable variability in action understanding. The present study directly associated STS…

  9. Both physical exercise and progressive muscle relaxation reduce the facing-the-viewer bias in biological motion perception.

    PubMed

    Heenan, Adam; Troje, Nikolaus F

    2014-01-01

    Biological motion stimuli, such as orthographically projected stick figure walkers, are ambiguous about their orientation in depth. The projection of a stick figure walker oriented towards the viewer, therefore, is the same as its projection when oriented away. Even though such figures are depth-ambiguous, however, observers tend to interpret them as facing towards them more often than facing away. Some have speculated that this facing-the-viewer bias may exist for sociobiological reasons: Mistaking another human as retreating when they are actually approaching could have more severe consequences than the opposite error. Implied in this hypothesis is that the facing-towards percept of biological motion stimuli is potentially more threatening. Measures of anxiety and the facing-the-viewer bias should therefore be related, as researchers have consistently found that anxious individuals display an attentional bias towards more threatening stimuli. The goal of this study was to assess whether physical exercise (Experiment 1) or an anxiety induction/reduction task (Experiment 2) would significantly affect facing-the-viewer biases. We hypothesized that both physical exercise and progressive muscle relaxation would decrease facing-the-viewer biases for full stick figure walkers, but not for bottom- or top-half-only human stimuli, as these carry less sociobiological relevance. On the other hand, we expected that the anxiety induction task (Experiment 2) would increase facing-the-viewer biases for full stick figure walkers only. In both experiments, participants completed anxiety questionnaires, exercised on a treadmill (Experiment 1) or performed an anxiety induction/reduction task (Experiment 2), and then immediately completed a perceptual task that allowed us to assess their facing-the-viewer bias. As hypothesized, we found that physical exercise and progressive muscle relaxation reduced facing-the-viewer biases for full stick figure walkers only. Our results provide further support that the facing-the-viewer bias for biological motion stimuli is related to the sociobiological relevance of such stimuli.

  10. Both Physical Exercise and Progressive Muscle Relaxation Reduce the Facing-the-Viewer Bias in Biological Motion Perception

    PubMed Central

    Heenan, Adam; Troje, Nikolaus F.

    2014-01-01

    Biological motion stimuli, such as orthographically projected stick figure walkers, are ambiguous about their orientation in depth. The projection of a stick figure walker oriented towards the viewer, therefore, is the same as its projection when oriented away. Even though such figures are depth-ambiguous, however, observers tend to interpret them as facing towards them more often than facing away. Some have speculated that this facing-the-viewer bias may exist for sociobiological reasons: Mistaking another human as retreating when they are actually approaching could have more severe consequences than the opposite error. Implied in this hypothesis is that the facing-towards percept of biological motion stimuli is potentially more threatening. Measures of anxiety and the facing-the-viewer bias should therefore be related, as researchers have consistently found that anxious individuals display an attentional bias towards more threatening stimuli. The goal of this study was to assess whether physical exercise (Experiment 1) or an anxiety induction/reduction task (Experiment 2) would significantly affect facing-the-viewer biases. We hypothesized that both physical exercise and progressive muscle relaxation would decrease facing-the-viewer biases for full stick figure walkers, but not for bottom- or top-half-only human stimuli, as these carry less sociobiological relevance. On the other hand, we expected that the anxiety induction task (Experiment 2) would increase facing-the-viewer biases for full stick figure walkers only. In both experiments, participants completed anxiety questionnaires, exercised on a treadmill (Experiment 1) or performed an anxiety induction/reduction task (Experiment 2), and then immediately completed a perceptual task that allowed us to assess their facing-the-viewer bias. As hypothesized, we found that physical exercise and progressive muscle relaxation reduced facing-the-viewer biases for full stick figure walkers only. Our results provide further support that the facing-the-viewer bias for biological motion stimuli is related to the sociobiological relevance of such stimuli. PMID:24987956

  11. Enhanced and diminished visuo-spatial information processing in autism depends on stimulus complexity.

    PubMed

    Bertone, Armando; Mottron, Laurent; Jelenic, Patricia; Faubert, Jocelyn

    2005-10-01

    Visuo-perceptual processing in autism is characterized by intact or enhanced performance on static spatial tasks and inferior performance on dynamic tasks, suggesting a deficit of dorsal visual stream processing in autism. However, previous findings by Bertone et al. indicate that neuro-integrative mechanisms used to detect complex motion, rather than motion perception per se, may be impaired in autism. We present here the first demonstration of concurrent enhanced and decreased performance in autism on the same visuo-spatial static task, wherein the only factor dichotomizing performance was the neural complexity required to discriminate grating orientation. The ability of persons with autism was found to be superior for identifying the orientation of simple, luminance-defined (or first-order) gratings but inferior for complex, texture-defined (or second-order) gratings. Using a flicker contrast sensitivity task, we demonstrated that this finding is probably not due to abnormal information processing at a sub-cortical level (magnocellular and parvocellular functioning). Together, these findings are interpreted as a clear indication of altered low-level perceptual information processing in autism, and confirm that the deficits and assets observed in autistic visual perception are contingent on the complexity of the neural network required to process a given type of visual stimulus. We suggest that atypical neural connectivity, resulting in enhanced lateral inhibition, may account for both enhanced and decreased low-level information processing in autism.

  12. Biological motion perception links diverse facets of theory of mind during middle childhood.

    PubMed

    Rice, Katherine; Anderson, Laura C; Velnoskey, Kayla; Thompson, James C; Redcay, Elizabeth

    2016-06-01

    Two cornerstones of social development--social perception and theory of mind--undergo brain and behavioral changes during middle childhood, but the link between these developing domains is unclear. One theoretical perspective argues that these skills represent domain-specific areas of social development, whereas other perspectives suggest that both skills may reflect a more integrated social system. Given recent evidence from adults that these superficially different domains may be related, the current study examined the developmental relation between these social processes in 52 children aged 7 to 12 years. Controlling for age and IQ, social perception (perception of biological motion in noise) was significantly correlated with two measures of theory of mind: one in which children made mental state inferences based on photographs of the eye region of the face and another in which children made mental state inferences based on stories. Social perception, however, was not correlated with children's ability to make physical inferences from stories about people. Furthermore, the mental state inference tasks were not correlated with each other, suggesting a role for social perception in linking various facets of theory of mind. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Reprint of "Biological motion perception links diverse facets of theory of mind during middle childhood".

    PubMed

    Rice, Katherine; Anderson, Laura C; Velnoskey, Kayla; Thompson, James C; Redcay, Elizabeth

    2016-09-01

    Two cornerstones of social development--social perception and theory of mind--undergo brain and behavioral changes during middle childhood, but the link between these developing domains is unclear. One theoretical perspective argues that these skills represent domain-specific areas of social development, whereas other perspectives suggest that both skills may reflect a more integrated social system. Given recent evidence from adults that these superficially different domains may be related, the current study examined the developmental relation between these social processes in 52 children aged 7 to 12 years. Controlling for age and IQ, social perception (perception of biological motion in noise) was significantly correlated with two measures of theory of mind: one in which children made mental state inferences based on photographs of the eye region of the face and another in which children made mental state inferences based on stories. Social perception, however, was not correlated with children's ability to make physical inferences from stories about people. Furthermore, the mental state inference tasks were not correlated with each other, suggesting a role for social perception in linking various facets of theory of mind. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Stochastic correlative firing for figure-ground segregation.

    PubMed

    Chen, Zhe

    2005-03-01

    Segregation of sensory inputs into separate objects is a central aspect of perception and arises in all sensory modalities. The figure-ground segregation problem requires identifying an object of interest in a complex scene, in many cases given binaural auditory or binocular visual observations. The computations required for visual and auditory figure-ground segregation share many common features and can be cast within a unified framework. Sensory perception can be viewed as a problem of optimizing information transmission. Here we suggest a stochastic correlative firing mechanism and an associative learning rule for figure-ground segregation in several classic sensory perception tasks, including the cocktail party problem in binaural hearing, binocular fusion of stereo images, and Gestalt grouping in motion perception.

  15. Depth perception from moving cast shadow in macaque monkey.

    PubMed

    Mizutani, Saneyuki; Usui, Nobuo; Yokota, Takanori; Mizusawa, Hidehiro; Taira, Masato; Katsuyama, Narumi

    2015-07-15

    In the present study, we investigate whether the macaque monkey can perceive motion in depth using a moving cast shadow. To accomplish this, we conducted two experiments. In the first experiment, an adult Japanese monkey was trained in a motion discrimination task in depth by binocular disparity. A square was presented on the display so that it appeared with a binocular disparity of 0.12 degrees (initial position), and moved toward (approaching) or away from (receding) the monkey for 1s. The monkey was trained to discriminate the approaching and receding motion of the square by GO/delayed GO-type responses. The monkey showed a significantly high accuracy rate in the task, and the performance was maintained when the position, color, and shape of the moving object were changed. In the next experiment, the change in the disparity was gradually decreased in the motion discrimination task. The results showed that the performance of the monkey declined as the distance of the approaching and receding motion of the square decreased from the initial position. However, when a moving cast shadow was added to the stimulus, the monkey responded to the motion in depth induced by the cast shadow in the same way as by binocular disparity; the reward was delivered randomly or given in all trials to prevent the learning of the 2D motion of the shadow in the frontal plane. These results suggest that the macaque monkey can perceive motion in depth using a moving cast shadow as well as using binocular disparity. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Grasp posture alters visual processing biases near the hands

    PubMed Central

    Thomas, Laura E.

    2015-01-01

    Observers experience biases in visual processing for objects within easy reach of their hands that may assist them in evaluating items that are candidates for action. I investigated the hypothesis that hand postures affording different types of actions differentially bias vision. Across three experiments, participants performed global motion detection and global form perception tasks while their hands were positioned a) near the display in a posture affording a power grasp, b) near the display in a posture affording a precision grasp, or c) in their laps. Although the power grasp posture facilitated performance on the motion task, the precision grasp posture instead facilitated performance on the form task. These results suggest that the visual system weights processing based on an observer’s current affordances for specific actions: fast and forceful power grasps enhance temporal sensitivity, while detail-oriented precision grasps enhance spatial sensitivity. PMID:25862545

  17. The impact of the perception of rhythmic music on self-paced oscillatory movements

    PubMed Central

    Peckel, Mathieu; Pozzo, Thierry; Bigand, Emmanuel

    2014-01-01

    Inspired by theories of perception-action coupling and embodied music cognition, we investigated how rhythmic music perception impacts self-paced oscillatory movements. In a pilot study, we examined the kinematic parameters of self-paced oscillatory movements, walking and finger tapping using optical motion capture. In accordance with biomechanical constraints accounts of motion, we found that movements followed a hierarchical organization depending on the proximal/distal characteristic of the limb used. Based on these findings, we were interested in knowing how and when the perception of rhythmic music could resonate with the motor system in the context of these constrained oscillatory movements. In order to test this, we conducted an experiment where participants performed four different effector-specific movements (lower leg, whole arm and forearm oscillation and finger tapping) while rhythmic music was playing in the background. Musical stimuli consisted of computer-generated MIDI musical pieces with a 4/4 metrical structure. The musical tempo of each song increased from 60 BPM to 120 BPM by 6 BPM increments. A specific tempo was maintained for 20 s before a 2 s transition to the higher tempo. The task of the participant was to maintain a comfortable pace for the four movements (self-paced) while not paying attention to the music. No instruction on whether to synchronize with the music was given. Results showed that participants were distinctively influenced by the background music depending on the movement used, with the tapping task being consistently the most influenced. Furthermore, eight strategies that participants used to cope with the task were identified. Despite not being instructed to do so, participants also occasionally synchronized with the music. Results are discussed in terms of the link between perception and action (i.e., motor/perceptual resonance). In general, our results give support to the notion that rhythmic music is processed in a motoric fashion. PMID:25278924

  18. Efference Copy Failure during Smooth Pursuit Eye Movements in Schizophrenia

    PubMed Central

    Dias, Elisa C.; Sanchez, Jamie L.; Schütz, Alexander C.; Javitt, Daniel C.

    2013-01-01

    Abnormal smooth pursuit eye movements in patients with schizophrenia are often considered a consequence of impaired motion perception. Here we used a novel motion prediction task to assess the effects of abnormal pursuit on perception in human patients. Schizophrenia patients (n = 15) and healthy controls (n = 16) judged whether a briefly presented moving target (“ball”) would hit/miss a stationary vertical line segment (“goal”). To relate prediction performance and pursuit directly, we manipulated eye movements: in half of the trials, observers smoothly tracked the ball; in the other half, they fixated on the goal. Strict quality criteria ensured that pursuit was initiated and that fixation was maintained. Controls were significantly better in trajectory prediction during pursuit than during fixation, their performance increased with presentation duration, and their pursuit gain and perceptual judgments were correlated. Such perceptual benefits during pursuit may be due to the use of extraretinal motion information estimated from an efference copy signal. With an overall lower performance in pursuit and perception, patients showed no such pursuit advantage and no correlation between pursuit gain and perception. Although patients' pursuit showed normal improvement with longer duration, their prediction performance failed to benefit from duration increases. This dissociation indicates relatively intact early visual motion processing, but a failure to use efference copy information. Impaired efference function in the sensory system may represent a general deficit in schizophrenia and thus contribute to symptoms and functional outcome impairments associated with the disorder. PMID:23864667

  19. Efference copy failure during smooth pursuit eye movements in schizophrenia.

    PubMed

    Spering, Miriam; Dias, Elisa C; Sanchez, Jamie L; Schütz, Alexander C; Javitt, Daniel C

    2013-07-17

    Abnormal smooth pursuit eye movements in patients with schizophrenia are often considered a consequence of impaired motion perception. Here we used a novel motion prediction task to assess the effects of abnormal pursuit on perception in human patients. Schizophrenia patients (n = 15) and healthy controls (n = 16) judged whether a briefly presented moving target ("ball") would hit/miss a stationary vertical line segment ("goal"). To relate prediction performance and pursuit directly, we manipulated eye movements: in half of the trials, observers smoothly tracked the ball; in the other half, they fixated on the goal. Strict quality criteria ensured that pursuit was initiated and that fixation was maintained. Controls were significantly better in trajectory prediction during pursuit than during fixation, their performance increased with presentation duration, and their pursuit gain and perceptual judgments were correlated. Such perceptual benefits during pursuit may be due to the use of extraretinal motion information estimated from an efference copy signal. With an overall lower performance in pursuit and perception, patients showed no such pursuit advantage and no correlation between pursuit gain and perception. Although patients' pursuit showed normal improvement with longer duration, their prediction performance failed to benefit from duration increases. This dissociation indicates relatively intact early visual motion processing, but a failure to use efference copy information. Impaired efference function in the sensory system may represent a general deficit in schizophrenia and thus contribute to symptoms and functional outcome impairments associated with the disorder.

  20. Perception of Stand-on-ability: Do Geographical Slants Feel Steeper Than They Look?

    PubMed

    Hajnal, Alen; Wagman, Jeffrey B; Doyon, Jonathan K; Clark, Joseph D

    2016-07-01

    Past research has shown that haptically perceived surface slant by foot is matched with visually perceived slant by a factor of 0.81. Slopes perceived visually appear shallower than when stood on without looking. We sought to identify the sources of this discrepancy by asking participants to judge whether they would be able to stand on an inclined ramp. In the first experiment, visual perception was compared to pedal perception in which participants took half a step with one foot onto an occluded ramp. Visual perception closely matched the actual maximal slope angle that one could stand on, whereas pedal perception underestimated it. Participants may have been less stable in the pedal condition while taking half a step onto the ramp. We controlled for this by having participants hold onto a sturdy tripod in the pedal condition (Experiment 2). This did not eliminate the difference between visual and haptic perception, but repeating the task while sitting on a chair did (Experiment 3). Beyond balance requirements, pedal perception may also be constrained by the limited range of motion at the ankle and knee joints while standing. Indeed, when we restricted range of motion with an ankle brace, pedal perception underestimated the affordance (Experiment 4). Implications for ecological theory are discussed in terms of functional equivalence and the role of exploration in perception. © The Author(s) 2016.

  1. Clonal selection versus clonal cooperation: the integrated perception of immune objects

    PubMed Central

    Nataf, Serge

    2016-01-01

    Analogies between the immune and nervous systems were first envisioned by the immunologist Niels Jerne who introduced the concepts of antigen "recognition" and immune "memory". However, since then, it appears that only the cognitive immunology paradigm proposed by Irun Cohen, attempted to further theorize the immune system functions through the prism of neurosciences. The present paper is aimed at revisiting this analogy-based reasoning. In particular, a parallel is drawn between the brain pathways of visual perception and the processes allowing the global perception of an "immune object". Thus, in the visual system, distinct features of a visual object (shape, color, motion) are perceived separately by distinct neuronal populations during a primary perception task. The output signals generated during this first step instruct then an integrated perception task performed by other neuronal networks. Such a higher order perception step is by essence a cooperative task that is mandatory for the global perception of visual objects. Based on a re-interpretation of recent experimental data, it is suggested that similar general principles drive the integrated perception of immune objects in secondary lymphoid organs (SLOs). In this scheme, the four main categories of signals characterizing an immune object (antigenic, contextual, temporal and localization signals) are first perceived separately by distinct networks of immunocompetent cells.  Then, in a multitude of SLO niches, the output signals generated during this primary perception step are integrated by TH-cells at the single cell level. This process eventually generates a multitude of T-cell and B-cell clones that perform, at the scale of SLOs, an integrated perception of immune objects. Overall, this new framework proposes that integrated immune perception and, consequently, integrated immune responses, rely essentially on clonal cooperation rather than clonal selection. PMID:27830060

  2. Perception of linear acceleration in weightlessness

    NASA Technical Reports Server (NTRS)

    Arrott, Anthony P.; Young, Laurence R.; Merfeld, Daniel M.

    1991-01-01

    Tests of the perception and use of linear acceleration sensory information were performed on the science crews of the Spacelab 1 (SL-1) and D-1 missions using linear 'sleds' in-flight (D-1) and pre-post flight. The time delay between the acceleration step stimulus and the subjective response was consistently reduced during weightlessness, but was neither statistically significant nor of functional importance. Increased variability of responses when going from one environment to the other was apparent from measurements on the first day of the mission and in the first days post-flight. Subjective reports of perceived motion during sinusoidal oscillation in weightlessness were qualitatively similar to reports on earth. In a closed-loop motion nulling task, enhanced performance was observed post-flight in all crewmembers tested in the Y or Z axes.

  3. Perception of linear acceleration in weightlessness

    NASA Technical Reports Server (NTRS)

    Arrott, A. P.; Young, L. R.; Merfeld, D. M.

    1990-01-01

    Tests of the perception and use of linear acceleration sensory information were performed on the science crews of the Spacelab 1 (SL-1) and D-1 missions using linear "sleds" in-flight (D-1) and pre-post flight. The time delay between the acceleration step stimulus and the subjective response was consistently reduced during weightlessness, but was neither statistically significant nor of functional importance. Increased variability of responses when going from one environment to the other was apparent from measurements on the first day of the mission and in the first days post-flight. Subjective reports of perceived motion during sinusoidal oscillation in weightlessness were qualitatively similar to reports on earth. In a closed-loop motion nulling task, enhanced performance was observed post-flight in all crewmembers tested in the Y or Z axes.

  4. Motion-based nearest vector metric for reference frame selection in the perception of motion.

    PubMed

    Agaoglu, Mehmet N; Clarke, Aaron M; Herzog, Michael H; Ögmen, Haluk

    2016-05-01

    We investigated how the visual system selects a reference frame for the perception of motion. Two concentric arcs underwent circular motion around the center of the display, where observers fixated. The outer (target) arc's angular velocity profile was modulated by a sine wave in midflight, whereas the inner (reference) arc moved at a constant angular speed. The task was to report whether the target reversed its direction of motion at any point during its motion. We investigated the effects of spatial and figural factors by systematically varying the radial and angular distances between the arcs, and their relative sizes. We found that the effectiveness of the reference frame decreases with increasing radial- and angular-distance measures. Drastic changes in the relative sizes of the arcs did not influence motion reversal thresholds, suggesting no influence of stimulus form on perceived motion. We also investigated the effect of common velocity by introducing velocity fluctuations to the reference arc as well. We found no effect of whether or not the reference frame had constant motion. We examined several form- and motion-based metrics, which could potentially unify our findings. We found that a motion-based nearest vector metric can fully account for all the data reported here. These findings suggest that the selection of reference frames for motion processing does not result from a winner-take-all process, but instead can be explained by a field whose strength decreases with the distance between the nearest motion vectors regardless of the form of the moving objects.

  5. Implied motion because of instability in Hokusai Manga activates the human motion-sensitive extrastriate visual cortex: an fMRI study of the impact of visual art.

    PubMed

    Osaka, Naoyuki; Matsuyoshi, Daisuke; Ikeda, Takashi; Osaka, Mariko

    2010-03-10

    The recent development of cognitive neuroscience has invited inference about the neurosensory events underlying the experience of visual arts involving implied motion. We report a functional magnetic resonance imaging study demonstrating activation of the human extrastriate motion-sensitive cortex by static images showing implied motion because of instability. We used static line-drawing cartoons of humans by Hokusai Katsushika (called 'Hokusai Manga'), an outstanding Japanese cartoonist as well as a famous Ukiyoe artist. We found that 'Hokusai Manga' images implying motion by depicting human bodies engaged in challenging tonic postures significantly activated the motion-sensitive visual cortex, including MT+, in the human extrastriate cortex, whereas illustrations that do not imply motion, for either humans or objects, did not activate these areas under the same tasks. We conclude that the motion-sensitive extrastriate cortex is likely a critical region for the perception of implied motion arising from instability.

  6. Conveying Movement in Music and Prosody

    PubMed Central

    Hedger, Stephen C.; Nusbaum, Howard C.; Hoeckner, Berthold

    2013-01-01

    We investigated whether acoustic variation of musical properties can analogically convey descriptive information about an object. Specifically, we tested whether information from the temporal structure in music interacts with perception of a visual image to form an analog perceptual representation as a natural part of music perception. In Experiment 1, listeners heard music with an accelerating or decelerating temporal pattern, and then saw a picture of a still or moving object and decided whether it was animate or inanimate – a task unrelated to the patterning of the music. Object classification was faster when musical motion matched visually depicted motion. In Experiment 2, participants heard spoken sentences that were accompanied by accelerating or decelerating music, and then were presented with a picture of a still or moving object. When motion information in the music matched motion information in the picture, participants were similarly faster to respond. Fast and slow temporal patterns without acceleration and deceleration, however, did not make participants faster when they saw a picture depicting congruent motion information (Experiment 3), suggesting that understanding temporal structure information in music may depend on specific metaphors about motion in music. Taken together, these results suggest that visuo-spatial referential information can be analogically conveyed and represented by music and can be integrated with speech or influence the understanding of speech. PMID:24146920

  7. Dynamics of the functional link between area MT LFPs and motion detection

    PubMed Central

    Smith, Jackson E. T.; Beliveau, Vincent; Schoen, Alan; Remz, Jordana; Zhan, Chang'an A.

    2015-01-01

    The evolution of a visually guided perceptual decision results from multiple neural processes, and recent work suggests that signals with different neural origins are reflected in separate frequency bands of the cortical local field potential (LFP). Spike activity and LFPs in the middle temporal area (MT) have a functional link with the perception of motion stimuli (referred to as neural-behavioral correlation). To cast light on the different neural origins that underlie this functional link, we compared the temporal dynamics of the neural-behavioral correlations of MT spikes and LFPs. Wide-band activity was simultaneously recorded from two locations of MT from monkeys performing a threshold, two-stimuli, motion pulse detection task. Shortly after the motion pulse occurred, we found that high-gamma (100–200 Hz) LFPs had a fast, positive correlation with detection performance that was similar to that of the spike response. Beta (10–30 Hz) LFPs were negatively correlated with detection performance, but their dynamics were much slower, peaked late, and did not depend on stimulus configuration or reaction time. A late change in the correlation of all LFPs across the two recording electrodes suggests that a common input arrived at both MT locations prior to the behavioral response. Our results support a framework in which early high-gamma LFPs likely reflected fast, bottom-up, sensory processing that was causally linked to perception of the motion pulse. In comparison, late-arriving beta and high-gamma LFPs likely reflected slower, top-down, sources of neural-behavioral correlation that originated after the perception of the motion pulse. PMID:25948867

  8. On the road to somewhere: Brain potentials reflect language effects on motion event perception.

    PubMed

    Flecken, Monique; Athanasopoulos, Panos; Kuipers, Jan Rouke; Thierry, Guillaume

    2015-08-01

    Recent studies have identified neural correlates of language effects on perception in static domains of experience such as colour and objects. The generalization of such effects to dynamic domains like motion events remains elusive. Here, we focus on grammatical differences between languages relevant for the description of motion events and their impact on visual scene perception. Two groups of native speakers of German or English were presented with animated videos featuring a dot travelling along a trajectory towards a geometrical shape (endpoint). English is a language with grammatical aspect in which attention is drawn to trajectory and endpoint of motion events equally. German, in contrast, is a non-aspect language which highlights endpoints. We tested the comparative perceptual saliency of trajectory and endpoint of motion events by presenting motion event animations (primes) followed by a picture symbolising the event (target): In 75% of trials, the animation was followed by a mismatching picture (both trajectory and endpoint were different); in 10% of trials, only the trajectory depicted in the picture matched the prime; in 10% of trials, only the endpoint matched the prime; and in 5% of trials both trajectory and endpoint were matching, which was the condition requiring a response from the participant. In Experiment 1 we recorded event-related brain potentials elicited by the picture in native speakers of German and native speakers of English. German participants exhibited a larger P3 wave in the endpoint match than the trajectory match condition, whereas English speakers showed no P3 amplitude difference between conditions. In Experiment 2 participants performed a behavioural motion matching task using the same stimuli as those used in Experiment 1. German and English participants did not differ in response times showing that motion event verbalisation cannot readily account for the difference in P3 amplitude found in the first experiment. We argue that, even in a non-verbal context, the grammatical properties of the native language and associated sentence-level patterns of event encoding influence motion event perception, such that attention is automatically drawn towards aspects highlighted by the grammar. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  9. The effect of age upon the perception of 3-D shape from motion.

    PubMed

    Norman, J Farley; Cheeseman, Jacob R; Pyles, Jessica; Baxter, Michael W; Thomason, Kelsey E; Calloway, Autum B

    2013-12-18

    Two experiments evaluated the ability of 50 older, middle-aged, and younger adults to discriminate the 3-dimensional (3-D) shape of curved surfaces defined by optical motion. In Experiment 1, temporal correspondence was disrupted by limiting the lifetimes of the moving surface points. In order to discriminate 3-D surface shape reliably, the younger and middle-aged adults needed a surface point lifetime of approximately 4 views (in the apparent motion sequences). In contrast, the older adults needed a much longer surface point lifetime of approximately 9 views in order to reliably perform the same task. In Experiment 2, the negative effect of age upon 3-D shape discrimination from motion was replicated. In this experiment, however, the participants' abilities to discriminate grating orientation and speed were also assessed. Edden et al. (2009) have recently demonstrated that behavioral grating orientation discrimination correlates with GABA (gamma aminobutyric acid) concentration in human visual cortex. Our results demonstrate that the negative effect of age upon 3-D shape perception from motion is not caused by impairments in the ability to perceive motion per se, but does correlate significantly with grating orientation discrimination. This result suggests that the age-related decline in 3-D shape discrimination from motion is related to decline in GABA concentration in visual cortex. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Exposure to Organic Solvents Used in Dry Cleaning Reduces Low and High Level Visual Function

    PubMed Central

    Jiménez Barbosa, Ingrid Astrid

    2015-01-01

    Purpose To investigate whether exposure to occupational levels of organic solvents in the dry cleaning industry is associated with neurotoxic symptoms and visual deficits in the perception of basic visual features such as luminance contrast and colour, higher-level processing of global motion and form (Experiment 1), and cognitive function as measured in a visual search task (Experiment 2). Methods The Q16 neurotoxic questionnaire, a commonly used measure of neurotoxicity (by the World Health Organization), was administered to assess the neurotoxic status of a group of 33 dry cleaners exposed to occupational levels of organic solvents (OS) and 35 age-matched non dry-cleaners who had never worked in the dry cleaning industry. In Experiment 1, visual function was assessed with computerised psychophysical tests measuring contrast sensitivity, colour/hue discrimination (Munsell Hue 100 test), and global motion and form thresholds. Sensitivity to global motion or form structure was quantified by varying the pattern coherence of global dot motion (GDM) and Glass patterns (oriented dot pairs) respectively (i.e., the percentage of dots/dot pairs that contribute to the perception of global structure). In Experiment 2, a letter visual-search task was used to measure reaction times (as a function of the number of elements: 4, 8, 16, 32, 64 and 100) in both parallel and serial search conditions. Results Dry cleaners exposed to organic solvents had significantly higher scores on the Q16 compared to non dry-cleaners, indicating that dry cleaners experienced more neurotoxic symptoms on average. The contrast sensitivity function for dry cleaners was significantly lower at all spatial frequencies relative to non dry-cleaners, which is consistent with previous studies. Poorer colour discrimination performance was also noted in dry cleaners than in non dry-cleaners, particularly along the blue/yellow axis. In a new finding, we report that global form and motion thresholds for dry cleaners were also significantly higher, almost double those obtained from non dry-cleaners. However, reaction time performance on both parallel and serial visual search was not different between dry cleaners and non dry-cleaners. Conclusions Exposure to occupational levels of organic solvents is associated with neurotoxicity, which is in turn associated with both low-level deficits (such as the perception of contrast and discrimination of colour) and high-level visual deficits such as the perception of global form and motion, but not visual search performance. The latter finding indicates that the deficits in visual function are unlikely to be due to changes in general cognitive performance. PMID:25933026

  11. Manual control of yaw motion with combined visual and vestibular cues

    NASA Technical Reports Server (NTRS)

    Zacharias, G. L.; Young, L. R.

    1977-01-01

    Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation was modelled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A correction to the frequency responses is provided by a separate measurement of manual control performance in an analogous visual pursuit nulling task. The resulting dual-input describing function for motion perception dependence on combined cue presentation supports the complementary model, in which vestibular cues dominate sensation at frequencies above 0.05 Hz. The describing function model is extended by the proposal of a non-linear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.
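
    The parallel-channel model described above can be thought of as a complementary filter: the vestibular cue is high-pass filtered, the visual field-motion cue is low-pass filtered, and the two are summed, with the crossover placed near the 0.05 Hz figure reported above. The sketch below is a generic first-order illustration of that idea, not the paper's actual describing-function model; the function name and numbers are illustrative assumptions.

        # Generic complementary-filter illustration of parallel visual/vestibular channels.
        # Not the paper's model; the crossover frequency is taken from the abstract's 0.05 Hz figure.
        import numpy as np

        def estimate_yaw_velocity(vestibular, visual, dt, f_c=0.05):
            # High-pass the vestibular cue, low-pass the visual cue, and sum them.
            tau   = 1.0 / (2.0 * np.pi * f_c)
            alpha = tau / (tau + dt)            # first-order filter coefficient
            lp_vis, hp_vest, prev_vest = 0.0, 0.0, vestibular[0]
            estimate = []
            for v_vest, v_vis in zip(vestibular, visual):
                lp_vis    = alpha * lp_vis + (1.0 - alpha) * v_vis       # low-pass (visual)
                hp_vest   = alpha * (hp_vest + v_vest - prev_vest)       # high-pass (vestibular)
                prev_vest = v_vest
                estimate.append(hp_vest + lp_vis)                        # complementary sum
            return np.array(estimate)

    With a matched first-order pair like this, the two channels sum to unity across frequencies, which is the sense in which the visual and vestibular pathways are complementary.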

  12. Effect of eye position during human visual-vestibular integration of heading perception.

    PubMed

    Crane, Benjamin T

    2017-09-01

    Visual and inertial stimuli provide heading discrimination cues. Integration of these multisensory stimuli has been demonstrated to depend on their relative reliability. However, the reference frame of visual stimuli is eye centered while inertia is head centered, and it remains unclear how these are reconciled with combined stimuli. Seven human subjects completed a heading discrimination task consisting of a 2-s translation with a peak velocity of 16 cm/s. Eye position was varied between 0° and ±25° left/right. Experiments were done with inertial motion, visual motion, or a combined visual-inertial motion. Visual motion coherence varied between 35% and 100%. Subjects reported whether their perceived heading was left or right of the midline in a forced-choice task. With the inertial stimulus the eye position had an effect such that the point of subjective equality (PSE) shifted 4.6 ± 2.4° in the gaze direction. With the visual stimulus the PSE shift was 10.2 ± 2.2° opposite the gaze direction, consistent with retinotopic coordinates. Thus with eccentric eye positions the perceived inertial and visual headings were offset ~15°. During the visual-inertial conditions the PSE varied consistently with the relative reliability of these stimuli such that at low visual coherence the PSE was similar to that of the inertial stimulus and at high coherence it was closer to the visual stimulus. On average, the inertial stimulus was weighted near Bayesian ideal predictions, but there was significant deviation from ideal in individual subjects. These findings support visual and inertial cue integration occurring in independent coordinate systems. NEW & NOTEWORTHY In multiple cortical areas visual heading is represented in retinotopic coordinates while inertial heading is in body coordinates. It remains unclear whether multisensory integration occurs in a common coordinate system. The experiments address this using a multisensory integration task with eccentric gaze positions making the effect of coordinate systems clear. The results indicate that the coordinate systems remain separate to the perceptual level and that during the multisensory task the perception depends on relative stimulus reliability. Copyright © 2017 the American Physiological Society.

  13. Effects of Spatio-Temporal Aliasing on Out-the-Window Visual Systems

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara T.; Stone, Leland S.; Liston, Dorion B.; Hebert, Tim M.

    2014-01-01

    Designers of out-the-window visual systems face a challenge when attempting to simulate the outside world as viewed from a cockpit. Many methodologies have been developed and adopted to aid in the depiction of particular scene features, or levels of static image detail. However, because aircraft move, it is necessary to also consider the quality of the motion in the simulated visual scene. When motion is introduced in the simulated visual scene, perceptual artifacts can become apparent. A particular artifact related to image motion, spatio-temporal aliasing, will be addressed. The causes of spatio-temporal aliasing will be discussed, and current knowledge regarding the impact of these artifacts on both motion perception and simulator task performance will be reviewed. Methods of reducing the impact of this artifact are also addressed.
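
    As a rough sampling-theory rule of thumb (an assumption offered here for orientation, not a result from the paper), a drifting pattern aliases when its temporal frequency, i.e. image speed times spatial frequency, exceeds half the display update rate. The helper below simply encodes that check with illustrative numbers.

        # Rule-of-thumb aliasing check (illustrative; not from the paper).
        def will_alias(speed_deg_per_s, spatial_freq_cpd, update_rate_hz):
            # Temporal frequency of a drifting pattern is speed x spatial frequency;
            # sampling theory says frequencies above half the update rate alias.
            temporal_freq_hz = speed_deg_per_s * spatial_freq_cpd
            return temporal_freq_hz > update_rate_hz / 2.0

        print(will_alias(speed_deg_per_s=40.0, spatial_freq_cpd=1.0, update_rate_hz=60.0))  # True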

  14. Global models: Robot sensing, control, and sensory-motor skills

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S.

    1989-01-01

    Robotics research has begun to address the modeling and implementation of a wide variety of unstructured tasks. Examples include automated navigation, platform servicing, custom fabrication and repair, deployment and recovery, and science exploration. Such tasks are poorly described at the outset; the workspace layout is partially unfamiliar, and the task control sequence is only qualitatively characterized. The robot must model the workspace, plan detailed physical actions from qualitative goals, and adapt its instantaneous control regimes to unpredicted events. Developing robust representations and computational approaches for these sensing, planning, and control functions is a major challenge. The underlying domain constraints are very general, and seem to offer little guidance for well-bounded approximation of object shape and motion, manipulation postures and trajectories, and the like. This generalized modeling problem is discussed, with an emphasis on the role of sensing. It is also discussed that unstructured tasks often have, in fact, a high degree of underlying physical symmetry, and that such implicit knowledge should be drawn on to model task performance strategies in a methodological fashion. A group-theoretic decomposition of the workspace organization, task goals, and their admissible interactions is proposed. This group-mechanical approach to task representation helps to clarify the functional interplay of perception and control, in essence describing what perception is specifically for, versus how it is generically modeled. One also gains insight into how perception might logically evolve in response to the needs of more complex motor skills. It is discussed why, of the many solutions that are often mathematically admissible to a given sensory-motor coordination problem, one may be preferred over others.

  15. Causal capture effects in chimpanzees (Pan troglodytes).

    PubMed

    Matsuno, Toyomi; Tomonaga, Masaki

    2017-01-01

    Extracting a cause-and-effect structure from the physical world is an important demand for animals living in dynamically changing environments. Human perceptual and cognitive mechanisms are known to be sensitive and tuned to detect and interpret such causal structures. In contrast to rigorous investigations of human causal perception, the phylogenetic roots of this perception are not well understood. In the present study, we aimed to investigate the susceptibility of nonhuman animals to mechanical causality by testing whether chimpanzees perceived an illusion called causal capture (Scholl & Nakayama, 2002). Causal capture is a phenomenon in which a type of bistable visual motion of objects is perceived as causal collision due to a bias from a co-occurring causal event. In our experiments, we assessed the susceptibility of perception of a bistable stream/bounce motion event to a co-occurring causal event in chimpanzees. The results show that, similar to in humans, causal "bounce" percepts were significantly increased in chimpanzees with the addition of a task-irrelevant causal bounce event that was synchronously presented. These outcomes suggest that the perceptual mechanisms behind the visual interpretation of causal structures in the environment are evolutionarily shared between human and nonhuman animals. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Atypical basic movement kinematics in autism spectrum conditions

    PubMed Central

    Blakemore, Sarah-Jayne; Press, Clare

    2013-01-01

    Individuals with autism spectrum conditions have difficulties in understanding and responding appropriately to others. Additionally, they demonstrate impaired perception of biological motion and problems with motor control. Here we investigated whether individuals with autism move with an atypical kinematic profile, which might help to explain perceptual and motor impairments, and in principle may contribute to some of their higher level social problems. We recorded trajectory, velocity, acceleration and jerk while adult participants with autism and a matched control group conducted horizontal sinusoidal arm movements. Additionally, participants with autism took part in a biological motion perception task in which they classified observed movements as ‘natural’ or ‘unnatural’. Results show that individuals with autism moved with atypical kinematics; they did not minimize jerk to the same extent as the matched typical control group, and moved with greater acceleration and velocity. The degree to which kinematics were atypical was correlated with a bias towards perceiving biological motion as ‘unnatural’ and with the severity of autism symptoms as measured by the Autism Diagnostic Observation Schedule. We suggest that fundamental differences in movement kinematics in autism might help to explain their problems with motor control. Additionally, developmental experience of their own atypical kinematic profiles may lead to disrupted perception of others’ actions. PMID:23983031

  17. Visual-vestibular cue integration for heading perception: applications of optimal cue integration theory.

    PubMed

    Fetsch, Christopher R; Deangelis, Gregory C; Angelaki, Dora E

    2010-05-01

    The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
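
    The "optimal cue integration models" referred to above reduce to the textbook maximum-likelihood rule: each cue is weighted by its reliability (inverse variance), and the combined estimate has lower variance than either cue alone. The sketch below states that standard rule, not any specific study's analysis code; the numbers in the example are hypothetical.

        # Textbook maximum-likelihood (reliability-weighted) cue combination.
        def combine_cues(heading_vis, var_vis, heading_vest, var_vest):
            # Weight each single-cue heading estimate by its inverse variance.
            w_vis    = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_vest)
            w_vest   = 1.0 - w_vis
            combined = w_vis * heading_vis + w_vest * heading_vest
            var_comb = (var_vis * var_vest) / (var_vis + var_vest)
            return combined, var_comb

        # A more reliable vestibular cue pulls the combined heading toward itself:
        print(combine_cues(heading_vis=10.0, var_vis=4.0, heading_vest=0.0, var_vest=1.0))  # (2.0, 0.8)

    Testing whether behavioral weights track single-cue reliabilities in this way is the standard benchmark for "optimal" integration in these studies.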

  18. Disorders of motion and depth.

    PubMed

    Nawrot, Mark

    2003-08-01

    Damage to the human homologue of area MT produces a motion perception deficit similar to that found in the monkey with MT lesions. Even temporary disruption of MT processing with transcranial magnetic stimulation can produce a temporary akinetopsia [127]. Motion perception deficits, however, also are found with a variety of subcortical lesions and other neurologic disorders that can best be described as causing a disconnection within the motion processing stream. The precise role of these subcortical structures, such as the cerebellum, remains to be determined. Simple motion perception, moreover, is only a part of MT function. It undoubtedly has an important role in the perception of depth from motion and stereopsis [112]. Psychophysical studies using aftereffects in normal observers suggest a link between stereo mechanisms and the perception of depth from motion [9-11]. There is even a simple correlation between stereo acuity and the perception of depth from motion [128]. Future studies of patients with cortical lesions will take a closer look at depth perception in association with motion perception and should provide a better understanding of how motion and depth are processed together.

  19. Unconscious decisional learning improves unconscious information processing.

    PubMed

    Vlassova, Alexandra; Pearson, Joel

    2018-07-01

    The idea that unconscious input can result in long-term learning or task improvement has been debated for decades, yet there is still little evidence to suggest that learning outside of awareness can produce meaningful changes to decision-making. Here we trained participants using noisy motion stimuli, which require the gradual accumulation of information until a decision can be reached. These stimuli were suppressed from conscious awareness by simultaneously presenting a dynamic dichoptic mask. We show that a short period of training on either a partially or fully suppressed motion stimulus resulted in improved accuracy when tested on a partially suppressed motion stimulus traveling in the orthogonal direction. We found this improvement occurred even when performance on the training task was at chance. Performance gains generalized across motion directions, suggesting that the improvement was the result of changes to the decisional mechanisms rather than perceptual. Interestingly, unconscious learning had a stronger effect on unconscious, compared to conscious decisional accumulation. We further show that a conscious coherent percept is necessary to reap the benefits of unconscious learning. Together, these data suggest that unconscious decisional processing can be improved via training. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Accumulation of Inertial Sensory Information in the Perception of Whole Body Yaw Rotation.

    PubMed

    Nesti, Alessandro; de Winkel, Ksander; Bülthoff, Heinrich H

    2017-01-01

    While moving through the environment, our central nervous system accumulates sensory information over time to provide an estimate of our self-motion, allowing for completing crucial tasks such as maintaining balance. However, little is known on how the duration of the motion stimuli influences our performances in a self-motion discrimination task. Here we study the human ability to discriminate intensities of sinusoidal (0.5 Hz) self-rotations around the vertical axis (yaw) for four different stimulus durations (1, 2, 3 and 5 s) in darkness. In a typical trial, participants experienced two consecutive rotations of equal duration and different peak amplitude, and reported the one perceived as stronger. For each stimulus duration, we determined the smallest detectable change in stimulus intensity (differential threshold) for a reference velocity of 15 deg/s. Results indicate that differential thresholds decrease with stimulus duration and asymptotically converge to a constant, positive value. This suggests that the central nervous system accumulates sensory information on self-motion over time, resulting in improved discrimination performances. Observed trends in differential thresholds are consistent with predictions based on a drift diffusion model with leaky integration of sensory evidence.
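
    For intuition, the "leaky integration of sensory evidence" mentioned above can be sketched as a one-dimensional leaky accumulator driven by noisy momentary evidence; longer stimuli give the accumulator more samples, which is one way discrimination can improve with duration before saturating. This is an illustrative toy under those assumptions, not the authors' model, and all parameter values are made up.

        # Toy leaky accumulator of noisy evidence (illustrative only).
        import numpy as np

        def leaky_accumulation(evidence, leak, noise_sd, dt, rng):
            # Integrate momentary evidence with a leak; return the final state.
            x = 0.0
            for e in evidence:
                x += (-leak * x + e) * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
            return x

        rng, dt = np.random.default_rng(0), 0.01
        for duration in (1.0, 2.0, 3.0, 5.0):                        # stimulus durations (s), as above
            t        = np.arange(0.0, duration, dt)
            evidence = np.abs(15.0 * np.sin(2.0 * np.pi * 0.5 * t))  # rectified 0.5 Hz velocity profile
            finals   = [leaky_accumulation(evidence, leak=1.0, noise_sd=5.0, dt=dt, rng=rng)
                        for _ in range(200)]
            print(f"{duration:.0f} s: mean {np.mean(finals):.2f}, sd {np.std(finals):.2f}")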

  1. Cognitive Sciences

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Session MP4 includes short reports on: (1) Face Recognition in Microgravity: Is Gravity Direction Involved in the Inversion Effect?; (2) Motor Timing under Microgravity; (3) Perceived Self-Motion Assessed by Computer-Generated Animations: Complexity and Reliability; (4) Prolonged Weightlessness Reference Frames and Visual Symmetry Detection; (5) Mental Representation of Gravity During a Locomotor Task; and (6) Haptic Perception in Weightlessness: A Sense of Force or a Sense of Effort?

  2. Perceptual advantage for category-relevant perceptual dimensions: the case of shape and motion.

    PubMed

    Folstein, Jonathan R; Palmeri, Thomas J; Gauthier, Isabel

    2014-01-01

    Category learning facilitates perception along relevant stimulus dimensions, even when tested in a discrimination task that does not require categorization. While this general phenomenon has been demonstrated previously, perceptual facilitation along dimensions has been documented by measuring different specific phenomena in different studies using different kinds of objects. Across several object domains, there is support for acquired distinctiveness, the stretching of a perceptual dimension relevant to learned categories. Studies using faces and studies using simple separable visual dimensions have also found evidence of acquired equivalence, the shrinking of a perceptual dimension irrelevant to learned categories, and categorical perception, the local stretching across the category boundary. These latter two effects are rarely observed with complex non-face objects. Failures to find these effects with complex non-face objects may have been because the dimensions tested previously were perceptually integrated. Here we tested the effects of category learning with non-face objects categorized along dimensions that have been found to be processed by different areas of the brain, shape and motion. While we replicated acquired distinctiveness, we found no evidence for acquired equivalence or categorical perception.

  3. Visual motion integration for perception and pursuit

    NASA Technical Reports Server (NTRS)

    Stone, L. S.; Beutter, B. R.; Lorenceau, J.

    2000-01-01

    To examine the relationship between visual motion processing for perception and pursuit, we measured the pursuit eye-movement and perceptual responses to the same complex-motion stimuli. We show that humans can both perceive and pursue the motion of line-figure objects, even when partial occlusion makes the resulting image motion vastly different from the underlying object motion. Our results show that both perception and pursuit can perform largely accurate motion integration, i.e. the selective combination of local motion signals across the visual field to derive global object motion. Furthermore, because we manipulated perceived motion while keeping image motion identical, the observed parallel changes in perception and pursuit show that the motion signals driving steady-state pursuit and perception are linked. These findings disprove current pursuit models whose control strategy is to minimize retinal image motion, and suggest a new framework for the interplay between visual cortex and cerebellum in visuomotor control.

  4. Integration time for the perception of depth from motion parallax.

    PubMed

    Nawrot, Mark; Stroyan, Keith

    2012-04-15

    The perception of depth from relative motion is believed to be a slow process that "builds-up" over a period of observation. However, in the case of motion parallax, the potential accuracy of the depth estimate suffers as the observer translates during the viewing period. Our recent quantitative model for the perception of depth from motion parallax proposes that relative object depth (d) can be determined from retinal image motion (dθ/dt), pursuit eye movement (dα/dt), and fixation distance (f) by the formula: d/f≈dθ/dα. Given the model's dynamics, it is important to know the integration time required by the visual system to recover dα and dθ, and then estimate d. Knowing the minimum integration time reveals the incumbent error in this process. A depth-phase discrimination task was used to determine the time necessary to perceive depth-sign from motion parallax. Observers remained stationary and viewed a briefly translating random-dot motion parallax stimulus. Stimulus duration varied between trials. Fixation on the translating stimulus was monitored and enforced with an eye-tracker. The study found that relative depth discrimination can be performed with presentations as brief as 16.6 ms, with only two stimulus frames providing both retinal image motion and the stimulus window motion for pursuit (mean range=16.6-33.2 ms). This was found for conditions in which, prior to stimulus presentation, the eye was engaged in ongoing pursuit or the eye was stationary. A large high-contrast masking stimulus disrupted depth-discrimination for stimulus presentations less than 70-75 ms in both pursuit and stationary conditions. This interval might be linked to ocular-following response eye-movement latencies. We conclude that neural mechanisms serving depth from motion parallax generate a depth estimate much more quickly than previously believed. We propose that additional sluggishness might be due to the visual system's attempt to determine the maximum dθ/dα ratio for a selection of points on a complicated stimulus. Copyright © 2012 Elsevier Ltd. All rights reserved.
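
    The ratio d/f ≈ dθ/dα quoted above lends itself to a short worked example; the numbers below are arbitrary illustrations, not values from the study.

        # Worked illustration of the motion/pursuit ratio d/f ≈ dtheta/dalpha.
        # All values are arbitrary examples, not data from the study.
        fixation_distance_f = 1.0     # metres, distance to the fixated point
        retinal_motion_dtheta = 0.5   # deg/s, retinal image motion of a nearby point
        pursuit_dalpha = 5.0          # deg/s, pursuit eye velocity during translation

        relative_depth_d = fixation_distance_f * (retinal_motion_dtheta / pursuit_dalpha)
        print(f"estimated depth relative to fixation: {relative_depth_d:.2f} m")
        # A point with faster retinal motion for the same pursuit rate is estimated to
        # lie proportionally further from the fixation plane; the sign of the ratio
        # gives the depth sign (near versus far).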

  5. Residual perception of biological motion in cortical blindness.

    PubMed

    Ruffieux, Nicolas; Ramon, Meike; Lao, Junpeng; Colombo, Françoise; Stacchi, Lisa; Borruat, François-Xavier; Accolla, Ettore; Annoni, Jean-Marie; Caldara, Roberto

    2016-12-01

    From birth, the human visual system shows a remarkable sensitivity for perceiving biological motion. This visual ability relies on a distributed network of brain regions and can be preserved even after damage of high-level ventral visual areas. However, it remains unknown whether this critical biological skill can withstand the loss of vision following bilateral striate damage. To address this question, we tested the categorization of human and animal biological motion in BC, a rare case of cortical blindness after anoxia-induced bilateral striate damage. The severity of his impairment, encompassing various aspects of vision (i.e., color, shape, face, and object recognition) and causing blind-like behavior, contrasts with a residual ability to process motion. We presented BC with static or dynamic point-light displays (PLDs) of human or animal walkers. These stimuli were presented either individually or in pairs in two-alternative forced-choice (2AFC) tasks. When confronted with individual PLDs, the patient was unable to categorize the stimuli, irrespective of whether they were static or dynamic. In the 2AFC task, BC exhibited appropriate eye movements towards diagnostic information, but performed at chance level with static PLDs, in stark contrast to his ability to efficiently categorize dynamic biological agents. This striking ability to categorize biological motion when provided with top-down information is important for at least two reasons. First, it emphasizes the importance of assessing patients' (visual) abilities across a range of task constraints, which can reveal potential residual abilities that may in turn represent a key feature for patient rehabilitation. Second, our findings reinforce the view that the neural network processing biological motion can efficiently operate despite severely impaired low-level vision, positing our natural predisposition for processing dynamicity in biological agents as a robust feature of human vision. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Ambiguous Tilt and Translation Motion Cues in Astronauts after Space Flight

    NASA Technical Reports Server (NTRS)

    Clement, G.; Harm, D. L.; Rupert, A. H.; Beaton, K. H.; Wood, S. J.

    2008-01-01

    Adaptive changes during space flight in how the brain integrates vestibular cues with visual, proprioceptive, and somatosensory information can lead to impaired movement coordination, vertigo, spatial disorientation, and perceptual illusions following transitions between gravity levels. This joint ESA-NASA pre- and post-flight experiment is designed to examine both the physiological basis and operational implications for disorientation and tilt-translation disturbances in astronauts following short-duration space flights. The first specific aim is to examine the effects of stimulus frequency on adaptive changes in eye movements and motion perception during independent tilt and translation motion profiles. Roll motion is provided by a variable radius centrifuge. Pitch motion is provided by NASA's Tilt-Translation Sled in which the resultant gravitoinertial vector remains aligned with the body longitudinal axis during tilt motion (referred to as the Z-axis gravitoinertial or ZAG paradigm). We hypothesize that the adaptation of otolith-mediated responses to these stimuli will have specific frequency characteristics, being greatest in the mid-frequency range where there is a crossover of tilt and translation. The second specific aim is to employ a closed-loop nulling task in which subjects are tasked to use a joystick to null-out tilt motion disturbances on these two devices. The stimuli consist of random steps or sum-of-sinusoids stimuli, including the ZAG profiles on the Tilt-Translation Sled. We hypothesize that the ability to control tilt orientation will be compromised following space flight, with increased control errors corresponding to changes in self-motion perception. The third specific aim is to evaluate how sensory substitution aids can be used to improve manual control performance. During the closed-loop nulling task on both devices, small tactors placed around the torso vibrate according to the actual body tilt angle relative to gravity. We hypothesize that performance on the closed-loop tilt control task will be improved with this tactile display feedback of tilt orientation. The current plans include testing on eight crewmembers following Space Shuttle missions or short stay onboard the International Space Station. Measurements are obtained pre-flight at L-120 (plus or minus 30), L-90 (plus or minus 30), and L-30 (plus or minus 10) days and post-flight at R+0, R+1, R+2 or 3, R+4 or 5, and R+8 days. Pre- and post-flight testing (from R+1 on) is performed in the Neuroscience Laboratory at the NASA Johnson Space Center on both the Tilt-Translation Device and a variable radius centrifuge. A second variable radius centrifuge, provided by DLR for another joint ESA-NASA project, has been installed at the Baseline Data Collection Facility at Kennedy Space Center to collect data immediately after landing. ZAG was initiated with STS-122/1E and the first post-flight testing will take place after STS-123/1JA landing.

  7. Perceiving Object Shape from Specular Highlight Deformation, Boundary Contour Deformation, and Active Haptic Manipulation.

    PubMed

    Norman, J Farley; Phillips, Flip; Cheeseman, Jacob R; Thomason, Kelsey E; Ronning, Cecilia; Behari, Kriti; Kleinman, Kayla; Calloway, Autum B; Lamirande, Davora

    2016-01-01

    It is well known that motion facilitates the visual perception of solid object shape, particularly when surface texture or other identifiable features (e.g., corners) are present. Conventional models of structure-from-motion require the presence of texture or identifiable object features in order to recover 3-D structure. Is the facilitation in 3-D shape perception similar in magnitude when surface texture is absent? On any given trial in the current experiments, participants were presented with a single randomly-selected solid object (bell pepper or randomly-shaped "glaven") for 12 seconds and were required to indicate which of 12 (for bell peppers) or 8 (for glavens) simultaneously visible objects possessed the same shape. The initial single object's shape was defined either by boundary contours alone (i.e., presented as a silhouette), specular highlights alone, specular highlights combined with boundary contours, or texture. In addition, there was a haptic condition: in this condition, the participants haptically explored with both hands (but could not see) the initial single object for 12 seconds; they then performed the same shape-matching task used in the visual conditions. For both the visual and haptic conditions, motion (rotation in depth or active object manipulation) was present in half of the trials and was not present for the remaining trials. The effect of motion was quantitatively similar for all of the visual and haptic conditions-e.g., the participants' performance in Experiment 1 was 93.5 percent higher in the motion or active haptic manipulation conditions (when compared to the static conditions). The current results demonstrate that deforming specular highlights or boundary contours facilitate 3-D shape perception as much as the motion of objects that possess texture. The current results also indicate that the improvement with motion that occurs for haptics is similar in magnitude to that which occurs for vision.

  8. Perceiving Object Shape from Specular Highlight Deformation, Boundary Contour Deformation, and Active Haptic Manipulation

    PubMed Central

    Cheeseman, Jacob R.; Thomason, Kelsey E.; Ronning, Cecilia; Behari, Kriti; Kleinman, Kayla; Calloway, Autum B.; Lamirande, Davora

    2016-01-01

    It is well known that motion facilitates the visual perception of solid object shape, particularly when surface texture or other identifiable features (e.g., corners) are present. Conventional models of structure-from-motion require the presence of texture or identifiable object features in order to recover 3-D structure. Is the facilitation in 3-D shape perception similar in magnitude when surface texture is absent? On any given trial in the current experiments, participants were presented with a single randomly-selected solid object (bell pepper or randomly-shaped “glaven”) for 12 seconds and were required to indicate which of 12 (for bell peppers) or 8 (for glavens) simultaneously visible objects possessed the same shape. The initial single object’s shape was defined either by boundary contours alone (i.e., presented as a silhouette), specular highlights alone, specular highlights combined with boundary contours, or texture. In addition, there was a haptic condition: in this condition, the participants haptically explored with both hands (but could not see) the initial single object for 12 seconds; they then performed the same shape-matching task used in the visual conditions. For both the visual and haptic conditions, motion (rotation in depth or active object manipulation) was present in half of the trials and was not present for the remaining trials. The effect of motion was quantitatively similar for all of the visual and haptic conditions–e.g., the participants’ performance in Experiment 1 was 93.5 percent higher in the motion or active haptic manipulation conditions (when compared to the static conditions). The current results demonstrate that deforming specular highlights or boundary contours facilitate 3-D shape perception as much as the motion of objects that possess texture. The current results also indicate that the improvement with motion that occurs for haptics is similar in magnitude to that which occurs for vision. PMID:26863531

  9. The functional and structural asymmetries of the superior temporal sulcus.

    PubMed

    Specht, Karsten; Wigglesworth, Philip

    2018-02-01

    The superior temporal sulcus (STS) is an anatomical structure that increasingly interests researchers. This structure appears to receive multisensory input and is involved in several perceptual and cognitive core functions, such as speech perception, audiovisual integration, (biological) motion processing and theory of mind capacities. In addition, the superior temporal sulcus is not only one of the longest sulci of the brain, but it also shows marked functional and structural asymmetries, some of which have only been found in humans. To explore the functional-structural relationships of these asymmetries in more detail, this study combines functional and structural magnetic resonance imaging. Using a speech perception task, an audiovisual integration task, and a theory of mind task, this study again demonstrated an involvement of the STS in these processes, with an expected strong leftward asymmetry for the speech perception task. Furthermore, this study confirmed the earlier described, human-specific asymmetries, namely that the left STS is longer than the right STS and that the right STS is deeper than the left STS. However, this study did not find any relationship between these structural asymmetries and the detected brain activations or their functional asymmetries. This can, on the other hand, give further support to the notion that the structural asymmetry of the STS is not directly related to the functional asymmetry of the speech perception and the language system as a whole, but that it may have other causes and functions. © 2018 The Authors. Scandinavian Journal of Psychology published by Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  10. Influence of Visual Motion, Suggestion, and Illusory Motion on Self-Motion Perception in the Horizontal Plane.

    PubMed

    Rosenblatt, Steven David; Crane, Benjamin Thomas

    2015-01-01

    A moving visual field can induce the feeling of self-motion or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing set visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image, which removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8s viewing interval with the inertial stimulus occurring over the final 1s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p<0.001), and arrows (p = 0.02). For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p>0.1 for both). Thus, although a true moving visual field can induce self-motion, results of this study show that illusory motion does not.

  11. Motion sickness increases functional connectivity between visual motion and nausea-associated brain regions.

    PubMed

    Toschi, Nicola; Kim, Jieun; Sclocco, Roberta; Duggento, Andrea; Barbieri, Riccardo; Kuo, Braden; Napadow, Vitaly

    2017-01-01

    The brain networks supporting nausea are not yet understood. We previously found that while visual stimulation activated primary (V1) and extrastriate visual cortices (MT+/V5, coding for visual motion), increasing nausea was associated with increasing sustained activation in several brain areas, with significant co-activation for anterior insula (aIns) and mid-cingulate (MCC) cortices. Here, we hypothesized that motion sickness also alters functional connectivity between visual motion and previously identified nausea-processing brain regions. Subjects prone to motion sickness and controls completed a motion sickness provocation task during fMRI/ECG acquisition. We studied changes in connectivity between visual processing areas activated by the stimulus (MT+/V5, V1), right aIns and MCC when comparing rest (BASELINE) to peak nausea state (NAUSEA). Compared to BASELINE, NAUSEA reduced connectivity between right and left V1 and increased connectivity between right MT+/V5 and aIns and between left MT+/V5 and MCC. Additionally, the change in MT+/V5 to insula connectivity was significantly associated with a change in sympathovagal balance, assessed by heart rate variability analysis. No state-related connectivity changes were noted for the control group. Increased connectivity between a visual motion processing region and nausea/salience brain regions may reflect increased transfer of visual/vestibular mismatch information to brain regions supporting nausea perception and autonomic processing. We conclude that vection-induced nausea increases connectivity between nausea-processing regions and those activated by the nauseogenic stimulus. This enhanced low-frequency coupling may support continual, slowly evolving nausea perception and shifts toward sympathetic dominance. Disengaging this coupling may be a target for biobehavioral interventions aimed at reducing motion sickness severity. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. The neurophysiology of biological motion perception in schizophrenia

    PubMed Central

    Jahshan, Carol; Wynn, Jonathan K; Mathis, Kristopher I; Green, Michael F

    2015-01-01

    Introduction The ability to recognize human biological motion is a fundamental aspect of social cognition that is impaired in people with schizophrenia. However, little is known about the neural substrates of impaired biological motion perception in schizophrenia. In the current study, we assessed event-related potentials (ERPs) to human and nonhuman movement in schizophrenia. Methods Twenty-four subjects with schizophrenia and 18 healthy controls completed a biological motion task while their electroencephalography (EEG) was simultaneously recorded. Subjects watched clips of point-light animations containing 100%, 85%, or 70% biological motion, and were asked to decide whether the clip resembled human or nonhuman movement. Three ERPs were examined: P1, N1, and the late positive potential (LPP). Results Behaviorally, schizophrenia subjects identified significantly fewer stimuli as human movement compared to healthy controls in the 100% and 85% conditions. At the neural level, P1 was reduced in the schizophrenia group but did not differ among conditions in either group. There were no group differences in N1 but both groups had the largest N1 in the 70% condition. There was a condition × group interaction for the LPP: Healthy controls had a larger LPP to 100% versus 85% and 70% biological motion; there was no difference among conditions in schizophrenia subjects. Conclusions Consistent with previous findings, schizophrenia subjects were impaired in their ability to recognize biological motion. The EEG results showed that biological motion did not influence the earliest stage of visual processing (P1). Although schizophrenia subjects showed the same pattern of N1 results relative to healthy controls, they were impaired at a later stage (LPP), reflecting a dysfunction in the identification of human form in biological versus nonbiological motion stimuli. PMID:25722951

  13. Perceptual Measurement in Schizophrenia: Promising Electrophysiology and Neuroimaging Paradigms From CNTRICS

    PubMed Central

    Butler, Pamela D.; Chen, Yue; Ford, Judith M.; Geyer, Mark A.; Silverstein, Steven M.; Green, Michael F.

    2012-01-01

    The sixth meeting of the Cognitive Neuroscience Treatment Research to Improve Cognition in Schizophrenia (CNTRICS) focused on selecting promising imaging paradigms for each of the cognitive constructs selected in the first CNTRICS meeting. In the domain of perception, the 2 constructs of interest were “gain control” and “visual integration.” CNTRICS received 6 task nominations for imaging paradigms for gain control and 3 task nominations for integration. The breakout group for perception evaluated the degree to which each of these tasks met prespecified criteria. For gain control, the breakout group believed that one task (mismatch negativity) was already mature and was being incorporated into multisite clinical trials. The breakout group recommended that 1 visual task (steady-state visual evoked potentials to magnocellular- vs parvocellular-biased stimuli) and 2 auditory measures (an event-related potential (ERP) measure of corollary discharge and a functional magnetic resonance imaging (fMRI) version of prepulse inhibition of startle) be adapted for use in clinical trials in schizophrenia research. For visual integration, the breakout group recommended that fMRI and ERP versions of a contour integration test and an fMRI version of a coherent motion test be adapted for use in clinical trials. This manuscript describes the ways in which each of these tasks met the criteria used in the breakout group to evaluate and recommend tasks for further development. PMID:21890745

  14. Motion perception and driving: predicting performance through testing and shortening braking reaction times through training.

    PubMed

    Wilkins, Luke; Gray, Rob; Gaska, James; Winterbottom, Marc

    2013-12-30

    A driving simulator was used to examine the relationship between motion perception and driving performance. Although motion perception test scores have been shown to be related to driving safety, it is not clear which combination of tests is the best predictor, or whether motion perception training can improve driving performance. In experiment 1, 60 younger drivers (22.4 ± 2.5 years) completed three motion perception tests (2-dimensional [2D] motion-defined letter [MDL] identification, 3D motion in depth sensitivity [MID], and dynamic visual acuity [DVA]) followed by two driving tests (emergency braking [EB] and hazard perception [HP]). In experiment 2, 20 drivers (21.6 ± 2.1 years) completed 6 weeks of motion perception training (using the MDL, MID, and DVA tests), while 20 control drivers (22.0 ± 2.7 years) completed an online driving safety course. The EB performance was measured before and after training. In experiment 1, MDL (r = 0.34) and MID (r = 0.46) significantly correlated with EB score. The change in DVA score as a function of target speed (i.e., "velocity susceptibility") was correlated most strongly with HP score (r = -0.61). In experiment 2, the motion perception training group had a significant decrease in brake reaction time on the EB test from pre- to posttreatment, while there was no significant change for the control group: t(38) = 2.24, P = 0.03. Tests of 3D motion perception are the best predictor of EB, while DVA velocity susceptibility is the best predictor of hazard perception. Motion perception training appears to result in faster braking responses.

  15. Introduction to and Review of Simulator Sickness Research

    DTIC Science & Technology

    2005-04-01

    ...other sensory systems play a role in the perception of motion. Kinesthetic receptors in the joints, muscles, and tendons signal limb, head, and body... is in general agreement about which categories of individuals are more susceptible than others. Gender: females are reported to be more susceptible... and task variables. Pausch et al. (1992) reviewed several factors that evoke SS, with special emphasis given to simulator design issues. Gender: as...

  16. Objective Fidelity Evaluation in Multisensory Virtual Environments: Auditory Cue Fidelity in Flight Simulation

    PubMed Central

    Meyer, Georg F.; Wong, Li Ting; Timson, Emma; Perfect, Philip; White, Mark D.

    2012-01-01

    We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models to quantify human performance in simulated environments that have emerged from research in multisensory perception provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study: Experiment 1: Detection and categorisation of auditory and kinematic motion cues; Experiment 2: Performance evaluation in a target-tracking task; Experiment 3: Transferrable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in Experiment 1 do not contribute to target-tracking performance in an in-flight refuelling simulation without training (Experiment 2). In Experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues.

  17. Acoustic facilitation of object movement detection during self-motion

    PubMed Central

    Calabro, F. J.; Soto-Faraco, S.; Vaina, L. M.

    2011-01-01

    In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations. PMID:21307050

  18. Adaptation to visual or auditory time intervals modulates the perception of visual apparent motion

    PubMed Central

    Zhang, Huihui; Chen, Lihan; Zhou, Xiaolin

    2012-01-01

    It is debated whether sub-second timing is subserved by a centralized mechanism or by the intrinsic properties of task-related neural activity in specific modalities (Ivry and Schlerf, 2008). By using a temporal adaptation task, we investigated whether adapting to different time intervals conveyed through stimuli in different modalities (i.e., frames of a visual Ternus display, visual blinking discs, or auditory beeps) would affect the subsequent implicit perception of visual timing, i.e., inter-stimulus interval (ISI) between two frames in a Ternus display. The Ternus display can induce two percepts of apparent motion (AM), depending on the ISI between the two frames: “element motion” for short ISIs, in which the endmost disc is seen as moving back and forth while the middle disc at the overlapping or central position remains stationary; “group motion” for longer ISIs, in which both discs appear to move in a manner of lateral displacement as a whole. In Experiment 1, participants adapted to either the typical “element motion” (ISI = 50 ms) or the typical “group motion” (ISI = 200 ms). In Experiments 2 and 3, participants adapted to a time interval of 50 or 200 ms through observing a series of two paired blinking discs at the center of the screen (Experiment 2) or hearing a sequence of two paired beeps (Experiment 3; 1000 Hz). In Experiment 4, participants adapted to sequences of paired beeps with either low pitches (500 Hz) or high pitches (5000 Hz). After adaptation in each trial, participants were presented with a Ternus probe in which the ISI between the two frames was equal to the transitional threshold of the two types of motions, as determined by a pretest. Results showed that adapting to the short time interval in all the situations led to more reports of “group motion” in the subsequent Ternus probes; adapting to the long time interval, however, caused no aftereffect for visual adaptation but significantly more reports of group motion for auditory adaptation. These findings, suggesting amodal representation for sub-second timing across modalities, are interpreted in the framework of the temporal pacemaker model. PMID:23133408

  19. Evidence that primary visual cortex is required for image, orientation, and motion discrimination by rats.

    PubMed

    Petruno, Sarah K; Clark, Robert E; Reinagel, Pamela

    2013-01-01

    The pigmented Long-Evans rat has proven to be an excellent subject for studying visually guided behavior including quantitative visual psychophysics. This observation, together with its experimental accessibility and its close homology to the mouse, has made it an attractive model system in which to dissect the thalamic and cortical circuits underlying visual perception. Given that visually guided behavior in the absence of primary visual cortex has been described in the literature, however, it is an empirical question whether specific visual behaviors will depend on primary visual cortex in the rat. Here we tested the effects of cortical lesions on performance of two-alternative forced-choice visual discriminations by Long-Evans rats. We present data from one highly informative subject that learned several visual tasks and then received a bilateral lesion ablating >90% of primary visual cortex. After the lesion, this subject had a profound and persistent deficit in complex image discrimination, orientation discrimination, and full-field optic flow motion discrimination, compared with both pre-lesion performance and sham-lesion controls. Performance was intact, however, on another visual two-alternative forced-choice task that required approaching a salient visual target. A second highly informative subject learned several visual tasks prior to receiving a lesion ablating >90% of medial extrastriate cortex. This subject showed no impairment on any of the four task categories. Taken together, our data provide evidence that these image, orientation, and motion discrimination tasks require primary visual cortex in the Long-Evans rat, whereas approaching a salient visual target does not.

  20. Perception-based synthetic cueing for night vision device rotorcraft hover operations

    NASA Astrophysics Data System (ADS)

    Bachelder, Edward N.; McRuer, Duane

    2002-08-01

    Helicopter flight using night-vision devices (NVDs) is difficult to perform, as evidenced by the high accident rate associated with NVD flight compared to day operation. The approach proposed in this paper is to augment the NVD image with synthetic cueing, whereby the cues would emulate position and motion and appear to be actually occurring in physical space on which they are overlaid. Synthetic cues allow for selective enhancement of perceptual state gains to match the task requirements. A hover cue set was developed based on an analogue of a physical target used in a flight handling qualities tracking task, a perceptual task analysis for hover, and fundamentals of human spatial perception. The display was implemented on a simulation environment, constructed using a virtual reality device, an ultrasound head-tracker, and a fixed-base helicopter simulator. Seven highly trained helicopter pilots were used as experimental subjects and tasked to maintain hover in the presence of aircraft positional disturbances while viewing a synthesized NVD environment and the experimental hover cues. Significant performance improvements were observed when using synthetic cue augmentation. This paper demonstrates that artificial magnification of perceptual states through synthetic cueing can be an effective method of improving night-vision helicopter hover operations.

  1. Guilt leads to enhanced facing-the-viewer bias

    PubMed Central

    Shen, Mowei; Zhu, Chengfeng; Liao, Huayu; Zhang, Haihang; Zhou, Jifan

    2018-01-01

    As an important moral emotion, guilt plays a critical role in social interaction. It has been found that people tended to exhibit prosocial behavior under circumstances of guilt. However, all extant studies have predominantly focused on the influence of guilt on macro-level behavior. So far, no study has investigated whether guilt affects people’s micro-level perception. The current study closes this gap by examining whether guilt affects one’s inclination to perceive approaching motion. We achieved this aim by probing a facing-the-viewer bias (FTV bias). Specifically, when an ambiguous walking biological motion display is presented to participants via the point-light display technique, participants tend to perceive a walking agent approaching them. We hypothesized that guilt modulated FTV bias. To test this hypothesis, we adopted a two-person situation induction task to induce guilt, whereby participants were induced to feel that because of their poor task performance, their partner did not receive a satisfactory payment. We found that when participants were told that the perceived biological motion was motion-captured from their partner, the FTV bias was significantly increased for guilty participants relative to neutral participants. However, when participants were informed that the perceived biological motion was from a third neutral agent, the FTV bias was not modulated by guilt. These results suggest that guilt influences one’s inclination to perceive approaching motion, but this effect is constrained to the person towards whom guilt is directed. PMID:29649338

  2. Individualistic weight perception from motion on a slope

    PubMed Central

    Zintus-art, K.; Shin, D.; Kambara, H.; Yoshimura, N.; Koike, Y.

    2016-01-01

    Perception of an object’s weight is linked to its form and motion. Studies have shown the relationship between weight perception and motion in horizontal and vertical environments to be universally identical across subjects during passive observation. Here we show a contrasting finding: not all humans share the same motion-weight pairing. A virtual environment where participants control the steepness of a slope was used to investigate the relationship between sliding motion and weight perception. Our findings showed that distinct, albeit subjective, motion-weight relationships in perception could be identified for slope environments. These individualistic perceptions were found when changes in environmental parameters governing motion were introduced, specifically inclination and surface texture. Differences in environmental parameters, combined with individual factors such as experience, affected participants’ weight perception. This phenomenon may offer evidence of the central nervous system’s ability to choose and combine internal models based on information from the sensory system. The results also point toward the possibility of controlling human perception by presenting strong sensory cues to manipulate the mechanisms managing internal models. PMID:27174036

  3. Being Moved by the Self and Others: Influence of Empathy on Self-Motion Perception

    PubMed Central

    Lopez, Christophe; Falconer, Caroline J.; Mast, Fred W.

    2013-01-01

    Background The observation of conspecifics influences our bodily perceptions and actions: Contagious yawning, contagious itching, or empathy for pain, are all examples of mechanisms based on resonance between our own body and others. While there is evidence for the involvement of the mirror neuron system in the processing of motor, auditory and tactile information, it has not yet been associated with the perception of self-motion. Methodology/Principal Findings We investigated whether viewing our own body, the body of another, and an object in motion influences self-motion perception. We found a visual-vestibular congruency effect for self-motion perception when observing self and object motion, and a reduction in this effect when observing someone else's body motion. The congruency effect was correlated with empathy scores, revealing the importance of empathy in mirroring mechanisms. Conclusions/Significance The data show that vestibular perception is modulated by agent-specific mirroring mechanisms. The observation of conspecifics in motion is an essential component of social life, and self-motion perception is crucial for the distinction between the self and the other. Finally, our results hint at the presence of a “vestibular mirror neuron system”. PMID:23326302

  4. Tracking without perceiving: a dissociation between eye movements and motion perception.

    PubMed

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2011-02-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept.

  5. Tracking Without Perceiving: A Dissociation Between Eye Movements and Motion Perception

    PubMed Central

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2011-01-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept. PMID:21189353

  6. Contrast and assimilation in motion perception and smooth pursuit eye movements.

    PubMed

    Spering, Miriam; Gegenfurtner, Karl R

    2007-09-01

    The analysis of visual motion serves many different functions ranging from object motion perception to the control of self-motion. The perception of visual motion and the oculomotor tracking of a moving object are known to be closely related and are assumed to be controlled by shared brain areas. We compared perceived velocity and the velocity of smooth pursuit eye movements in human observers in a paradigm that required the segmentation of target object motion from context motion. In each trial, a pursuit target and a visual context were independently perturbed simultaneously to briefly increase or decrease in speed. Observers had to accurately track the target and estimate target speed during the perturbation interval. Here we show that the same motion signals are processed in fundamentally different ways for perception and steady-state smooth pursuit eye movements. For the computation of perceived velocity, motion of the context was subtracted from target motion (motion contrast), whereas pursuit velocity was determined by the motion average (motion assimilation). We conclude that the human motion system uses these computations to optimally accomplish different functions: image segmentation for object motion perception and velocity estimation for the control of smooth pursuit eye movements.
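
    The two computations described above can be shown with toy arithmetic; an unweighted average is assumed for the pursuit side purely for illustration, since the abstract does not state the weighting.

        # Toy illustration (arbitrary values): perception subtracts context motion from
        # target motion (contrast), whereas steady-state pursuit is assumed here to
        # follow a simple unweighted average (assimilation).
        target_velocity = 10.0   # deg/s
        context_velocity = 4.0   # deg/s, brief perturbation of the background

        perceived_velocity = target_velocity - context_velocity        # motion contrast
        pursuit_velocity = (target_velocity + context_velocity) / 2.0  # motion assimilation
        print(perceived_velocity, pursuit_velocity)                    # 6.0 vs 7.0 deg/s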

  7. Sparse Coding of Natural Human Motion Yields Eigenmotions Consistent Across People

    NASA Astrophysics Data System (ADS)

    Thomik, Andreas; Faisal, A. Aldo

    2015-03-01

    Providing a precise mathematical description of the structure of natural human movement is a challenging problem. We use a data-driven approach to seek a generative model of movement capturing the underlying simplicity of spatial and temporal structure of behaviour observed in daily life. In perception, the analysis of natural scenes has shown that sparse codes of such scenes are information theoretic efficient descriptors with direct neuronal correlates. Translating from perception to action, we identify a generative model of movement generation by the human motor system. Using wearable full-hand motion capture, we measure the digit movement of the human hand in daily life. We learn a dictionary of "eigenmotions" which we use for sparse encoding of the movement data. We show that the dictionaries are generally well preserved across subjects with small deviations accounting for individuality of the person and variability in tasks. Further, the dictionary elements represent motions which can naturally describe hand movements. Our findings suggest the motor system can compose complex movement behaviours out of the spatially and temporally sparse activation of "eigenmotion" neurons, and is consistent with data on grasp-type specificity of specialised neurons in the premotor cortex. Andreas is supported by the Luxemburg Research Fund (1229297).
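
    The sketch below shows a generic sparse-coding pipeline in the spirit of this abstract rather than the authors' actual method: the synthetic data, dictionary size, and sparsity settings are illustrative assumptions, with scikit-learn's DictionaryLearning standing in for whatever solver the study used.

        import numpy as np
        from sklearn.decomposition import DictionaryLearning

        # Learn a small dictionary of "eigenmotion"-like components from windows of
        # (here, synthetic) joint-angle data and encode each window with few atoms.
        rng = np.random.default_rng(0)
        n_windows, window_len, n_joints = 300, 20, 5
        X = rng.standard_normal((n_windows, window_len * n_joints))  # stand-in for motion capture

        dico = DictionaryLearning(n_components=12, alpha=1.0, max_iter=200,
                                  transform_algorithm="omp",
                                  transform_n_nonzero_coefs=3, random_state=0)
        codes = dico.fit(X).transform(X)     # sparse activations of the learned atoms
        print(dico.components_.shape, codes.shape)
        print("mean active atoms per window:", (np.abs(codes) > 1e-8).sum(axis=1).mean())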

  8. Load-sensitive impairment of working memory for biological motion in schizophrenia.

    PubMed

    Lee, Hannah; Kim, Jejoong

    2017-01-01

    Impaired working memory (WM) is a core cognitive deficit in schizophrenia. Nevertheless, past studies have reported that patients may also benefit from increasing salience of memory stimuli. Such efficient encoding largely depends upon precise perception. Thus an investigation on the relationship between perceptual processing and WM would be worthwhile. Here, we used biological motion (BM), a socially relevant stimulus that schizophrenics have difficulty discriminating from similar meaningless motions, in a delayed-response task. Non-BM stimuli and static polygons were also used for comparison. In each trial, one of the three types of stimuli was presented followed by two probes, with a short delay in between. Participants were asked to indicate whether one of them was identical to the memory item or both were novel. The number of memory items was one or two. Healthy controls were more accurate in recognizing BM than non-BM regardless of memory loads. Patients with schizophrenia exhibited similar accuracy patterns to those of controls in the Load 1 condition only. These results suggest that information contained in BM could facilitate WM encoding in general, but the effect is vulnerable to the increase of cognitive load in schizophrenia, implying inefficient encoding driven by imprecise perception.

  9. A research on motion design for APP's loading pages based on time perception

    NASA Astrophysics Data System (ADS)

    Cao, Huai; Hu, Xiaoyun

    2018-04-01

    Owing to objective constraints such as network bandwidth and hardware performance, waiting remains an unavoidable part of using mobile applications. Relevant research shows that users' feelings during a waiting scenario can affect their evaluation of the whole product and the services it provides. With the development of user experience and interface design as disciplines, the role of motion effects in interface design has attracted more and more scholarly attention. Current research on motion design for waiting scenarios, however, remains incomplete. This article uses the basic theory and experimental methods of cognitive psychology to explore how motion design affects users' time perception while they wait for app pages to load. It first analyzes the factors that affect the waiting experience of loading pages based on the theory of time perception, and then discusses how motion design influences perceived waiting time and the corresponding design strategies. By analyzing existing loading motion designs, the article then classifies current loading motions and reports an experiment verifying the impact of different motion types on users' time perception. The results show that perceived waiting time for mobile app loading pages is related to the type of loading motion, and that the combined type of loading motion shortens perceived waiting time most effectively, as reflected in its mean score on the time-perception measure.

  10. Spatiotemporal Filter for Visual Motion Integration from Pursuit Eye Movements in Humans and Monkeys

    PubMed Central

    Liu, Bing

    2017-01-01

    Despite the enduring interest in motion integration, a direct measure of the space–time filter that the brain imposes on a visual scene has been elusive. This is perhaps because of the challenge of estimating a 3D function from perceptual reports in psychophysical tasks. We take a different approach. We exploit the close connection between visual motion estimates and smooth pursuit eye movements to measure stimulus–response correlations across space and time, computing the linear space–time filter for global motion direction in humans and monkeys. Although derived from eye movements, we find that the filter predicts perceptual motion estimates quite well. To distinguish visual from motor contributions to the temporal duration of the pursuit motion filter, we recorded single-unit responses in the monkey middle temporal cortical area (MT). We find that pursuit response delays are consistent with the distribution of cortical neuron latencies and that temporal motion integration for pursuit is consistent with a short integration MT subpopulation. Remarkably, the visual system appears to preferentially weight motion signals across a narrow range of foveal eccentricities rather than uniformly over the whole visual field, with a transiently enhanced contribution from locations along the direction of motion. We find that the visual system is most sensitive to motion falling at approximately one-third the radius of the stimulus aperture. Hypothesizing that the visual drive for pursuit is related to the filtered motion energy in a motion stimulus, we compare measured and predicted eye acceleration across several other target forms. SIGNIFICANCE STATEMENT A compact model of the spatial and temporal processing underlying global motion perception has been elusive. We used visually driven smooth eye movements to find the 3D space–time function that best predicts both eye movements and perception of translating dot patterns. We found that the visual system does not appear to use all available motion signals uniformly, but rather weights motion preferentially in a narrow band at approximately one-third the radius of the stimulus. Although not universal, the filter predicts responses to other types of stimuli, demonstrating a remarkable degree of generalization that may lead to a deeper understanding of visual motion processing. PMID:28003348
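
    A minimal reverse-correlation sketch of the general stimulus-response approach described above follows; it is not the study's actual analysis, and the simulated filter shape, latency, and noise level are assumptions made only for illustration.

        import numpy as np

        # Estimate a temporal filter by cross-correlating a white-noise motion signal
        # with a simulated eye-velocity response.
        rng = np.random.default_rng(1)
        dt, n = 0.001, 200_000                     # 1 ms steps
        stimulus = rng.standard_normal(n)          # fluctuating motion-direction signal

        lags = np.arange(200)                      # 0-199 ms of filter support
        true_filter = np.exp(-(lags - 80) ** 2 / (2 * 15 ** 2))   # assumed ~80 ms latency
        response = np.convolve(stimulus, true_filter, mode="full")[:n]
        response += rng.standard_normal(n) * 5.0   # motor/measurement noise

        # For white-noise input, the stimulus-response cross-correlation at each lag
        # recovers the filter up to a scale factor.
        est = np.array([np.dot(stimulus[:n - k], response[k:]) / (n - k) for k in lags])
        print("estimated peak lag (ms):", lags[np.argmax(est)])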

  11. When eyes drive hand: Influence of non-biological motion on visuo-motor coupling.

    PubMed

    Thoret, Etienne; Aramaki, Mitsuko; Bringoux, Lionel; Ystad, Sølvi; Kronland-Martinet, Richard

    2016-01-26

    Many studies have stressed that not only human movement execution but also the perception of motion is constrained by specific kinematics. For instance, it has been shown that visuo-manual tracking of a spotlight is optimal when the spotlight motion complies with biological rules such as the so-called 1/3 power law, which establishes the co-variation between the velocity and the trajectory curvature of the movement. The visual or kinesthetic perception of a geometry induced by motion has also been shown to be constrained by such biological rules. In the present study, we investigated whether the geometry induced by the visuo-motor coupling of biological movements is also constrained by the 1/3 power law under visual open-loop control, i.e. without visual feedback of arm displacement. We showed that when someone was asked to synchronize a drawing movement with a visual spotlight following a circular shape, the geometry of the reproduced shape was biased by visual kinematics that did not respect the 1/3 power law. In particular, elliptical shapes were reproduced when the circle was traced with kinematics corresponding to an ellipse. Moreover, the distortions observed here were larger than in the perceptual tasks, stressing the role of motor attractors in such visuo-motor coupling. Finally, by investigating the direct influence of visual kinematics on motor reproduction, our results reconcile previous knowledge on the sensorimotor coupling of biological motions with external stimuli and provide evidence for the amodal encoding of biological motion. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
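
    For reference, the 1/3 power law discussed above ties tangential velocity to path curvature as v(t) = K * curvature(t)^(-1/3); the sketch below applies it to an arbitrary ellipse purely to show the resulting velocity modulation (the ellipse and gain K are not taken from the study).

        import numpy as np

        # Velocity profile implied by the 1/3 power law along an elliptical path.
        phi = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
        a, b, K = 2.0, 1.0, 1.0                    # arbitrary ellipse axes and gain

        # Curvature of the ellipse x = a*cos(phi), y = b*sin(phi).
        curvature = (a * b) / ((a**2 * np.sin(phi)**2 + b**2 * np.cos(phi)**2) ** 1.5)
        velocity = K * curvature ** (-1.0 / 3.0)   # 1/3 power law

        print("velocity on flattest vs most curved parts:",
              round(velocity.max(), 3), round(velocity.min(), 3))
        # Velocity peaks where the path is flattest and dips at the curved tips, the
        # kinematic signature that biological drawing movements follow.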

  12. Improved Visual Cognition through Stroboscopic Training

    PubMed Central

    Appelbaum, L. Gregory; Schroeder, Julia E.; Cain, Matthew S.; Mitroff, Stephen R.

    2011-01-01

    Humans have a remarkable capacity to learn and adapt, but surprisingly little research has demonstrated generalized learning in which new skills and strategies can be used flexibly across a range of tasks and contexts. In the present work we examined whether generalized learning could result from visual–motor training under stroboscopic visual conditions. Individuals were assigned to either an experimental condition that trained with stroboscopic eyewear or to a control condition that underwent identical training with non-stroboscopic eyewear. The training consisted of multiple sessions of athletic activities during which participants performed simple drills such as throwing and catching. To determine if training led to generalized benefits, we used computerized measures to assess perceptual and cognitive abilities on a variety of tasks before and after training. Computer-based assessments included measures of visual sensitivity (central and peripheral motion coherence thresholds), transient spatial attention (a useful field of view – dual task paradigm), and sustained attention (multiple-object tracking). Results revealed that stroboscopic training led to significantly greater re-test improvement in central visual field motion sensitivity and transient attention abilities. No training benefits were observed for peripheral motion sensitivity or peripheral transient attention abilities, nor were benefits seen for sustained attention during multiple-object tracking. These findings suggest that stroboscopic training can effectively improve some, but not all aspects of visual perception and attention. PMID:22059078

  13. Neural Correlates of Coherent and Biological Motion Perception in Autism

    ERIC Educational Resources Information Center

    Koldewyn, Kami; Whitney, David; Rivera, Susan M.

    2011-01-01

    Recent evidence suggests those with autism may be generally impaired in visual motion perception. To examine this, we investigated both coherent and biological motion processing in adolescents with autism employing both psychophysical and fMRI methods. Those with autism performed as well as matched controls during coherent motion perception but…

  14. The Perception of Auditory Motion

    PubMed Central

    Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  15. Directional Limits on Motion Transparency Assessed Through Colour-Motion Binding.

    PubMed

    Maloney, Ryan T; Clifford, Colin W G; Mareschal, Isabelle

    2018-03-01

    Motion-defined transparency is the perception of two or more distinct moving surfaces at the same retinal location. We explored the limits of motion transparency using superimposed surfaces of randomly positioned dots defined by differences in motion direction and colour. In one experiment, dots were red or green and we varied the proportion of dots of a single colour that moved in a single direction ('colour-motion coherence') and measured the threshold direction difference for discriminating between two directions. When colour-motion coherences were high (e.g., 90% of red dots moving in one direction), a smaller direction difference was required to correctly bind colour with direction than at low coherences. In another experiment, we varied the direction difference between the surfaces and measured the threshold colour-motion coherence required to discriminate between them. Generally, colour-motion coherence thresholds decreased with increasing direction differences, stabilising at direction differences around 45°. Different stimulus durations were compared, and thresholds were higher at the shortest (150 ms) compared with the longest (1,000 ms) duration. These results highlight different yet interrelated aspects of the task and the fundamental limits of the mechanisms involved: the resolution of narrowly separated directions in motion processing and the local sampling of dot colours from each surface.
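
    As a rough illustration of the colour-motion coherence manipulation described above, the Python sketch below assigns a colour and a direction to each dot of a two-surface transparency display. The directions, the even red/green split, and the function name are illustrative assumptions, not the study's actual parameters or code.

        import numpy as np

        def assign_dots(n_dots, coherence, dirs=(45.0, -45.0), rng=np.random):
            """Assign a colour and a motion direction to each dot.

            With colour-motion coherence c, a fraction c of red dots move in
            dirs[0] (the rest in dirs[1]); green dots mirror this pattern.
            """
            is_red = rng.rand(n_dots) < 0.5          # True = red, False = green
            coherent = rng.rand(n_dots) < coherence  # dots obeying the colour-direction pairing
            direction = np.where(is_red == coherent, dirs[0], dirs[1])
            return is_red, direction

        # e.g., 90% colour-motion coherence, as in the high-coherence example above
        is_red, direction = assign_dots(400, coherence=0.9)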

  16. Discrimination of curvature from motion during smooth pursuit eye movements and fixation.

    PubMed

    Ross, Nicholas M; Goettker, Alexander; Schütz, Alexander C; Braun, Doris I; Gegenfurtner, Karl R

    2017-09-01

    Smooth pursuit and motion perception have mainly been investigated with stimuli moving along linear trajectories. Here we studied the quality of pursuit movements to curved motion trajectories in human observers and examined whether the pursuit responses would be sensitive enough to discriminate various degrees of curvature. In a two-interval forced-choice task subjects pursued a Gaussian blob moving along a curved trajectory and then indicated in which interval the curve was flatter. We also measured discrimination thresholds for the same curvatures during fixation. Motion curvature had some specific effects on smooth pursuit properties: trajectories with larger amounts of curvature elicited lower open-loop acceleration, lower pursuit gain, and larger catch-up saccades compared with less curved trajectories. Initially, target motion curvatures were underestimated; however, ∼300 ms after pursuit onset pursuit responses closely matched the actual curved trajectory. We calculated perceptual thresholds for curvature discrimination, which were on the order of 1.5 degrees of visual angle (°) for a 7.9° curvature standard. Oculometric sensitivity to curvature discrimination based on the whole pursuit trajectory was quite similar to perceptual performance. Oculometric thresholds based on smaller time windows were higher. Thus smooth pursuit can quite accurately follow moving targets with curved trajectories, but temporal integration over longer periods is necessary to reach perceptual thresholds for curvature discrimination. NEW & NOTEWORTHY Even though motion trajectories in the real world are frequently curved, most studies of smooth pursuit and motion perception have investigated linear motion. We show that pursuit initially underestimates the curvature of target motion and is able to reproduce the target curvature ∼300 ms after pursuit onset. Temporal integration of target motion over longer periods is necessary for pursuit to reach the level of precision found in perceptual discrimination of curvature. Copyright © 2017 the American Physiological Society.
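
    The curvature of a sampled trajectory (target or gaze) can be quantified with the standard differential-geometry formula; the sketch below is a generic illustration under that definition, not the authors' analysis code, and the paper's own curvature metric may be parameterized differently.

        import numpy as np

        def curvature(x, y, t):
            """kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2) for a sampled 2-D path."""
            dx, dy = np.gradient(x, t), np.gradient(y, t)
            ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
            return np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

        # Illustrative check on a circular arc of radius 7.9 (deg of visual angle),
        # whose true curvature is 1/7.9 everywhere; sampling values are arbitrary.
        t = np.linspace(0.0, 1.0, 500)
        x, y = 7.9 * np.cos(0.5 * t), 7.9 * np.sin(0.5 * t)
        print(curvature(x, y, t).mean())   # approximately 1/7.9 = 0.127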

  17. Ventral aspect of the visual form pathway is not critical for the perception of biological motion

    PubMed Central

    Gilaie-Dotan, Sharon; Saygin, Ayse Pinar; Lorenzi, Lauren J.; Rees, Geraint; Behrmann, Marlene

    2015-01-01

    Identifying the movements of those around us is fundamental for many daily activities, such as recognizing actions, detecting predators, and interacting with others socially. A key question concerns the neurobiological substrates underlying biological motion perception. Although the ventral “form” visual cortex is standardly activated by biologically moving stimuli, whether these activations are functionally critical for biological motion perception or are epiphenomenal remains unknown. To address this question, we examined whether focal damage to regions of the ventral visual cortex, resulting in significant deficits in form perception, adversely affects biological motion perception. Six patients with damage to the ventral cortex were tested with sensitive point-light display paradigms. All patients were able to recognize unmasked point-light displays and their perceptual thresholds were not significantly different from those of three different control groups, one of which comprised brain-damaged patients with spared ventral cortex (n > 50). Importantly, these six patients performed significantly better than patients with damage to regions critical for biological motion perception. To assess the necessary contribution of different regions in the ventral pathway to biological motion perception, we complement the behavioral findings with a fine-grained comparison between the lesion location and extent, and the cortical regions standardly implicated in biological motion processing. This analysis revealed that the ventral aspects of the form pathway (e.g., fusiform regions, ventral extrastriate body area) are not critical for biological motion perception. We hypothesize that the role of these ventral regions is to provide enhanced multiview/posture representations of the moving person rather than to represent biological motion perception per se. PMID:25583504

  18. [Contribution of the study of singing in tune in musically non-expert subjects: importance of short term memory of the pitch (19 to 28 year-old subjects)].

    PubMed

    Belin, S; Peuvergne, A; Sarfati, J

    2005-01-01

    In singing, which requires precise knowledge of the relevant musical code, accuracy of intonation plays a central role. Singing in tune requires perceiving pitch precisely and memorizing it before planning and executing the vocal movement that produces the correct pitch. Our work investigated the role of short-term memory for pitch in singing accuracy. For that purpose, the experimental protocol of Deutsch (1970) was adapted for a perception task and a production task. Participants were selected for their singing accuracy and separated into two groups of ten: one singing in tune and one out of tune. All participants perceived pitch height accurately and were musically non-expert. In the perception task, participants compared single pitches or two-pitch patterns separated by a five-second delay; in the production task, they reproduced single pitches or two-pitch patterns after a five-second delay. The delay was filled with intervening numbers, with intervening tones, or with no disturbing sound. In both the perception and production tasks, the presence of intervening tones markedly impaired performance on every trial. The in-tune group performed better on all exercises, whereas the out-of-tune group had difficulty with both single pitches and two-pitch patterns and was more disrupted by the intervening material. These results suggest that short-term memory for pitch and accuracy of intonation are closely linked. Further research is needed to determine whether difficulty singing in tune results from an inefficient short-term memory for pitch, or whether such difficulty hinders the proper construction of that memory.

  19. Individual differences in the perception of biological motion and fragmented figures are not correlated

    PubMed Central

    Jung, Eunice L.; Zadbood, Asieh; Lee, Sang-Hun; Tomarken, Andrew J.; Blake, Randolph

    2013-01-01

    We live in a cluttered, dynamic visual environment that poses a challenge for the visual system: for objects, including those that move about, to be perceived, information specifying those objects must be integrated over space and over time. Does a single, omnibus mechanism perform this grouping operation, or does grouping depend on separate processes specialized for different feature aspects of the object? To address this question, we tested a large group of healthy young adults on their abilities to perceive static fragmented figures embedded in noise and to perceive dynamic point-light biological motion figures embedded in dynamic noise. There were indeed substantial individual differences in performance on both tasks, but none of the statistical tests we applied to this data set uncovered a significant correlation between those performance measures. These results suggest that the two tasks, despite their superficial similarity, require different segmentation and grouping processes that are largely unrelated to one another. Whether those processes are embodied in distinct neural mechanisms remains an open question. PMID:24198799

  20. Individual differences in the perception of biological motion and fragmented figures are not correlated.

    PubMed

    Jung, Eunice L; Zadbood, Asieh; Lee, Sang-Hun; Tomarken, Andrew J; Blake, Randolph

    2013-01-01

    We live in a cluttered, dynamic visual environment that poses a challenge for the visual system: for objects, including those that move about, to be perceived, information specifying those objects must be integrated over space and over time. Does a single, omnibus mechanism perform this grouping operation, or does grouping depend on separate processes specialized for different feature aspects of the object? To address this question, we tested a large group of healthy young adults on their abilities to perceive static fragmented figures embedded in noise and to perceive dynamic point-light biological motion figures embedded in dynamic noise. There were indeed substantial individual differences in performance on both tasks, but none of the statistical tests we applied to this data set uncovered a significant correlation between those performance measures. These results suggest that the two tasks, despite their superficial similarity, require different segmentation and grouping processes that are largely unrelated to one another. Whether those processes are embodied in distinct neural mechanisms remains an open question.

  1. The influence of an immersive virtual environment on the segmental organization of postural stabilizing responses.

    PubMed

    Keshner, E A; Kenyon, R V

    2000-01-01

    We examined the effect of a 3-dimensional stereoscopic scene on segmental stabilization. Eight subjects participated in static sway and locomotion experiments with a visual scene that moved sinusoidally or at constant velocity about the pitch or roll axes. Segmental displacements, Fast Fourier Transforms, and Root Mean Square values were calculated. In both pitch and roll, subjects exhibited greater magnitudes of motion in head and trunk than ankle. Smaller amplitudes and frequent phase reversals suggested control of the ankle by segmental proprioceptive inputs and ground reaction forces rather than by the visual-vestibular signals. Postural controllers may set limits of motion at each body segment rather than be governed solely by a perception of the visual vertical. Two locomotor strategies were also exhibited, implying that some subjects could override the effect of the roll axis optic flow field. Our results demonstrate task dependent differences that argue against using static postural responses to moving visual fields when assessing more dynamic tasks.

  2. The 14th Annual Conference on Manual Control. [digital simulation of human operator dynamics

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Human operator dynamics during actual manual control or while monitoring the automatic control systems involved in air-to-air tracking, automobile driving, the operation of undersea vehicles, and remote handling are examined. Optimal control models and the use of mathematical theory in representing human behavior in complex man-machine system tasks are discussed with emphasis on eye/head tracking and scanning; perception and attention allocation; decision making; and motion simulation and effects.

  3. Neck Proprioception Shapes Body Orientation and Perception of Motion

    PubMed Central

    Pettorossi, Vito Enrico; Schieppati, Marco

    2014-01-01

    This review article deals with some effects of neck muscle proprioception on human balance, gait trajectory, subjective straight-ahead (SSA), and self-motion perception. These effects are easily observed during neck muscle vibration, a strong stimulus for the spindle primary afferent fibers. We first recall the early findings on human balance, gait trajectory, and SSA induced by limb and neck muscle vibration. Then, more recent findings on self-motion perception of vestibular origin are described. The use of a vestibular asymmetric yaw-rotation stimulus for emphasizing the proprioceptive modulation of motion perception from the neck is mentioned. In addition, an attempt has been made to conjointly discuss the effects of unilateral neck proprioception on motion perception, SSA, and walking trajectory. Neck vibration also induces persistent aftereffects on the SSA and on self-motion perception of vestibular origin. These perceptual effects depend on the intensity, duration, and side of the conditioning vibratory stimulation, and on muscle status. These effects can be maintained for hours when prolonged high-frequency vibration is superimposed on muscle contraction. Overall, this brief outline emphasizes the contribution of neck muscle inflow to the construction and fine-tuning of perception of body orientation and motion. Furthermore, it indicates that tonic neck-proprioceptive input may induce persistent influences on the subject’s mental representation of space. These plastic changes might adapt motion sensitivity to lasting or permanent head positional or motor changes. PMID:25414660

  4. Neck proprioception shapes body orientation and perception of motion.

    PubMed

    Pettorossi, Vito Enrico; Schieppati, Marco

    2014-01-01

    This review article deals with some effects of neck muscle proprioception on human balance, gait trajectory, subjective straight-ahead (SSA), and self-motion perception. These effects are easily observed during neck muscle vibration, a strong stimulus for the spindle primary afferent fibers. We first recall the early findings on human balance, gait trajectory, and SSA induced by limb and neck muscle vibration. Then, more recent findings on self-motion perception of vestibular origin are described. The use of a vestibular asymmetric yaw-rotation stimulus for emphasizing the proprioceptive modulation of motion perception from the neck is mentioned. In addition, an attempt has been made to conjointly discuss the effects of unilateral neck proprioception on motion perception, SSA, and walking trajectory. Neck vibration also induces persistent aftereffects on the SSA and on self-motion perception of vestibular origin. These perceptual effects depend on the intensity, duration, and side of the conditioning vibratory stimulation, and on muscle status. These effects can be maintained for hours when prolonged high-frequency vibration is superimposed on muscle contraction. Overall, this brief outline emphasizes the contribution of neck muscle inflow to the construction and fine-tuning of perception of body orientation and motion. Furthermore, it indicates that tonic neck-proprioceptive input may induce persistent influences on the subject's mental representation of space. These plastic changes might adapt motion sensitivity to lasting or permanent head positional or motor changes.

  5. Spectral fingerprints of large-scale cortical dynamics during ambiguous motion perception.

    PubMed

    Helfrich, Randolph F; Knepper, Hannah; Nolte, Guido; Sengelmann, Malte; König, Peter; Schneider, Till R; Engel, Andreas K

    2016-11-01

    Ambiguous stimuli have been widely used to study the neuronal correlates of consciousness. Recently, it has been suggested that conscious perception might arise from the dynamic interplay of functionally specialized but widely distributed cortical areas. While previous research mainly focused on phase coupling as a correlate of cortical communication, more recent findings indicated that additional coupling modes might coexist and possibly subserve distinct cortical functions. Here, we studied two coupling modes, namely phase and envelope coupling, which might differ in their origins, putative functions and dynamics. Therefore, we recorded 128-channel EEG while participants performed a bistable motion task and utilized state-of-the-art source-space connectivity analysis techniques to study the functional relevance of different coupling modes for cortical communication. Our results indicate that gamma-band phase coupling in extrastriate visual cortex might mediate the integration of visual tokens into a moving stimulus during ambiguous visual stimulation. Furthermore, our results suggest that long-range fronto-occipital gamma-band envelope coupling sustains the horizontal percept during ambiguous motion perception. Additionally, our results support the idea that local parieto-occipital alpha-band phase coupling controls the inter-hemispheric information transfer. These findings provide correlative evidence for the notion that synchronized oscillatory brain activity reflects the processing of sensory input as well as the information integration across several spatiotemporal scales. The results indicate that distinct coupling modes are involved in different cortical computations and that the rich spatiotemporal correlation structure of the brain might constitute the functional architecture for cortical processing and specific multi-site communication. Hum Brain Mapp 37:4099-4111, 2016. © 2016 Wiley Periodicals, Inc.
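
    The distinction between the two coupling modes studied above can be illustrated on band-limited signals with the Hilbert transform: phase coupling is commonly quantified by the consistency of the instantaneous phase difference (a phase-locking value), envelope coupling by the correlation of the amplitude envelopes. The sketch below uses toy signals and generic metrics; it is not the study's source-space analysis pipeline.

        import numpy as np
        from scipy.signal import hilbert

        def phase_locking_value(x, y):
            """Consistency of the instantaneous phase difference (0 = none, 1 = perfect)."""
            dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
            return np.abs(np.mean(np.exp(1j * dphi)))

        def envelope_correlation(x, y):
            """Pearson correlation of the amplitude envelopes."""
            ex, ey = np.abs(hilbert(x)), np.abs(hilbert(y))
            return np.corrcoef(ex, ey)[0, 1]

        # Toy example: two noisy 40-Hz oscillations sharing a slow amplitude
        # modulation (values are arbitrary, not taken from the EEG data).
        t = np.arange(0, 10, 1 / 500.0)
        envelope = 1 + 0.5 * np.sin(2 * np.pi * 0.3 * t)
        x = envelope * np.sin(2 * np.pi * 40 * t) + 0.2 * np.random.randn(t.size)
        y = envelope * np.sin(2 * np.pi * 40 * t + 1.0) + 0.2 * np.random.randn(t.size)
        print(phase_locking_value(x, y), envelope_correlation(x, y))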

  6. Video quality assessment method motivated by human visual perception

    NASA Astrophysics Data System (ADS)

    He, Meiling; Jiang, Gangyi; Yu, Mei; Song, Yang; Peng, Zongju; Shao, Feng

    2016-11-01

    Research on video quality assessment (VQA) plays a crucial role in improving the efficiency of video coding and the performance of video processing. It is well acknowledged that the motion energy model simulates the receptive fields of V1 neurons and generates motion energy responses in the middle temporal area, modeling motion perception in the human visual system. Motivated by this biological evidence for visual motion perception, a VQA method is proposed in this paper, which comprises the motion perception quality index and the spatial index. To be more specific, the motion energy model is applied to evaluate the temporal distortion severity of each frequency component generated from the difference of Gaussian filter bank, which produces the motion perception quality index, and the gradient similarity measure is used to evaluate the spatial distortion of the video sequence to get the spatial quality index. Experimental results on the LIVE, CSIQ, and IVP video databases demonstrate that a random forests regression model trained on the generated quality indices corresponds closely to human visual perception and offers significant improvements over comparable well-performing methods. The proposed method has higher consistency with subjective perception and higher generalization capability.
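
    The abstract does not give the exact form of the gradient similarity measure used for the spatial index, so the sketch below shows a generic gradient-magnitude similarity of the kind commonly used to score spatial distortion; the stabilising constant and pooling by the mean are assumptions, not the paper's formulation.

        import numpy as np

        def gradient_magnitude(frame):
            """Gradient magnitude of a grayscale frame via finite differences."""
            gy, gx = np.gradient(frame.astype(float))
            return np.hypot(gx, gy)

        def gradient_similarity(reference, distorted, c=0.0026):
            """Mean per-pixel similarity of gradient magnitudes (c avoids division by zero)."""
            g_ref, g_dis = gradient_magnitude(reference), gradient_magnitude(distorted)
            similarity = (2 * g_ref * g_dis + c) / (g_ref**2 + g_dis**2 + c)
            return similarity.mean()

        # Illustrative use on synthetic frames with values in [0, 1].
        reference = np.random.rand(64, 64)
        distorted = np.clip(reference + 0.05 * np.random.randn(64, 64), 0, 1)
        print(gradient_similarity(reference, distorted))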

  7. Audiovisual associations alter the perception of low-level visual motion

    PubMed Central

    Kafaligonul, Hulusi; Oluk, Can

    2015-01-01

    Motion perception is a pervasive nature of vision and is affected by both immediate pattern of sensory inputs and prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception are dependent on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions isolating low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed and this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to high-level attention based motion system and early-level visual motion processing has some potential role. PMID:25873869

  8. Perceptual training yields rapid improvements in visually impaired youth.

    PubMed

    Nyquist, Jeffrey B; Lappin, Joseph S; Zhang, Ruyuan; Tadin, Duje

    2016-11-30

    Visual function demands coordinated responses to information over a wide field of view, involving both central and peripheral vision. Visually impaired individuals often seem to underutilize peripheral vision, even in absence of obvious peripheral deficits. Motivated by perceptual training studies with typically sighted adults, we examined the effectiveness of perceptual training in improving peripheral perception of visually impaired youth. Here, we evaluated the effectiveness of three training regimens: (1) an action video game, (2) a psychophysical task that combined attentional tracking with a spatially and temporally unpredictable motion discrimination task, and (3) a control video game. Training with both the action video game and modified attentional tracking yielded improvements in visual performance. Training effects were generally larger in the far periphery and appear to be stable 12 months after training. These results indicate that peripheral perception might be under-utilized by visually impaired youth and that this underutilization can be improved with only ~8 hours of perceptual training. Moreover, the similarity of improvements following attentional tracking and action video-game training suggest that well-documented effects of action video-game training might be due to the sustained deployment of attention to multiple dynamic targets while concurrently requiring rapid attending and perception of unpredictable events.

  9. Perceptual Distortions in Pitch and Time Reveal Active Prediction and Support for an Auditory Pitch-Motion Hypothesis

    PubMed Central

    Henry, Molly J.; McAuley, J. Devin

    2013-01-01

    A number of accounts of human auditory perception assume that listeners use prior stimulus context to generate predictions about future stimulation. Here, we tested an auditory pitch-motion hypothesis that was developed from this perspective. Listeners judged either the time change (i.e., duration) or pitch change of a comparison frequency glide relative to a standard (referent) glide. Under a constant-velocity assumption, listeners were hypothesized to use the pitch velocity (Δf/Δt) of the standard glide to generate predictions about the pitch velocity of the comparison glide, leading to perceptual distortions along the to-be-judged dimension when the velocities of the two glides differed. These predictions were borne out in the pattern of relative points of subjective equality by a significant three-way interaction between the velocities of the two glides and task. In general, listeners’ judgments along the task-relevant dimension (pitch or time) were affected by expectations generated by the constant-velocity standard, but in an opposite manner for the two stimulus dimensions. When the comparison glide velocity was faster than the standard, listeners overestimated time change, but underestimated pitch change, whereas when the comparison glide velocity was slower than the standard, listeners underestimated time change, but overestimated pitch change. Perceptual distortions were least evident when the velocities of the standard and comparison glides were matched. Fits of an imputed velocity model further revealed increasingly larger distortions at faster velocities. The present findings provide support for the auditory pitch-motion hypothesis and add to a larger body of work revealing a role for active prediction in human auditory perception. PMID:23936462

  10. Perceptual distortions in pitch and time reveal active prediction and support for an auditory pitch-motion hypothesis.

    PubMed

    Henry, Molly J; McAuley, J Devin

    2013-01-01

    A number of accounts of human auditory perception assume that listeners use prior stimulus context to generate predictions about future stimulation. Here, we tested an auditory pitch-motion hypothesis that was developed from this perspective. Listeners judged either the time change (i.e., duration) or pitch change of a comparison frequency glide relative to a standard (referent) glide. Under a constant-velocity assumption, listeners were hypothesized to use the pitch velocity (Δf/Δt) of the standard glide to generate predictions about the pitch velocity of the comparison glide, leading to perceptual distortions along the to-be-judged dimension when the velocities of the two glides differed. These predictions were borne out in the pattern of relative points of subjective equality by a significant three-way interaction between the velocities of the two glides and task. In general, listeners' judgments along the task-relevant dimension (pitch or time) were affected by expectations generated by the constant-velocity standard, but in an opposite manner for the two stimulus dimensions. When the comparison glide velocity was faster than the standard, listeners overestimated time change, but underestimated pitch change, whereas when the comparison glide velocity was slower than the standard, listeners underestimated time change, but overestimated pitch change. Perceptual distortions were least evident when the velocities of the standard and comparison glides were matched. Fits of an imputed velocity model further revealed increasingly larger distortions at faster velocities. The present findings provide support for the auditory pitch-motion hypothesis and add to a larger body of work revealing a role for active prediction in human auditory perception.
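
    A minimal numerical illustration of the constant-velocity logic described above follows; all glide values are hypothetical and chosen only to show the direction of the predicted distortions, not to reproduce the study's stimuli or model fits.

        # The standard glide's pitch velocity (delta-f / delta-t) is imputed to the
        # comparison glide. Numbers below are hypothetical.
        std_df, std_dt = 200.0, 0.5            # standard: 200 Hz over 0.5 s
        v_std = std_df / std_dt                # imputed velocity = 400 Hz/s

        cmp_df, cmp_dt = 300.0, 0.5            # a faster comparison: 600 Hz/s
        expected_dt = cmp_df / v_std           # duration implied by v_std = 0.75 s
        expected_df = v_std * cmp_dt           # pitch change implied by v_std = 200 Hz

        # Because expected_dt (0.75 s) exceeds the actual 0.5 s, time change should
        # be overestimated; because expected_df (200 Hz) is below the actual 300 Hz,
        # pitch change should be underestimated, matching the pattern reported above.
        print(expected_dt, expected_df)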

  11. Expansion of direction space around the cardinal axes revealed by smooth pursuit eye movements.

    PubMed

    Krukowski, Anton E; Stone, Leland S

    2005-01-20

    It is well established that perceptual direction discrimination shows an oblique effect; thresholds are higher for motion along diagonal directions than for motion along cardinal directions. Here, we compare simultaneous direction judgments and pursuit responses for the same motion stimuli and find that both pursuit and perceptual thresholds show similar anisotropies. The pursuit oblique effect is robust under a wide range of experimental manipulations, being largely resistant to changes in trajectory (radial versus tangential motion), speed (10 versus 25 deg/s), directional uncertainty (blocked versus randomly interleaved), and cognitive state (tracking alone versus concurrent tracking and perceptual tasks). Our data show that the pursuit oblique effect is caused by an effective expansion of direction space surrounding the cardinal directions and the requisite compression of space for other directions. This expansion suggests that the directions around the cardinal directions are in some way overrepresented in the visual cortical pathways that drive both smooth pursuit and perception.

  12. Expansion of direction space around the cardinal axes revealed by smooth pursuit eye movements

    NASA Technical Reports Server (NTRS)

    Krukowski, Anton E.; Stone, Leland S.

    2005-01-01

    It is well established that perceptual direction discrimination shows an oblique effect; thresholds are higher for motion along diagonal directions than for motion along cardinal directions. Here, we compare simultaneous direction judgments and pursuit responses for the same motion stimuli and find that both pursuit and perceptual thresholds show similar anisotropies. The pursuit oblique effect is robust under a wide range of experimental manipulations, being largely resistant to changes in trajectory (radial versus tangential motion), speed (10 versus 25 deg/s), directional uncertainty (blocked versus randomly interleaved), and cognitive state (tracking alone versus concurrent tracking and perceptual tasks). Our data show that the pursuit oblique effect is caused by an effective expansion of direction space surrounding the cardinal directions and the requisite compression of space for other directions. This expansion suggests that the directions around the cardinal directions are in some way overrepresented in the visual cortical pathways that drive both smooth pursuit and perception.

  13. Gravity matters: Motion perceptions modified by direction and body position.

    PubMed

    Claassen, Jens; Bardins, Stanislavs; Spiegel, Rainer; Strupp, Michael; Kalla, Roger

    2016-07-01

    Motion coherence thresholds are consistently higher at lower velocities. In this study we analysed the influence of the position and direction of moving objects on their perception and thereby the influence of gravity. This paradigm allows a differentiation to be made between coherent and randomly moving objects in an upright and a reclining position with a horizontal or vertical axis of motion. 18 young healthy participants were examined in this coherent threshold paradigm. Motion coherence thresholds were significantly lower when position and motion were congruent with gravity independent of motion velocity (p=0.024). In the other conditions higher motion coherence thresholds (MCT) were found at lower velocities and vice versa (p<0.001). This result confirms previous studies with higher MCT at lower velocity but is in contrast to studies concerning perception of virtual turns and optokinetic nystagmus, in which differences of perception were due to different directions irrespective of body position, i.e. perception took place in an egocentric reference frame. Since the observed differences occurred in an upright position only, perception of coherent motion in this study is defined by an earth-centered reference frame rather than by an ego-centric frame. Copyright © 2016 Elsevier Inc. All rights reserved.
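
    Motion coherence thresholds of the kind measured above are typically obtained with random-dot displays in which only a fraction of dots carries the signal direction. The sketch below is a generic single-frame update for such a stimulus, not the study's code; dot count, speed, and the wrap-around rule are assumptions.

        import numpy as np

        def rdk_step(xy, coherence, direction_deg, speed, field=1.0, rng=np.random):
            """One frame of a random-dot kinematogram.

            A proportion `coherence` of dots steps in `direction_deg`; the rest
            step in random directions. Dots wrap around a square field.
            """
            n = xy.shape[0]
            signal = rng.rand(n) < coherence
            theta = np.where(signal,
                             np.deg2rad(direction_deg),
                             rng.uniform(0, 2 * np.pi, n))
            step = speed * np.column_stack([np.cos(theta), np.sin(theta)])
            return np.mod(xy + step, field)

        # Illustrative use: 200 dots, 30% coherence, rightward motion, ~1 s at 60 Hz.
        dots = np.random.rand(200, 2)
        for _ in range(60):
            dots = rdk_step(dots, coherence=0.3, direction_deg=0.0, speed=0.01)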

  14. Spatial Alignment and Response Hand in Geometric and Motion Illusions

    PubMed Central

    Scocchia, Lisa; Paroli, Michela; Stucchi, Natale A.; Sedda, Anna

    2017-01-01

    Perception of visual illusions is susceptible to manipulation of their spatial properties. Further, illusions can sometimes affect visually guided actions, especially the movement planning phase. Remarkably, visual properties of objects related to actions, such as affordances, can prime more accurate perceptual judgements. In spite of the amount of knowledge available on affordances and on the influence of illusions on actions (or lack thereof), virtually nothing is known about the reverse: the influence of action-related parameters on the perception of visual illusions. Here, we tested the hypothesis that the response mode (which can be linked to action-relevant features) can affect perception of the Poggendorff (geometric) and of the Vanishing Point (motion) illusion. We explored the role of hand dominance (right dominant versus left non-dominant hand) and its interaction with stimulus spatial alignment (i.e., congruency between visual stimulus and the hand used for responses). Seventeen right-handed participants performed our tasks with their right and left hands, and the stimuli were presented in regular and mirror-reversed views. It turned out that the regular version of the Poggendorff display generates a stronger illusion than the mirror version, and that participants are less accurate and show more variability when they use their left hand in responding to the Vanishing Point. In summary, our results show that there is a marginal effect of hand precision in motion-related illusions, which is absent for geometrical illusions. In the latter, attentional anisometry seems to play a greater role in generating the illusory effect. Taken together, our findings suggest that changes in the response mode (here: manual action-related parameters) do not necessarily affect illusion perception. Therefore, although intuitively speaking there should be at least unidirectional effects of perception on action, and possible interactions between the two systems, this simple study still suggests their relative independence, except for the case when the less skilled (non-dominant) hand and arguably more deliberate responses are used. PMID:28769830

  15. Dynamical evolution of motion perception.

    PubMed

    Kanai, Ryota; Sheth, Bhavin R; Shimojo, Shinsuke

    2007-03-01

    Motion is defined as a sequence of positional changes over time. However, in perception, spatial position and motion dynamically interact with each other. This reciprocal interaction suggests that the perception of a moving object itself may dynamically evolve following the onset of motion. Here, we show evidence that the percept of a moving object systematically changes over time. In experiments, we introduced a transient gap in the motion sequence or a brief change in some feature (e.g., color or shape) of an otherwise smoothly moving target stimulus. Observers were highly sensitive to the gap or transient change if it occurred soon after motion onset (≤200 ms), but significantly less so if it occurred later (≥300 ms). Our findings suggest that the moving stimulus is initially perceived as a time series of discrete potentially isolatable frames; later failures to perceive change suggest that over time, the stimulus begins to be perceived as a single, indivisible gestalt integrated over space as well as time, which could well be the signature of an emergent stable motion percept.

  16. Global motion perception is related to motor function in 4.5-year-old children born at risk of abnormal development

    PubMed Central

    Chakraborty, Arijit; Anstice, Nicola S.; Jacobs, Robert J.; Paudel, Nabin; LaGasse, Linda L.; Lester, Barry M.; McKinlay, Christopher J. D.; Harding, Jane E.; Wouldes, Trecia A.; Thompson, Benjamin

    2017-01-01

    Global motion perception is often used as an index of dorsal visual stream function in neurodevelopmental studies. However, the relationship between global motion perception and visuomotor control, a primary function of the dorsal stream, is unclear. We measured global motion perception (motion coherence threshold; MCT) and performance on standardized measures of motor function in 606 4.5-year-old children born at risk of abnormal neurodevelopment. Visual acuity, stereoacuity and verbal IQ were also assessed. After adjustment for verbal IQ or both visual acuity and stereoacuity, MCT was modestly, but significantly, associated with all components of motor function with the exception of gross motor scores. In a separate analysis, stereoacuity, but not visual acuity, was significantly associated with both gross and fine motor scores. These results indicate that the development of motion perception and stereoacuity are associated with motor function in pre-school children. PMID:28435122

  17. Smelling directions: Olfaction modulates ambiguous visual motion perception

    PubMed Central

    Kuang, Shenbing; Zhang, Tao

    2014-01-01

    Smells are often accompanied by simultaneous visual sensations. Previous studies have documented enhanced olfactory performance with the concurrent presence of congruent color- or shape-related visual cues, and facilitated visual object perception when congruent smells are simultaneously present. These visual object-olfaction interactions suggest the existence of couplings between the olfactory pathway and the visual ventral processing stream. However, it is not known if olfaction can modulate visual motion perception, a function that is related to the visual dorsal stream. We tested this possibility by examining the influence of olfactory cues on the perception of ambiguous visual motion signals. We showed that, after introducing an association between motion directions and olfactory cues, olfaction could indeed bias ambiguous visual motion perceptions. Our result that olfaction modulates visual motion processing adds to the current knowledge of cross-modal interactions and implies a possible functional linkage between the olfactory system and the visual dorsal pathway. PMID:25052162

  18. Seeing the world topsy-turvy: The primary role of kinematics in biological motion inversion effects.

    PubMed

    Fitzgerald, Sue-Anne; Brooks, Anna; van der Zwan, Rick; Blair, Duncan

    2014-01-01

    Physical inversion of whole or partial human body representations typically has catastrophic consequences on the observer's ability to perform visual processing tasks. Explanations usually focus on the effects of inversion on the visual system's ability to exploit configural or structural relationships, but more recently have also implicated motion or kinematic cue processing. Here, we systematically tested the role of both on perceptions of sex from upright and inverted point-light walkers. Our data suggest that inversion results in systematic degradations of the processing of kinematic cues. Specifically and intriguingly, they reveal sex-based kinematic differences: Kinematics characteristic of females generally are resistant to inversion effects, while those of males drive systematic sex misperceptions. Implications of the findings are discussed.

  19. The development of a test methodology for the evaluation of EVA gloves

    NASA Technical Reports Server (NTRS)

    O'Hara, John M.; Cleland, John; Winfield, Dan

    1988-01-01

    This paper describes the development of a standardized set of tests designed to assess EVA-gloved hand capabilities in six measurement domains: range of motion, strength, tactile perception, dexterity, fatigue, and comfort. Based upon an assessment of general human-hand functioning and EVA task requirements, several tests within each measurement domain were developed to provide a comprehensive evaluation. All tests were designed to be conducted in a glove box with the bare hand as a baseline and the EVA glove at operating pressure.

  20. The perception of ego-motion change in environments with varying depth: Interaction of stereo and optic flow.

    PubMed

    Ott, Florian; Pohl, Ladina; Halfmann, Marc; Hardiess, Gregor; Mallot, Hanspeter A

    2016-07-01

    When estimating ego-motion in environments (e.g., tunnels, streets) with varying depth, human subjects confuse ego-acceleration with environment narrowing and ego-deceleration with environment widening. Festl, Recktenwald, Yuan, and Mallot (2012) demonstrated that in nonstereoscopic viewing conditions, this happens despite the fact that retinal measurements of acceleration rate, a variable related to tau-dot, should allow veridical perception. Here we address the question of whether additional depth cues (specifically binocular stereo, object occlusion, or constant average object size) help break the confusion between narrowing and acceleration. Using a forced-choice paradigm, the confusion is shown to persist even if unambiguous stereo information is provided. The confusion can also be demonstrated in an adjustment task in which subjects were asked to keep a constant speed in a tunnel with varying diameter: Subjects increased speed in widening sections and decreased speed in narrowing sections even though stereoscopic depth information was provided. If object-based depth information (stereo, occlusion, constant average object size) is added, the confusion between narrowing and acceleration still remains but may be slightly reduced. All experiments are consistent with a simple matched filter algorithm for ego-motion detection, neglecting both parallactic and stereoscopic depth information, but leave open the possibility of cue combination at a later stage.
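
    For readers unfamiliar with tau-dot: in the time-to-contact literature (following Lee, 1976), the optical variable tau and its time derivative are commonly defined as

        \tau(t) \;=\; -\frac{x(t)}{\dot{x}(t)} \;\approx\; \frac{\theta(t)}{\dot{\theta}(t)},
        \qquad
        \dot{\tau}(t) \;=\; \frac{x(t)\,\ddot{x}(t)}{\dot{x}(t)^{2}} \;-\; 1,

    where x is the distance to the point of contact and \theta the optical angle subtended by a surface element. Constant ego-speed (\ddot{x} = 0) gives \dot{\tau} = -1, so deviations of \dot{\tau} from -1 carry acceleration information. These are the standard textbook definitions, included only as background; the acceleration-rate variable analysed in the study above may be defined somewhat differently.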

  1. Self-organizing neural integration of pose-motion features for human action recognition

    PubMed Central

    Parisi, German I.; Weber, Cornelius; Wermter, Stefan

    2015-01-01

    The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its outperforming ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically-motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. During the training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results for a public benchmark of domestic daily actions. PMID:26106323
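
    The core growth rule of a Growing When Required network, as commonly formulated (Marsland et al., 2002), is sketched below: a new node is inserted when the best-matching node both fits the input poorly (low activity) and has already been trained (low habituation); otherwise the winner is adapted toward the input. This is a deliberately minimal, single-layer sketch with assumed parameter values; the model above additionally uses hierarchical GWR layers, neighbour adaptation, edge aging, and pose-motion integration.

        import numpy as np

        class MinimalGWR:
            """Minimal Growing When Required network: node insertion and winner update only."""

            def __init__(self, dim, activity_threshold=0.85, habituation_threshold=0.1,
                         learning_rate=0.1, habituation_decay=0.05):
                self.w = [np.random.rand(dim), np.random.rand(dim)]  # two seed nodes
                self.h = [1.0, 1.0]                                  # 1.0 = unhabituated
                self.a_t, self.h_t = activity_threshold, habituation_threshold
                self.eps, self.decay = learning_rate, habituation_decay

            def step(self, x):
                dists = [np.linalg.norm(x - w) for w in self.w]
                b = int(np.argmin(dists))            # best-matching node
                activity = np.exp(-dists[b])
                if activity < self.a_t and self.h[b] < self.h_t:
                    # Poor fit by a well-trained node: grow a new node between input and winner.
                    self.w.append((x + self.w[b]) / 2.0)
                    self.h.append(1.0)
                else:
                    # Otherwise adapt the winner (scaled by its habituation) and habituate it.
                    self.w[b] = self.w[b] + self.eps * self.h[b] * (x - self.w[b])
                    self.h[b] = max(0.0, self.h[b] - self.decay)
                return b

        # Illustrative use on random 10-D "pose-motion" feature vectors.
        net = MinimalGWR(dim=10)
        for x in np.random.rand(500, 10):
            net.step(x)
        print(len(net.w), "nodes")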

  2. Two Simon tasks with different sources of conflict: an ERP study of motion- and location-based compatibility effects.

    PubMed

    Galashan, Daniela; Wittfoth, Matthias; Fehr, Thorsten; Herrmann, Manfred

    2008-07-01

    Behavioral and electrophysiological correlates of two Simon tasks were examined using comparable stimuli but different task-irrelevant and conflict-inducing stimulus features. Whereas target shape was always the task-relevant stimulus attribute, either target location (location-based task) or motion direction within the target stimuli (motion-based task) was used as a source of conflict. Data from ten healthy participants who performed both tasks are presented. In the motion-based task the incompatible condition showed smaller P300 amplitudes at Pz than the compatible condition and the location-based task yielded a trend towards a reduced P300 amplitude in the incompatible condition. For both tasks, no P300 latency differences between the conditions were found at Pz. The results suggest that the motion-based task elicits behavioral and electrophysiological effects comparable with regular Simon tasks. As all stimuli in the motion-based Simon task were presented centrally the present data strongly argue against the attention-shifting account as an explanatory approach.

  3. Self-motion perception: assessment by real-time computer-generated animations

    NASA Technical Reports Server (NTRS)

    Parker, D. E.; Phillips, J. O.

    2001-01-01

    We report a new procedure for assessing complex self-motion perception. In three experiments, subjects manipulated a 6 degree-of-freedom magnetic-field tracker which controlled the motion of a virtual avatar so that its motion corresponded to the subjects' perceived self-motion. The real-time animation created by this procedure was stored using a virtual video recorder for subsequent analysis. Combined real and illusory self-motion and vestibulo-ocular reflex eye movements were evoked by cross-coupled angular accelerations produced by roll and pitch head movements during passive yaw rotation in a chair. Contrary to previous reports, illusory self-motion did not correspond to expectations based on semicircular canal stimulation. Illusory pitch head-motion directions were as predicted for only 37% of trials; whereas, slow-phase eye movements were in the predicted direction for 98% of the trials. The real-time computer-generated animations procedure permits use of naive, untrained subjects who lack a vocabulary for reporting motion perception and is applicable to basic self-motion perception studies, evaluation of motion simulators, assessment of balance disorders and so on.

  4. A model for the pilot's use of motion cues in roll-axis tracking tasks

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Junker, A. M.

    1977-01-01

    Simulated target-following and disturbance-regulation tasks were explored with subjects using visual-only and combined visual and motion cues. The effects of motion cues on task performance and pilot response behavior were appreciably different for the two task configurations and were consistent with data reported in earlier studies for similar task configurations. The optimal-control model for pilot/vehicle systems provided a task-independent framework for accounting for the pilot's use of motion cues. Specifically, the availability of motion cues was modeled by augmenting the set of perceptual variables to include position, rate, acceleration, and acceleration-rate of the motion simulator, and results were consistent with the hypothesis of attention-sharing between visual and motion variables. This straightforward informational model allowed accurate model predictions of the effects of motion cues on a variety of response measures for both the target-following and disturbance-regulation tasks.

  5. Sustained acceleration on perception of relative position and motion.

    PubMed

    McKinley, R Andrew; Tripp, Lloyd D; Fullerton, Kathy L; Goodyear, Chuck

    2013-03-01

    Air-to-air refueling, formation flying, and projectile countermeasures all rely on a pilot's ability to be aware of his position and motion relative to another object. Eight subjects participated in the study, all members of the sustained acceleration stress panel at Wright-Patterson AFB, OH. The task consisted of the subject performing a two-dimensional join up task between a KC-135 tanker and an F-16. The objective was to guide the nose of the F-16 to the posterior end of the boom extended from the tanker, and hold this position for 2 s. If the F-16 went past the tanker, or misaligned with the tanker, it would be recorded as an error. These tasks were performed during four G(z) acceleration profiles starting from a baseline acceleration of 1.5 G(z). The plateaus were 3, 5, and 7 G(z). The final acceleration exposure was a simulated aerial combat maneuver (SACM). One subject was an outlier and therefore omitted from analysis. The mean capture time and percent error data were recorded and compared separately. There was a significant difference in error percentage change from baseline among the G(z) profiles, but not capture time. Mean errors were approximately 15% higher in the 7 G profile and 10% higher during the SACM. This experiment suggests that the ability to accurately perceive the motion of objects relative to other objects is impeded at acceleration levels of 7 G(z) or higher.

  6. Use of cues in virtual reality depends on visual feedback.

    PubMed

    Fulvio, Jacqueline M; Rokers, Bas

    2017-11-22

    3D motion perception is of central importance to daily life. However, when tested in laboratory settings, sensitivity to 3D motion signals is found to be poor, leading to the view that heuristics and prior assumptions are critical for 3D motion perception. Here we explore an alternative: sensitivity to 3D motion signals is context-dependent and must be learned based on explicit visual feedback in novel environments. The need for action-contingent visual feedback is well-established in the developmental literature. For example, young kittens that are passively moved through an environment, but unable to move through it themselves, fail to develop accurate depth perception. We find that these principles also obtain in adult human perception. Observers that do not experience visual consequences of their actions fail to develop accurate 3D motion perception in a virtual reality environment, even after prolonged exposure. By contrast, observers that experience the consequences of their actions improve performance based on available sensory cues to 3D motion. Specifically, we find that observers learn to exploit the small motion parallax cues provided by head jitter. Our findings advance understanding of human 3D motion processing and form a foundation for future study of perception in virtual and natural 3D environments.

  7. Effect of motion cues during complex curved approach and landing tasks: A piloted simulation study

    NASA Technical Reports Server (NTRS)

    Scanlon, Charles H.

    1987-01-01

    A piloted simulation study was conducted to examine the effect of motion cues using a high fidelity simulation of commercial aircraft during the performance of complex approach and landing tasks in the Microwave Landing System (MLS) signal environment. The data from these tests indicate that in a high complexity MLS approach task with moderate turbulence and wind, the pilot uses motion cues to improve path tracking performance. No significant differences in tracking accuracy were noted for the low and medium complexity tasks, regardless of the presence of motion cues. Higher control input rates were measured for all tasks when motion was used. Pilot eye scan, as measured by instrument dwell time, was faster when motion cues were used regardless of the complexity of the approach tasks. Pilot comments indicated a preference for motion. With motion cues, pilots appeared to work harder in all levels of task complexity and to improve tracking performance in the most complex approach task.

  8. Global Motion Perception in 2-Year-Old Children: A Method for Psychophysical Assessment and Relationships With Clinical Measures of Visual Function

    PubMed Central

    Yu, Tzu-Ying; Jacobs, Robert J.; Anstice, Nicola S.; Paudel, Nabin; Harding, Jane E.; Thompson, Benjamin

    2013-01-01

    Purpose. We developed and validated a technique for measuring global motion perception in 2-year-old children, and assessed the relationship between global motion perception and other measures of visual function. Methods. Random dot kinematogram (RDK) stimuli were used to measure motion coherence thresholds in 366 children at risk of neurodevelopmental problems at 24 ± 1 months of age. RDKs of variable coherence were presented and eye movements were analyzed offline to grade the direction of the optokinetic reflex (OKR) for each trial. Motion coherence thresholds were calculated by fitting psychometric functions to the resulting datasets. Test–retest reliability was assessed in 15 children, and motion coherence thresholds were measured in a group of 10 adults using OKR and behavioral responses. Standard age-appropriate optometric tests also were performed. Results. Motion coherence thresholds were measured successfully in 336 (91.8%) children using the OKR technique, but only 31 (8.5%) using behavioral responses. The mean threshold was 41.7 ± 13.5% for 2-year-old children and 3.3 ± 1.2% for adults. Within-assessor reliability and test–retest reliability were high in children. Children's motion coherence thresholds were significantly correlated with stereoacuity (LANG I & II test, ρ = 0.29, P < 0.001; Frisby, ρ = 0.17, P = 0.022), but not with binocular visual acuity (ρ = 0.11, P = 0.07). In adults OKR and behavioral motion coherence thresholds were highly correlated (intraclass correlation = 0.81, P = 0.001). Conclusions. Global motion perception can be measured in 2-year-old children using the OKR. This technique is reliable and data from adults suggest that motion coherence thresholds based on the OKR are related to motion perception. Global motion perception was related to stereoacuity in children. PMID:24282224
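
    Fitting a psychometric function to the proportion of correct responses at each coherence level, as described above, can be sketched as follows. The Weibull form, the guess and lapse rates, and the data values are assumptions for illustration, not the study's actual fitting procedure or data.

        import numpy as np
        from scipy.optimize import curve_fit

        def weibull(c, alpha, beta, gamma=0.5, lam=0.02):
            """Proportion correct vs. coherence c, with guess rate gamma and lapse rate lam."""
            return gamma + (1 - gamma - lam) * (1 - np.exp(-(c / alpha) ** beta))

        # Hypothetical proportions of correctly graded OKR directions per coherence level.
        coherence = np.array([0.05, 0.10, 0.20, 0.40, 0.60, 0.80])
        p_correct = np.array([0.52, 0.55, 0.63, 0.80, 0.92, 0.97])

        (alpha, beta), _ = curve_fit(weibull, coherence, p_correct, p0=[0.3, 2.0])
        print(f"estimated motion coherence threshold: {alpha:.2f}, slope: {beta:.1f}")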

  9. Improving Sensorimotor Function Using Stochastic Vestibular Stimulation

    NASA Technical Reports Server (NTRS)

    Galvan, R. C.; Clark, T. K.; Merfeld, D. M.; Bloomberg, J. J.; Mulavara, A. P.; Oman, C. M.

    2014-01-01

    Astronauts experience sensorimotor changes during spaceflight, particularly during G-transition phases. Post flight sensorimotor changes may include postural and gait instability, spatial disorientation, and visual performance decrements, all of which can degrade operational capabilities of the astronauts and endanger the crew. Crewmember safety would be improved if these detrimental effects of spaceflight could be mitigated by a sensorimotor countermeasure and even further if adaptation to baseline could be facilitated. The goal of this research is to investigate the potential use of stochastic vestibular stimulation (SVS) as a technology to improve sensorimotor function. We hypothesize that low levels of SVS will improve sensorimotor performance through stochastic resonance (SR). The SR phenomenon occurs when the response of a nonlinear system to a weak input signal is optimized by the application of a particular nonzero level of noise. Two studies have been initiated to investigate the beneficial effects and potential practical usage of SVS. In both studies, electrical vestibular stimulation is applied via electrodes on the mastoid processes using a constant current stimulator. The first study aims to determine the repeatability of the effect of vestibular stimulation on sensorimotor performance and perception in order to better understand the practical use of SVS. The beneficial effect of low levels of SVS on balance performance has been shown in the past. This research uses the same balance task repeated multiple times within a day and across days to study the repeatability of the stimulation effects. The balance test consists of 50 sec trials in which the subject stands with his or her feet together, arms crossed, and eyes closed on compliant foam. Varying levels of SVS, ranging from 0-700 micro A, are applied across different trials. The subject-specific optimal SVS level is that which results in the best balance performance as measured by inertial measurement units placed on the upper and lower torso of the subjects. Additionally, each individual’s threshold for illusory motion perception of suprasensory electrical vestibular stimulation is measured multiple times within and across days to better understand how multiple SVS test methods compare. The second study aims to demonstrate stochastic resonance in the vestibular system using a perception based motion recognition task. This task measures an individual’s velocity threshold of motion recognition using a 6-degree of freedom Stewart platform and a 3-down/1-up staircase procedure. For this study, thresholds are determined using 150 trials in the upright, head-centered roll tilt motion direction at a 0.2 Hz frequency. We aim to demonstrate the characteristic bell shaped curve associated with stochastic resonance with each subject’s motion recognition thresholds at varying SVS levels ranging from 0 to 1500 micro A. The curve includes the individual’s baseline threshold with no SVS, optimal or minimal threshold at some mid-level of SVS, and finally degraded or increased threshold at a high SVS level. An additional aim is to formally retest each subject at his or her individual optimal SVS level on a different day than the original testing for additional validity. 
The overall purpose of this research is to further quantify the effects of SVS on various sensorimotor tasks and investigate the practical implications of its use in the context of human space flight so that it may be implemented in the future as a component of a comprehensive countermeasure plan for adaptation to G-transitions.
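
    A generic 3-down/1-up staircase of the kind mentioned above can be sketched as follows: the stimulus level is lowered after three consecutive correct responses and raised after any error, converging on roughly 79% correct. The step size, starting level, and the simulated observer are assumptions, not the study's protocol.

        import random

        def staircase_3down_1up(initial, step, respond, n_trials=150, floor=0.0):
            """3-down/1-up adaptive staircase; respond(level) returns True if correct."""
            level, run, history = initial, 0, []
            for _ in range(n_trials):
                correct = respond(level)
                history.append((level, correct))
                if correct:
                    run += 1
                    if run == 3:                       # three correct in a row: step down
                        level, run = max(floor, level - step), 0
                else:                                  # any error: step up
                    level, run = level + step, 0
            return history

        # Simulated observer: always correct above a true threshold of 1.0 deg/s,
        # guessing (50% correct) below it. Values are purely illustrative.
        history = staircase_3down_1up(initial=4.0, step=0.25,
                                      respond=lambda v: v >= 1.0 or random.random() < 0.5)
        print(sum(lvl for lvl, _ in history[-30:]) / 30)   # rough threshold estimate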

  10. Amplifying the helicopter drift in a conformal HMD

    NASA Astrophysics Data System (ADS)

    Schmerwitz, Sven; Knabl, Patrizia M.; Lueken, Thomas; Doehler, Hans-Ullrich

    2016-05-01

    Helicopter operations require a well-controlled and minimal lateral drift shortly before ground contact. Any lateral speed exceeding this small threshold can cause a dangerous momentum around the roll axis, which may lead to a total roll-over of the helicopter. As long as pilots can observe visual cues from the ground, they are able to control the helicopter drift easily. But whenever natural vision is reduced or even obscured, e.g. due to night, fog, or dust, this controllability diminishes. Therefore, helicopter operators could benefit from some type of "drift indication" that mitigates the influence of a degraded visual environment. Generally, humans derive ego motion from the perceived flow of objects in the environment. Because the perceived visual cues are located close to the helicopter, even small movements can be recognized. This fact was used to investigate a modified drift indication. To enhance the perception of ego motion in a conformal HMD symbol set, the measured movement was used to generate pattern motion in the forward field of view, close to or on the landing pad. The paper will discuss the method of amplified ego-motion drift indication. Aspects concerning impact factors such as visualization type, location, and gain will be addressed. Further, conclusions from previous studies, a high-fidelity experiment and a part-task experiment, will be provided. A part-task study will be presented that compared different amplified drift indications against a predictor. Twenty-four participants, 15 of whom held a fixed-wing license and 4 of whom were helicopter pilots, had to perform a dual task on a virtual-reality headset. A simplified control model was used to steer a "helicopter" down to a landing pad while acknowledging randomly placed characters.
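    The underlying idea, amplifying the measured lateral drift and rendering it as pattern motion near the landing pad, can be reduced to a short sketch. The gain, pattern period, and update rate below are illustrative assumptions rather than the symbology parameters studied in the paper.

      class AmplifiedDriftPattern:
          # Minimal sketch: integrate the measured lateral drift velocity,
          # amplify it by a display gain, and wrap it onto a repeating ground
          # texture so the conformal symbology appears to stream past the
          # landing pad. Gain and pattern period are assumed values.
          def __init__(self, gain=4.0, pattern_period_m=2.0):
              self.gain = gain
              self.period = pattern_period_m
              self.offset = 0.0

          def update(self, lateral_velocity_mps, dt):
              self.offset = (self.offset + self.gain * lateral_velocity_mps * dt) % self.period
              return self.offset

      # Example: 0.3 m/s of lateral drift sampled at 60 Hz for one second
      pattern = AmplifiedDriftPattern()
      for _ in range(60):
          offset = pattern.update(0.3, dt=1 / 60)
      print(f"pattern offset after 1 s: {offset:.2f} m (vs. 0.30 m of actual drift)")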

  11. Spatial task performance, sex differences, and motion sickness susceptibility.

    PubMed

    Levine, Max E; Stern, Robert M

    2002-10-01

    There are substantial individual differences in susceptibility to motion sickness, yet little is known about what mediates these differences. Spatial ability and sex have been suggested as possible factors in this relationship. 89 participants (57 women) were administered a Motion Sickness Questionnaire that assesses motion sickness susceptibility, a Water-level Task that gauges sensitivity to gravitational upright, and a Mental Rotation Task that tests an individual's awareness of how objects typically move in space. Significant sex differences were observed in performance of both the Water-level Task (p<.01), and the Mental Rotation Task (p<.005), with women performing less accurately than men. Women also had significantly higher scores on the Motion Sickness Questionnaire (p<.005). Among men, but not women, significant negative relationships were observed between Water-level Task performance and Motion Sickness Questionnaire score (p<.001) and between Mental Rotation Task performance and Motion Sickness Questionnaire score (p<.005). In conclusion, women performed significantly more poorly than men did on the spatial ability tasks and reported significantly more bouts of motion sickness. In addition, men showed a significant negative relationship between spatial ability and motion sickness susceptibility.

  12. Modulation frequency as a cue for auditory speed perception.

    PubMed

    Senna, Irene; Parise, Cesare V; Ernst, Marc O

    2017-07-12

    Unlike vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical for rattling sounds. Naturally, corrugated objects sliding across each other generate rattling sounds whose AM-frequency tends to directly correlate with speed. We found that AM-frequency modulates auditory speed perception in a highly systematic fashion: moving sounds with higher AM-frequency are perceived as moving faster than sounds with lower AM-frequency. Even more interestingly, sounds with higher AM-frequency also induce stronger motion aftereffects. This reveals the existence of specialized neural mechanisms for auditory motion perception, which are sensitive to AM-frequency. Thus, in spatial hearing, the brain successfully capitalizes on the AM-frequency of rattling sounds to estimate the speed of moving objects. This tightly parallels previous findings in motion vision, where spatio-temporal frequency of moving displays systematically affects both speed perception and the magnitude of the motion aftereffects. Such an analogy with vision suggests that motion detection may rely on canonical computations, with similar neural mechanisms shared across the different modalities. © 2017 The Author(s).
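    The proposed cue can be synthesized directly: broadband noise whose amplitude is modulated at a rate proportional to the simulated speed. The sketch below does this with an arbitrary proportionality constant and modulation depth; both are assumptions for illustration, not stimulus parameters from the study.

      import numpy as np

      def rattling_sound(speed, duration=1.0, fs=44100, am_per_speed=10.0, depth=0.9):
          # Rattling-like stimulus: noise carrier with sinusoidal amplitude
          # modulation whose frequency scales with the simulated speed
          # (AM frequency = am_per_speed * speed; the constant is assumed).
          t = np.arange(int(duration * fs)) / fs
          am_freq = am_per_speed * speed
          envelope = 1.0 + depth * np.sin(2 * np.pi * am_freq * t)    # amplitude modulation
          carrier = np.random.default_rng(0).standard_normal(t.size)  # broadband noise
          signal = envelope * carrier
          return signal / np.max(np.abs(signal))                      # normalize to [-1, 1]

      fast = rattling_sound(speed=2.0)   # 20 Hz amplitude modulation
      slow = rattling_sound(speed=0.5)   # 5 Hz amplitude modulation
      print(fast.shape, slow.shape)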

  13. Dissociation of first- and second-order motion systems by perceptual learning

    PubMed Central

    Chubb, Charles

    2013-01-01

    Previous studies investigating transfer of perceptual learning between luminance-defined (LD) motion and texture-contrast-defined (CD) motion tasks have found little or no transfer from LD to CD motion tasks but nearly perfect transfer from CD to LD motion tasks. Here, we introduce a paradigm that yields a clean double dissociation: LD training yields no transfer to the CD task, but more interestingly, CD training yields no transfer to the LD task. Participants were trained in two variants of a global motion task. In one (LD) variant, motion was defined by tokens that differed from the background in mean luminance. In the other (CD) variant, motion was defined by tokens that had mean luminance equal to the background but differed from the background in texture contrast. The task was to judge whether the signal tokens were moving to the right or to the left. Task difficulty was varied by manipulating the proportion of tokens that moved coherently across the four frames of the stimulus display. Performance in each of the LD and CD variants of the task was measured as training proceeded. In each task, training produced substantial improvement in performance in the trained task; however, in neither case did this improvement show any significant transfer to the nontrained task. PMID:22477056
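    The coherence manipulation described above, a proportion of signal tokens stepping in a common direction while the remaining tokens are replotted at random, can be sketched as follows. The sketch covers only the coherence logic; how tokens are rendered (luminance-defined versus contrast-defined) is left out, and the dot count, frame count, and step size are assumptions.

      import numpy as np

      def random_dot_kinematogram(n_dots=100, n_frames=4, coherence=0.5,
                                  step=0.02, direction=+1, rng=None):
          # Generate dot positions in normalized display coordinates: a
          # `coherence` fraction of dots steps horizontally in `direction`,
          # the rest are replotted at random each frame.
          # Returns an array of shape (n_frames, n_dots, 2).
          rng = np.random.default_rng(rng)
          frames = np.empty((n_frames, n_dots, 2))
          frames[0] = rng.random((n_dots, 2))
          n_signal = int(round(coherence * n_dots))
          for t in range(1, n_frames):
              frames[t] = frames[t - 1].copy()
              # Signal dots: common horizontal step, wrapped at the edges.
              frames[t, :n_signal, 0] = (frames[t, :n_signal, 0] + direction * step) % 1.0
              # Noise dots: replotted at random positions.
              frames[t, n_signal:] = rng.random((n_dots - n_signal, 2))
          return frames

      frames = random_dot_kinematogram(coherence=0.3, rng=0)
      print(frames.shape)  # (4, 100, 2)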

  14. Model Predictive Control Based Motion Drive Algorithm for a Driving Simulator

    NASA Astrophysics Data System (ADS)

    Rehmatullah, Faizan

    In this research, we develop a model predictive control based motion drive algorithm for the driving simulator at Toronto Rehabilitation Institute. Motion drive algorithms exploit the limitations of the human vestibular system to create a perception of motion within the constrained workspace of a simulator. In the absence of visual cues, the human perception system is unable to distinguish between acceleration and the force of gravity. The motion drive algorithm determines control inputs to displace the simulator platform and, by using the resulting inertial forces and angular rates, creates the perception of motion. By using model predictive control, we can optimize the use of the simulator workspace for every maneuver while reproducing the perception of the vehicle's motion. Because model predictive control can handle nonlinear constraints, it allows us to incorporate workspace limitations directly.
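    A minimal sketch of the receding-horizon idea behind such a motion drive algorithm is given below, assuming a one-axis double-integrator platform model, a quadratic cost that trades specific-force tracking against platform excursion, and no hard constraints (a real implementation would solve a constrained quadratic program). The weights, horizon, and model are illustrative assumptions, not values from this work.

      import numpy as np

      def mpc_motion_cue(x0, a_ref, dt=0.05, w_track=1.0, w_pos=20.0, w_vel=2.0):
          # Choose platform accelerations u over a horizon that track the
          # vehicle's specific-force reference a_ref while softly penalizing
          # platform position and velocity, so the platform tends to stay in
          # its workspace. Solved here as one unconstrained least-squares step.
          N = len(a_ref)
          A = np.array([[1.0, dt], [0.0, 1.0]])
          B = np.array([[0.5 * dt**2], [dt]])
          # Prediction matrices: x_k = A^k x0 + sum_j A^(k-1-j) B u_j
          Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])   # (2N, 2)
          Gamma = np.zeros((2 * N, N))
          for k in range(N):
              for j in range(k + 1):
                  Gamma[2 * k:2 * k + 2, j] = (np.linalg.matrix_power(A, k - j) @ B).ravel()
          free = Phi @ x0                                                         # free response
          # Weighted least squares in the input sequence u.
          state_w = np.tile([np.sqrt(w_pos), np.sqrt(w_vel)], N)
          M = np.vstack([np.sqrt(w_track) * np.eye(N), state_w[:, None] * Gamma])
          rhs = np.concatenate([np.sqrt(w_track) * a_ref, -state_w * free])
          u, *_ = np.linalg.lstsq(M, rhs, rcond=None)
          return u[0]  # receding horizon: apply only the first move

      # Example: one MPC step for a sustained 1 m/s^2 braking cue, platform at rest
      x0 = np.array([0.0, 0.0])          # [position (m), velocity (m/s)]
      a_ref = np.full(40, -1.0)          # reference specific force over a 2 s horizon
      print("first commanded platform acceleration:", mpc_motion_cue(x0, a_ref))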

  15. The upper spatial limit for perception of displacement is affected by preceding motion.

    PubMed

    Stefanova, Miroslava; Mateeff, Stefan; Hohnsbein, Joachim

    2009-03-01

    The upper spatial limit D(max) for perception of apparent motion of a random dot pattern may be strongly affected by another, collinear, motion that precedes it [Mateeff, S., Stefanova, M., & Hohnsbein, J. (2007). Perceived global direction of a compound of real and apparent motion. Vision Research, 47, 1455-1463]. In the present study this phenomenon was studied with two-dimensional motion stimuli. A random dot pattern moved alternately in the vertical and oblique direction (zig-zag motion). The vertical motion was 1.04 degrees in length and was produced by three discrete spatial steps of the dots. Thereafter the dots were displaced by a single spatial step in the oblique direction. Each motion lasted for 57 ms. The upper spatial limit for perception of the oblique motion was measured under two conditions: the vertical component of the oblique motion and the vertical motion were either in the same or in opposite directions. It was found that the perception of the oblique motion was strongly influenced by the relative direction of the vertical motion that preceded it; in the "same" condition the upper spatial limit was much shorter than in the "opposite" condition. Decreasing the speed of the vertical motion reversed this effect. Interpretations based on networks of motion detectors and on Gestalt theory are discussed.

  16. IQ Predicts Biological Motion Perception in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Rutherford, M. D.; Troje, Nikolaus F.

    2012-01-01

    Biological motion is easily perceived by neurotypical observers when encoded in point-light displays. Some but not all relevant research shows significant deficits in biological motion perception among those with ASD, especially with respect to emotional displays. We tested adults with and without ASD on the perception of masked biological motion…

  17. [Aging affects early stage direction selectivity of MT cells in rhesus monkeys].

    PubMed

    Liang, Zhen; Chen, Yue-Ming; Meng, Xue; Wang, Yi; Zhou, Bao-Zhuo; Xie, Ying-Ying; He, Wen-Sheng

    2012-10-01

    The middle temporal area (MT/V5) plays an important role in motion processing. Neurons in this area respond strongly and selectively to the direction of object motion, and this selectivity of MT neurons has been proposed as a neural mechanism for the perception of motion. Our previous studies found degradation in the direction selectivity of MT neurons in old monkeys, but that selectivity was calculated over the whole response time, so the results could not uncover the mechanism of motion perception over its time course. Furthermore, experiments have found that direction selectivity is enhanced by attention at a later stage of the response; this later, attention-related component should therefore be excluded, as it is in experiments performed under anesthesia. To further characterize the neural mechanism over the response time course, we investigated age-related changes of direction selectivity in the early stage by comparing the proportions of direction-selective MT cells in old and young macaque monkeys using in vivo single-cell recording techniques. Our results show that the proportion of early-stage direction-selective cells is lower in old monkeys than in young monkeys, and that the early-stage direction bias (esDB) of old MT cells is decreased relative to that of young MT cells. Furthermore, the proportion of MT cells with strong early-stage direction selectivity was decreased in old monkeys. Accordingly, early-stage functional degradation of MT cells may mediate the perceptual declines of old primates in visual motion tasks.

  18. Contrasting accounts of direction and shape perception in short-range motion: Counterchange compared with motion energy detection.

    PubMed

    Norman, Joseph; Hock, Howard; Schöner, Gregor

    2014-07-01

    It has long been thought (e.g., Cavanagh & Mather, 1989) that first-order motion-energy extraction via space-time comparator-type models (e.g., the elaborated Reichardt detector) is sufficient to account for human performance in the short-range motion paradigm (Braddick, 1974), including the perception of reverse-phi motion when the luminance polarity of the visual elements is inverted during successive frames. Human observers' ability to discriminate motion direction and use coherent motion information to segregate a region of a random cinematogram and determine its shape was tested; they performed better in the same-, as compared with the inverted-, polarity condition. Computational analyses of short-range motion perception based on the elaborated Reichardt motion energy detector (van Santen & Sperling, 1985) predict, incorrectly, that symmetrical results will be obtained for the same- and inverted-polarity conditions. In contrast, the counterchange detector (Hock, Schöner, & Gilroy, 2009) predicts an asymmetry quite similar to that of human observers in both motion direction and shape discrimination. The further advantage of counterchange, as compared with motion energy, detection for the perception of spatial shape- and depth-from-motion is discussed.
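    The symmetry predicted by a motion-energy account can be checked with a toy computation. The sketch below implements a bare-bones Hassenstein-Reichardt style correlator on a two-frame, one-dimensional dot pattern: with matched polarity it signals the true displacement direction, and with inverted polarity its output simply changes sign (reverse phi), which is the symmetry the human data contradict. Pattern length and displacement are assumptions; this is not the elaborated Reichardt model of van Santen and Sperling.

      import numpy as np

      def reichardt_output(frame1, frame2, shift=1):
          # Toy correlator for two 1-D frames: correlate each location at time 1
          # with its rightward neighbour at time 2, minus the mirror-symmetric
          # (leftward) correlation. Positive = rightward, negative = leftward.
          right = frame1[:-shift] * frame2[shift:]
          left = frame1[shift:] * frame2[:-shift]
          return float(np.sum(right) - np.sum(left))

      rng = np.random.default_rng(0)
      frame1 = rng.choice([-1.0, 1.0], size=200)   # random bright/dark dots (mean zero)
      frame2_same = np.roll(frame1, 1)             # pattern displaced one position rightward
      frame2_inverted = -frame2_same               # same displacement, polarity inverted

      print("same polarity:    ", reichardt_output(frame1, frame2_same))      # > 0 (rightward)
      print("inverted polarity:", reichardt_output(frame1, frame2_inverted))  # < 0 (reverse phi)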

  19. Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.

    PubMed

    Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi

    2017-07-01

    Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion to static visual stimuli (sound-induced visual motion: SIVM): A visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activities in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, as compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and the areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditory-induced and visually-induced motion perception, and neural signals in auditory, visual, and crossmodal motion processing areas closely and directly interact in the perception of SIVM. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Translational Vestibulo-Ocular Reflex and Motion Perception During Interaural Linear Acceleration: Comparison of Different Motion Paradigms

    NASA Technical Reports Server (NTRS)

    Beaton, K. H.; Holly, J. E.; Clement, G. R.; Wood, S. J.

    2011-01-01

    The neural mechanisms to resolve ambiguous tilt-translation motion have been hypothesized to be different for motion perception and eye movements. Previous studies have demonstrated differences in ocular and perceptual responses using a variety of motion paradigms, including Off-Vertical Axis Rotation (OVAR), Variable Radius Centrifugation (VRC), translation along a linear track, and tilt about an Earth-horizontal axis. While the linear acceleration across these motion paradigms is presumably equivalent, there are important differences in semicircular canal cues. The purpose of this study was to compare translation motion perception and horizontal slow phase velocity to quantify consistencies, or lack thereof, across four different motion paradigms. Twelve healthy subjects were exposed to sinusoidal interaural linear acceleration between 0.01 and 0.6 Hz at 1.7 m/s/s (equivalent to a 10° tilt) using OVAR, VRC, roll tilt, and lateral translation. During each trial, subjects verbally reported the amount of perceived peak-to-peak lateral translation and indicated the direction of motion with a joystick. Binocular eye movements were recorded using video-oculography. In general, the gain of translation perception (ratio of reported linear displacement to equivalent linear stimulus displacement) increased with stimulus frequency, while the phase did not significantly vary. However, translation perception was more pronounced during both VRC and lateral translation, which involve actual translation, whereas perceptions were less consistent and more variable during OVAR and roll tilt, which do not involve actual translation. For each motion paradigm, horizontal eye movements were negligible at low frequencies and showed phase lead relative to the linear stimulus. At higher frequencies, the gain of the eye movements increased and became more in phase with the acceleration stimulus. While these results are consistent with the hypothesis that the neural computational strategies for motion perception and eye movements differ, they also indicate that the specific motion platform employed can have a significant effect on both the amplitude and phase of each response.
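    Gain and phase of a sinusoidal response, as reported here, are commonly obtained by fitting sine and cosine components at the stimulus frequency. The sketch below shows that standard least-squares fit on synthetic data; the simulated gain, phase, and noise level are assumptions, not values from the study.

      import numpy as np

      def gain_and_phase(t, response, stim_amplitude, freq_hz):
          # Fit response(t) ~ a*sin(2*pi*f*t) + b*cos(2*pi*f*t) + c by linear
          # least squares, then report gain (response amplitude / stimulus
          # amplitude) and phase in degrees at the stimulus frequency.
          w = 2 * np.pi * freq_hz
          X = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
          (a, b, _), *_ = np.linalg.lstsq(X, response, rcond=None)
          amplitude = np.hypot(a, b)
          phase_deg = np.degrees(np.arctan2(b, a))
          return amplitude / stim_amplitude, phase_deg

      # Example with synthetic data: 0.2 Hz stimulus, response gain 0.6, 25 deg phase lead
      t = np.arange(0, 30, 0.01)
      stim_amp, f = 1.7, 0.2
      response = (0.6 * stim_amp * np.sin(2 * np.pi * f * t + np.radians(25))
                  + 0.05 * np.random.default_rng(0).standard_normal(t.size))
      print(gain_and_phase(t, response, stim_amp, f))   # approximately (0.6, 25)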

  1. Sleep-dependent consolidation benefits fast transfer of time interval training.

    PubMed

    Chen, Lihan; Guo, Lu; Bao, Ming

    2017-03-01

    A previous study has shown that short training (15 min) in explicitly discriminating temporal intervals between two paired auditory beeps, or between two paired tactile taps, can significantly improve observers' ability to classify the perceptual states of visual Ternus apparent motion, whereas training on task-irrelevant sensory properties did not help to improve visual timing (Chen and Zhou in Exp Brain Res 232(6):1855-1864, 2014). The present study examined the role of 'consolidation' after training on task-irrelevant temporal properties, or whether a pure delay (i.e., blank consolidation) following the pretest of the target task would give rise to improved visual interval timing, typified in the visual Ternus display. A pretest-training-posttest procedure was adopted, with discrimination of Ternus apparent motion as the probe. Extended implicit training of timing, in which the time intervals between paired auditory beeps or paired tactile taps were manipulated but the task was to discriminate auditory pitch or tactile intensity, did not lead to training benefits (Exps 1 and 3); however, a delay of 24 h after the implicit training of timing, including solving 'Sudoku puzzles,' made the otherwise absent training benefits observable (Exps 2, 4, 5 and 6). The above improvements in performance were not due to a practice effect of Ternus motion (Exp 7). A general 'blank' consolidation period of 24 h also made improvements of visual timing observable (Exp 8). Taken together, the current findings indicate that sleep-dependent consolidation imposed a general effect, potentially by triggering and maintaining neuroplastic changes in the intrinsic (timing) network to enhance the ability of time perception.

  2. Global motion perception is associated with motor function in 2-year-old children.

    PubMed

    Thompson, Benjamin; McKinlay, Christopher J D; Chakraborty, Arijit; Anstice, Nicola S; Jacobs, Robert J; Paudel, Nabin; Yu, Tzu-Ying; Ansell, Judith M; Wouldes, Trecia A; Harding, Jane E

    2017-09-29

    The dorsal visual processing stream that includes V1, motion sensitive area V5 and the posterior parietal lobe, supports visually guided motor function. Two recent studies have reported associations between global motion perception, a behavioural measure of processing in V5, and motor function in pre-school and school aged children. This indicates a relationship between visual and motor development and also supports the use of global motion perception to assess overall dorsal stream function in studies of human neurodevelopment. We investigated whether associations between vision and motor function were present at 2 years of age, a substantially earlier stage of development. The Bayley III test of Infant and Toddler Development and measures of vision including visual acuity (Cardiff Acuity Cards), stereopsis (Lang stereotest) and global motion perception were attempted in 404 2-year-old children (±4 weeks). Global motion perception (quantified as a motion coherence threshold) was assessed by observing optokinetic nystagmus in response to random dot kinematograms of varying coherence. Linear regression revealed that global motion perception was modestly, but statistically significantly associated with Bayley III composite motor (r² = 0.06, P < 0.001, n = 375) and gross motor scores (r² = 0.06, P < 0.001, n = 375). The associations remained significant when language score was included in the regression model. In addition, when language score was included in the model, stereopsis was significantly associated with composite motor and fine motor scores, but unaided visual acuity was not statistically significantly associated with any of the motor scores. These results demonstrate that global motion perception and binocular vision are associated with motor function at an early stage of development. Global motion perception can be used as a partial measure of dorsal stream function from early childhood. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Coherent modulation of stimulus colour can affect visually induced self-motion perception.

    PubMed

    Nakamura, Shinji; Seno, Takeharu; Ito, Hiroyuki; Sunaga, Shoji

    2010-01-01

    The effects of dynamic colour modulation on vection were investigated to examine whether perceived variation of illumination affects self-motion perception. Participants observed expanding optic flow which simulated their forward self-motion. Onset latency, accumulated duration, and estimated magnitude of the self-motion were measured as indices of vection strength. Colour of the dots in the visual stimulus was modulated between white and red (experiment 1), white and grey (experiment 2), and grey and red (experiment 3). The results indicated that coherent colour oscillation in the visual stimulus significantly suppressed the strength of vection, whereas incoherent or static colour modulation did not affect vection. There was no effect of the types of the colour modulation; both achromatic and chromatic modulations turned out to be effective in inhibiting self-motion perception. Moreover, in a situation where the simulated direction of a spotlight was manipulated dynamically, vection strength was also suppressed (experiment 4). These results suggest that the observer's perception of illumination is critical for self-motion perception, and rapid variation of perceived illumination would impair the reliability of visual information in determining self-motion.

  4. Contextual effects on motion perception and smooth pursuit eye movements.

    PubMed

    Spering, Miriam; Gegenfurtner, Karl R

    2008-08-15

    Smooth pursuit eye movements are continuous, slow rotations of the eyes that allow us to follow the motion of a visual object of interest. These movements are closely related to sensory inputs from the visual motion processing system. To track a moving object in the natural environment, its motion first has to be segregated from the motion signals provided by surrounding stimuli. Here, we review experiments on the effect of the visual context on motion processing with a focus on the relationship between motion perception and smooth pursuit eye movements. While perception and pursuit are closely linked, we show that they can behave quite distinctly when required by the visual context.

  5. Color and luminance in the perception of 1- and 2-dimensional motion.

    PubMed

    Farell, B

    1999-08-01

    An isoluminant color grating usually appears to move more slowly than a luminance grating that has the same physical speed. Yet a grating defined by both color and luminance is seen as perceptually unified and moving at a single intermediate speed. In experiments measuring perceived speed and direction, it was found that color- and luminance-based motion signals are combined differently in the perception of 1-D motion than they are in the perception of 2-D motion. Adding color to a moving 1-D luminance pattern, a grating, slows its perceived speed. Adding color to a moving 2-D luminance pattern, a plaid made of orthogonal gratings, leaves its perceived speed unchanged. Analogous results occur for the perception of the direction of 2-D motion. The visual system appears to discount color when analyzing the motion of luminance-bearing 2-D patterns. This strategy has adaptive advantages, making the sensing of object motion more veridical without sacrificing the ability to see motion at isoluminance.

  6. Global motion perception is related to motor function in 4.5-year-old children born at risk of abnormal development.

    PubMed

    Chakraborty, Arijit; Anstice, Nicola S; Jacobs, Robert J; Paudel, Nabin; LaGasse, Linda L; Lester, Barry M; McKinlay, Christopher J D; Harding, Jane E; Wouldes, Trecia A; Thompson, Benjamin

    2017-06-01

    Global motion perception is often used as an index of dorsal visual stream function in neurodevelopmental studies. However, the relationship between global motion perception and visuomotor control, a primary function of the dorsal stream, is unclear. We measured global motion perception (motion coherence threshold; MCT) and performance on standardized measures of motor function in 606 4.5-year-old children born at risk of abnormal neurodevelopment. Visual acuity, stereoacuity and verbal IQ were also assessed. After adjustment for verbal IQ or both visual acuity and stereoacuity, MCT was modestly, but significantly, associated with all components of motor function with the exception of fine motor scores. In a separate analysis, stereoacuity, but not visual acuity, was significantly associated with both gross and fine motor scores. These results indicate that the development of motion perception and stereoacuity are associated with motor function in pre-school children. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Lesions to right posterior parietal cortex impair visual depth perception from disparity but not motion cues

    PubMed Central

    Leopold, David A.; Humphreys, Glyn W.; Welchman, Andrew E.

    2016-01-01

    The posterior parietal cortex (PPC) is understood to be active when observers perceive three-dimensional (3D) structure. However, it is not clear how central this activity is in the construction of 3D spatial representations. Here, we examine whether PPC is essential for two aspects of visual depth perception by testing patients with lesions affecting this region. First, we measured subjects' ability to discriminate depth structure in various 3D surfaces and objects using binocular disparity. Patients with lesions to right PPC (N = 3) exhibited marked perceptual deficits on these tasks, whereas those with left hemisphere lesions (N = 2) were able to reliably discriminate depth as accurately as control subjects. Second, we presented an ambiguous 3D stimulus defined by structure from motion to determine whether PPC lesions influence the rate of bistable perceptual alternations. Patients' percept durations for the 3D stimulus were generally within a normal range, although the two patients with bilateral PPC lesions showed the fastest perceptual alternation rates in our sample. Intermittent stimulus presentation reduced the reversal rate similarly across subjects. Together, the results suggest that PPC plays a causal role in both inferring and maintaining the perception of 3D structure with stereopsis supported primarily by the right hemisphere, but do not lend support to the view that PPC is a critical contributor to bistable perceptual alternations. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269606

  8. Neural network architecture for form and motion perception (Abstract Only)

    NASA Astrophysics Data System (ADS)

    Grossberg, Stephen

    1991-08-01

    Evidence is given for a new neural network theory of biological motion perception, a motion boundary contour system. This theory clarifies why parallel streams V1 → V2 and V1 → MT exist for static form and motion form processing among the areas V1, V2, and MT of visual cortex. The motion boundary contour system consists of several parallel copies, such that each copy is activated by a different range of receptive field sizes. Each copy is further subdivided into two hierarchically organized subsystems: a motion-oriented contrast (MOC) filter, for preprocessing moving images; and a cooperative-competitive feedback (CC) loop, for generating emergent boundary segmentations of the filtered signals. The present work uses the MOC filter to explain a variety of classical and recent data about short-range and long-range apparent motion percepts that have not yet been explained by alternative models. These data include split motion; reverse-contrast gamma motion; delta motion; visual inertia; group motion in response to a reverse-contrast Ternus display at short interstimulus intervals; speed-up of motion velocity as interflash distance increases or flash duration decreases; dependence of the transition from element motion to group motion on stimulus duration and size; various classical dependencies between flash duration, spatial separation, interstimulus interval, and motion threshold known as Korte's laws; and dependence of motion strength on stimulus orientation and spatial frequency. These results supplement earlier explanations by the model of apparent motion data that other models have not explained; a recently proposed solution of the global aperture problem, including explanations of motion capture and induced motion; an explanation of how parallel cortical systems for static form perception and motion form perception may develop, including a demonstration that these parallel systems are variations on a common cortical design; an explanation of why the geometries of static form and motion form differ, in particular why opposite orientations differ by 90°, whereas opposite directions differ by 180°, and why a cortical stream V1 → V2 → MT is needed; and a summary of how the main properties of other motion perception models can be assimilated into different parts of the motion boundary contour system design.

  9. Objective Motion Cueing Criteria Investigation Based on Three Flight Tasks

    NASA Technical Reports Server (NTRS)

    Zaal, Petrus M. T.; Schroeder, Jeffery A.; Chung, William W.

    2015-01-01

    This paper intends to help establish fidelity criteria to accompany the simulator motion system diagnostic test specified by the International Civil Aviation Organization. Twelve airline transport pilots flew three tasks in the NASA Vertical Motion Simulator under four different motion conditions. The experiment used three different hexapod motion configurations, each with a different tradeoff between motion filter gain and break frequency, and one large motion configuration that utilized as much of the simulator's motion space as possible. The motion condition significantly affected: 1) pilot motion fidelity ratings, and sink rate and lateral deviation at touchdown for the approach and landing task, 2) pilot motion fidelity ratings, roll deviations, maximum pitch rate, and number of stick shaker activations in the stall task, and 3) heading deviation after an engine failure in the takeoff task. Significant differences in pilot-vehicle performance were used to define initial objective motion cueing criteria boundaries. These initial fidelity boundaries show promise but need refinement.
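    The motion filter gain and break frequency that characterize the hexapod configurations refer to the washout filtering applied to vehicle motion before it is commanded to the platform. A minimal first-order high-pass sketch is given below; the filter order and the specific gain and break-frequency values are illustrative assumptions, not the configurations tested in the experiment.

      import numpy as np

      def washout_highpass(accel, dt, gain=0.5, break_freq_hz=0.3):
          # First-order high-pass "washout" filter: pass transient acceleration
          # cues to the motion platform (scaled by `gain`) while washing out
          # sustained acceleration, so the platform drifts back toward neutral.
          wc = 2 * np.pi * break_freq_hz
          alpha = 1.0 / (1.0 + wc * dt)            # simple RC-style discretization
          out = np.zeros_like(accel)
          for i in range(1, len(accel)):
              out[i] = alpha * (out[i - 1] + gain * (accel[i] - accel[i - 1]))
          return out

      # Example: a sustained 2 m/s^2 acceleration step is cued transiently, then washed out
      dt = 0.01
      accel = np.concatenate([np.zeros(100), np.full(500, 2.0)])
      cued = washout_highpass(accel, dt)
      print(cued[105], cued[-1])   # strong initial cue, then decay toward zero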

  10. Prolonged asymmetric vestibular stimulation induces opposite, long-term effects on self-motion perception and ocular responses.

    PubMed

    Pettorossi, V E; Panichi, R; Botti, F M; Kyriakareli, A; Ferraresi, A; Faralli, M; Schieppati, M; Bronstein, A M

    2013-04-01

    Self-motion perception and the vestibulo-ocular reflex (VOR) were investigated in healthy subjects during asymmetric whole body yaw plane oscillations while standing on a platform in the dark. Platform oscillation consisted of two half-sinusoidal cycles of the same amplitude (40°) but different duration, featuring a fast (FHC) and a slow half-cycle (SHC). Rotation consisted of four or 20 consecutive cycles to probe adaptation further with the longer duration protocol. Self-motion perception was estimated by subjects tracking with a pointer the remembered position of an earth-fixed visual target. VOR was measured by electro-oculography. The asymmetric stimulation pattern consistently induced a progressive increase of asymmetry in motion perception, whereby the gain of the tracking response gradually increased during FHCs and decreased during SHCs. The effect was observed already during the first few cycles and further increased during 20 cycles, leading to a totally distorted location of the initial straight-ahead. In contrast, after some initial interindividual variability, the gain of the slow phase VOR became symmetric, decreasing for FHCs and increasing for SHCs. These oppositely directed adaptive effects in motion perception and VOR persisted for nearly an hour. Control conditions using prolonged but symmetrical stimuli produced no adaptive effects on either motion perception or VOR. These findings show that prolonged asymmetric activation of the vestibular system leads to opposite patterns of adaptation of self-motion perception and VOR. The results provide strong evidence that semicircular canal inputs are processed centrally by independent mechanisms for perception of body motion and eye movement control. These divergent adaptation mechanisms enhance awareness of movement toward the faster body rotation, while improving the eye stabilizing properties of the VOR.

  11. Prolonged asymmetric vestibular stimulation induces opposite, long-term effects on self-motion perception and ocular responses

    PubMed Central

    Pettorossi, V E; Panichi, R; Botti, F M; Kyriakareli, A; Ferraresi, A; Faralli, M; Schieppati, M; Bronstein, A M

    2013-01-01

    Self-motion perception and the vestibulo-ocular reflex (VOR) were investigated in healthy subjects during asymmetric whole body yaw plane oscillations while standing on a platform in the dark. Platform oscillation consisted of two half-sinusoidal cycles of the same amplitude (40°) but different duration, featuring a fast (FHC) and a slow half-cycle (SHC). Rotation consisted of four or 20 consecutive cycles to probe adaptation further with the longer duration protocol. Self-motion perception was estimated by subjects tracking with a pointer the remembered position of an earth-fixed visual target. VOR was measured by electro-oculography. The asymmetric stimulation pattern consistently induced a progressive increase of asymmetry in motion perception, whereby the gain of the tracking response gradually increased during FHCs and decreased during SHCs. The effect was observed already during the first few cycles and further increased during 20 cycles, leading to a totally distorted location of the initial straight-ahead. In contrast, after some initial interindividual variability, the gain of the slow phase VOR became symmetric, decreasing for FHCs and increasing for SHCs. These oppositely directed adaptive effects in motion perception and VOR persisted for nearly an hour. Control conditions using prolonged but symmetrical stimuli produced no adaptive effects on either motion perception or VOR. These findings show that prolonged asymmetric activation of the vestibular system leads to opposite patterns of adaptation of self-motion perception and VOR. The results provide strong evidence that semicircular canal inputs are processed centrally by independent mechanisms for perception of body motion and eye movement control. These divergent adaptation mechanisms enhance awareness of movement toward the faster body rotation, while improving the eye stabilizing properties of the VOR. PMID:23318876

  12. Criterion-free measurement of motion transparency perception at different speeds

    PubMed Central

    Rocchi, Francesca; Ledgeway, Timothy; Webb, Ben S.

    2018-01-01

    Transparency perception often occurs when objects within the visual scene partially occlude each other or move at the same time, at different velocities, across the same spatial region. Although transparent motion perception has been extensively studied, we still do not understand how the distribution of velocities within a visual scene contributes to transparent perception. Here we use a novel psychophysical procedure to characterize the distribution of velocities in a scene that give rise to transparent motion perception. To prevent participants from adopting a subjective decision criterion when discriminating transparent motion, we used an “odd-one-out,” three-alternative forced-choice procedure. Two intervals contained the standard—a random-dot kinematogram with dot speeds or directions sampled from a uniform distribution. The other interval contained the comparison—speeds or directions sampled from a distribution with the same range as the standard, but with a notch of different widths removed. Our results suggest that transparent motion perception is driven primarily by relatively slow speeds, and does not emerge when only very fast speeds are present within a visual scene. Transparent perception of moving surfaces is modulated by stimulus-based characteristics, such as the separation between the means of the overlapping distributions or the range of speeds presented within an image. Our work illustrates the utility of using objective, forced-choice methods to reveal the mechanisms underlying motion transparency perception. PMID:29614154
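    The comparison interval draws speeds from the same range as the standard but with a notch removed; a minimal sketch of that sampling step follows. The speed range, dot count, and placement of the notch at the centre of the range are assumptions for illustration.

      import numpy as np

      def sample_speeds_with_notch(n_dots=100, lo=1.0, hi=9.0, notch_width=2.0, rng=None):
          # Sample dot speeds uniformly from [lo, hi] but exclude a central notch
          # of width `notch_width`, yielding two separated speed bands, the kind
          # of bimodal distribution that can evoke motion transparency.
          rng = np.random.default_rng(rng)
          centre = 0.5 * (lo + hi)
          gap_lo, gap_hi = centre - notch_width / 2, centre + notch_width / 2
          speeds = np.empty(n_dots)
          for i in range(n_dots):
              s = rng.uniform(lo, hi)
              while gap_lo < s < gap_hi:        # resample anything inside the notch
                  s = rng.uniform(lo, hi)
              speeds[i] = s
          return speeds

      standard = np.random.default_rng(0).uniform(1.0, 9.0, 100)   # no notch
      comparison = sample_speeds_with_notch(notch_width=3.0, rng=1)
      print(comparison.min(), comparison.max())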

  13. Adapting Social Neuroscience Measures for Schizophrenia Clinical Trials, Part 3: Fathoming External Validity

    PubMed Central

    Olbert, Charles M.

    2013-01-01

    It is unknown whether measures adapted from social neuroscience linked to specific neural systems will demonstrate relationships to external variables. Four paradigms adapted from social neuroscience were administered to 173 clinically stable outpatients with schizophrenia to determine their relationships to functionally meaningful variables and to investigate their incremental validity beyond standard measures of social and nonsocial cognition. The 4 paradigms included 2 that assess perception of nonverbal social and action cues (basic biological motion and emotion in biological motion) and 2 that involve higher level inferences about self and others’ mental states (self-referential memory and empathic accuracy). Overall, social neuroscience paradigms showed significant relationships to functional capacity but weak relationships to community functioning; the paradigms also showed weak correlations to clinical symptoms. Evidence for incremental validity beyond standard measures of social and nonsocial cognition was mixed with additional predictive power shown for functional capacity but not community functioning. Of the newly adapted paradigms, the empathic accuracy task had the broadest external validity. These results underscore the difficulty of translating developments from neuroscience into clinically useful tasks with functional significance. PMID:24072806

  14. Adapting social neuroscience measures for schizophrenia clinical trials, part 3: fathoming external validity.

    PubMed

    Olbert, Charles M; Penn, David L; Kern, Robert S; Lee, Junghee; Horan, William P; Reise, Steven P; Ochsner, Kevin N; Marder, Stephen R; Green, Michael F

    2013-11-01

    It is unknown whether measures adapted from social neuroscience linked to specific neural systems will demonstrate relationships to external variables. Four paradigms adapted from social neuroscience were administered to 173 clinically stable outpatients with schizophrenia to determine their relationships to functionally meaningful variables and to investigate their incremental validity beyond standard measures of social and nonsocial cognition. The 4 paradigms included 2 that assess perception of nonverbal social and action cues (basic biological motion and emotion in biological motion) and 2 that involve higher level inferences about self and others' mental states (self-referential memory and empathic accuracy). Overall, social neuroscience paradigms showed significant relationships to functional capacity but weak relationships to community functioning; the paradigms also showed weak correlations to clinical symptoms. Evidence for incremental validity beyond standard measures of social and nonsocial cognition was mixed with additional predictive power shown for functional capacity but not community functioning. Of the newly adapted paradigms, the empathic accuracy task had the broadest external validity. These results underscore the difficulty of translating developments from neuroscience into clinically useful tasks with functional significance.

  15. A slowly moving foreground can capture an observer's self-motion--a report of a new motion illusion: inverted vection.

    PubMed

    Nakamura, S; Shimojo, S

    2000-01-01

    We investigated interactions between foreground and background stimuli during visually induced perception of self-motion (vection) by using a stimulus composed of orthogonally moving random-dot patterns. The results indicated that, when the foreground moves with a slower speed, a self-motion sensation with a component in the same direction as the foreground is induced. We named this novel component of self-motion perception 'inverted vection'. The robustness of inverted vection was confirmed using various measures of self-motion sensation and under different stimulus conditions. The mechanism underlying inverted vection is discussed with regard to potentially relevant factors, such as relative motion between the foreground and background, and the interaction between the mis-registration of eye-movement information and self-motion perception.

  16. Perception of Visual Speed While Moving

    ERIC Educational Resources Information Center

    Durgin, Frank H.; Gigone, Krista; Scott, Rebecca

    2005-01-01

    During self-motion, the world normally appears stationary. In part, this may be due to reductions in visual motion signals during self-motion. In 8 experiments, the authors used magnitude estimation to characterize changes in visual speed perception as a result of biomechanical self-motion alone (treadmill walking), physical translation alone…

  17. Similar effects of feature-based attention on motion perception and pursuit eye movements at different levels of awareness

    PubMed Central

    Spering, Miriam; Carrasco, Marisa

    2012-01-01

    Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth pursuit eye movements in response to moving dichoptic plaids–stimuli composed of two orthogonally-drifting gratings, presented separately to each eye–in human observers. Monocular adaptation to one grating prior to the presentation of both gratings renders the adapted grating perceptually weaker than the unadapted grating and decreases the level of awareness. Feature-based attention was directed to either the adapted or the unadapted grating’s motion direction or to both (neutral condition). We show that observers were better in detecting a speed change in the attended than the unattended motion direction, indicating that they had successfully attended to one grating. Speed change detection was also better when the change occurred in the unadapted than the adapted grating, indicating that the adapted grating was perceptually weaker. In neutral conditions, perception and pursuit in response to plaid motion were dissociated: While perception followed one grating’s motion direction almost exclusively (component motion), the eyes tracked the average of both gratings (pattern motion). In attention conditions, perception and pursuit were shifted towards the attended component. These results suggest that attention affects perception and pursuit similarly even though only the former reflects awareness. The eyes can track an attended feature even if observers do not perceive it. PMID:22649238

  18. Similar effects of feature-based attention on motion perception and pursuit eye movements at different levels of awareness.

    PubMed

    Spering, Miriam; Carrasco, Marisa

    2012-05-30

    Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth-pursuit eye movements in response to moving dichoptic plaids--stimuli composed of two orthogonally drifting gratings, presented separately to each eye--in human observers. Monocular adaptation to one grating before the presentation of both gratings renders the adapted grating perceptually weaker than the unadapted grating and decreases the level of awareness. Feature-based attention was directed to either the adapted or the unadapted grating's motion direction or to both (neutral condition). We show that observers were better at detecting a speed change in the attended than the unattended motion direction, indicating that they had successfully attended to one grating. Speed change detection was also better when the change occurred in the unadapted than the adapted grating, indicating that the adapted grating was perceptually weaker. In neutral conditions, perception and pursuit in response to plaid motion were dissociated: While perception followed one grating's motion direction almost exclusively (component motion), the eyes tracked the average of both gratings (pattern motion). In attention conditions, perception and pursuit were shifted toward the attended component. These results suggest that attention affects perception and pursuit similarly even though only the former reflects awareness. The eyes can track an attended feature even if observers do not perceive it.

  19. Perceptual training yields rapid improvements in visually impaired youth

    PubMed Central

    Nyquist, Jeffrey B.; Lappin, Joseph S.; Zhang, Ruyuan; Tadin, Duje

    2016-01-01

    Visual function demands coordinated responses to information over a wide field of view, involving both central and peripheral vision. Visually impaired individuals often seem to underutilize peripheral vision, even in absence of obvious peripheral deficits. Motivated by perceptual training studies with typically sighted adults, we examined the effectiveness of perceptual training in improving peripheral perception of visually impaired youth. Here, we evaluated the effectiveness of three training regimens: (1) an action video game, (2) a psychophysical task that combined attentional tracking with a spatially and temporally unpredictable motion discrimination task, and (3) a control video game. Training with both the action video game and modified attentional tracking yielded improvements in visual performance. Training effects were generally larger in the far periphery and appear to be stable 12 months after training. These results indicate that peripheral perception might be under-utilized by visually impaired youth and that this underutilization can be improved with only ~8 hours of perceptual training. Moreover, the similarity of improvements following attentional tracking and action video-game training suggest that well-documented effects of action video-game training might be due to the sustained deployment of attention to multiple dynamic targets while concurrently requiring rapid attending and perception of unpredictable events. PMID:27901026

  20. Seeing the world topsy-turvy: The primary role of kinematics in biological motion inversion effects

    PubMed Central

    Fitzgerald, Sue-Anne; Brooks, Anna; van der Zwan, Rick; Blair, Duncan

    2014-01-01

    Physical inversion of whole or partial human body representations typically has catastrophic consequences on the observer's ability to perform visual processing tasks. Explanations usually focus on the effects of inversion on the visual system's ability to exploit configural or structural relationships, but more recently have also implicated motion or kinematic cue processing. Here, we systematically tested the role of both on perceptions of sex from upright and inverted point-light walkers. Our data suggest that inversion results in systematic degradations of the processing of kinematic cues. Specifically and intriguingly, they reveal sex-based kinematic differences: Kinematics characteristic of females generally are resistant to inversion effects, while those of males drive systematic sex misperceptions. Implications of the findings are discussed. PMID:25469217

  1. Reduced orienting to audiovisual synchrony in infancy predicts autism diagnosis at 3 years of age.

    PubMed

    Falck-Ytter, Terje; Nyström, Pär; Gredebäck, Gustaf; Gliga, Teodora; Bölte, Sven

    2018-01-23

    Effective multisensory processing develops in infancy and is thought to be important for the perception of unified and multimodal objects and events. Previous research suggests impaired multisensory processing in autism, but its role in the early development of the disorder is yet uncertain. Here, using a prospective longitudinal design, we tested whether reduced visual attention to audiovisual synchrony is an infant marker of later-emerging autism diagnosis. We studied 10-month-old siblings of children with autism using an eye tracking task previously used in studies of preschoolers. The task assessed the effect of manipulations of audiovisual synchrony on viewing patterns while the infants were observing point light displays of biological motion. We analyzed the gaze data recorded in infancy according to diagnostic status at 3 years of age (DSM-5). Ten-month-old infants who later received an autism diagnosis did not orient to audiovisual synchrony expressed within biological motion. In contrast, both infants at low-risk and high-risk siblings without autism at follow-up had a strong preference for this type of information. No group differences were observed in terms of orienting to upright biological motion. This study suggests that reduced orienting to audiovisual synchrony within biological motion is an early sign of autism. The findings support the view that poor multisensory processing could be an important antecedent marker of this neurodevelopmental condition. © 2018 Association for Child and Adolescent Mental Health.

  2. Measurement of angular velocity in the perception of rotation.

    PubMed

    Barraza, José F; Grzywacz, Norberto M

    2002-09-01

    Humans are sensitive to the parameters of translational motion, namely, direction and speed. At the same time, people have special mechanisms to deal with more complex motions, such as rotations and expansions. One wonders whether people may also be sensitive to the parameters of these complex motions. Here, we report on a series of experiments that explore whether human subjects can use angular velocity to evaluate how fast a rotational motion is. In four experiments, subjects were required to perform a speed-of-rotation discrimination task by comparing two annuli of different radii in a temporal 2AFC paradigm. Results showed that humans could rely on a sensitive measurement of angular velocity to perform this discrimination task. This was especially true when the quality of the rotational signal was high (given by the number of dots composing the annulus). When the signal quality decreased, a bias towards linear velocity of 5-80% appeared, suggesting the existence of separate mechanisms for angular and linear velocity. This bias was independent of the reference radius. Finally, we asked whether the measurement of angular velocity required a rigid rotation, that is, whether the visual system makes only one global estimate of angular velocity. For this purpose, a random-dot disk was built such that all the dots rotated with the same tangential speed, irrespective of radius. Results showed that subjects do not estimate a unique global angular velocity, but that they perceive a non-rigid disk, with angular velocity falling in inverse proportion to the radius.
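    The relation at issue is omega = v / r: if every dot keeps the same tangential speed v, the implied angular velocity falls off with radius, which is consistent with the non-rigid percept reported. A quick numerical illustration follows; the speed and radii are chosen arbitrarily.

      # Angular velocity implied by a fixed tangential speed at different radii.
      # For a rigid rotation, omega would be constant across radii; with constant
      # tangential speed it falls off as 1/r, matching the non-rigid percept.
      tangential_speed = 4.0                       # deg of visual angle per second (assumed)
      for radius in (1.0, 2.0, 4.0):               # deg of visual angle (assumed)
          omega = tangential_speed / radius        # omega = v / r
          print(f"radius {radius:.1f} deg -> angular velocity {omega:.2f} rad/s")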

  3. Differential responses in dorsal visual cortex to motion and disparity depth cues

    PubMed Central

    Arnoldussen, David M.; Goossens, Jeroen; van den Berg, Albert V.

    2013-01-01

    We investigated how interactions between monocular motion parallax and binocular cues to depth vary in human motion areas for wide-field visual motion stimuli (110 × 100°). We used fMRI with an extensive 2 × 3 × 2 factorial blocked design in which we combined two types of self-motion (translational motion and translational + rotational motion), with three categories of motion inflicted by the degree of noise (self-motion, distorted self-motion, and multiple object-motion), and two different view modes of the flow patterns (stereo and synoptic viewing). Interactions between disparity and motion category revealed distinct contributions to self- and object-motion processing in 3D. For cortical areas V6 and CSv, but not the anterior part of MT+ with bilateral visual responsiveness (MT+/b), we found a disparity-dependent effect of rotational flow and noise: When self-motion perception was degraded by adding rotational flow and moderate levels of noise, the BOLD responses were reduced compared with translational self-motion alone, but this reduction was cancelled by adding stereo information which also rescued the subject's self-motion percept. At high noise levels, when the self-motion percept gave way to a swarm of moving objects, the BOLD signal strongly increased compared to self-motion in areas MT+/b and V6, but only for stereo in the latter. BOLD response did not increase for either view mode in CSv. These different response patterns indicate different contributions of areas V6, MT+/b, and CSv to the processing of self-motion perception and the processing of multiple independent motions. PMID:24339808

  4. Haptic, Virtual Interaction and Motor Imagery: Entertainment Tools and Psychophysiological Testing

    PubMed Central

    Invitto, Sara; Faggiano, Chiara; Sammarco, Silvia; De Luca, Valerio; De Paolis, Lucio T.

    2016-01-01

    In this work, the perception of affordances was analysed in terms of cognitive neuroscience during an interactive experience in a virtual reality environment. In particular, we chose a virtual reality scenario based on the Leap Motion controller: this sensor device captures the movements of the user’s hand and fingers, which are reproduced on a computer screen by the proper software applications. For our experiment, we employed a sample of 10 subjects matched by age and sex and chosen among university students. The subjects took part in motor imagery training and in an immersive affordance condition (virtual training with Leap Motion and haptic training with real objects). After each training session, the subjects performed a recognition task in order to investigate event-related potential (ERP) components. The results revealed significant differences in the attentional components during the Leap Motion training. During the Leap Motion session, latencies increased in the occipital lobes, which subserve visual sensory processing; in contrast, latencies decreased in the frontal lobe, which is mainly activated for attention and action planning. PMID:26999151

  5. Haptic, Virtual Interaction and Motor Imagery: Entertainment Tools and Psychophysiological Testing.

    PubMed

    Invitto, Sara; Faggiano, Chiara; Sammarco, Silvia; De Luca, Valerio; De Paolis, Lucio T

    2016-03-18

    In this work, the perception of affordances was analysed in terms of cognitive neuroscience during an interactive experience in a virtual reality environment. In particular, we chose a virtual reality scenario based on the Leap Motion controller: this sensor device captures the movements of the user's hand and fingers, which are reproduced on a computer screen by the proper software applications. For our experiment, we employed a sample of 10 subjects matched by age and sex and chosen among university students. The subjects took part in motor imagery training and in an immersive affordance condition (virtual training with Leap Motion and haptic training with real objects). After each training session, the subjects performed a recognition task in order to investigate event-related potential (ERP) components. The results revealed significant differences in the attentional components during the Leap Motion training. During the Leap Motion session, latencies increased in the occipital lobes, which subserve visual sensory processing; in contrast, latencies decreased in the frontal lobe, which is mainly activated for attention and action planning.

  6. Constrained motion model of mobile robots and its applications.

    PubMed

    Zhang, Fei; Xi, Yugeng; Lin, Zongli; Chen, Weidong

    2009-06-01

    Target detecting and dynamic coverage are fundamental tasks in mobile robotics and represent two important features of mobile robots: mobility and perceptivity. This paper establishes the constrained motion model and sensor model of a mobile robot to represent these two features and defines the k-step reachable region to describe the states that the robot may reach. We show that the calculation of the k-step reachable region can be reduced from that of 2^k reachable regions with fixed motion styles to k + 1 such regions and provide an algorithm for its calculation. Based on the constrained motion model and the k-step reachable region, the problems associated with target detecting and dynamic coverage are formulated and solved. For target detecting, the k-step detectable region is used to describe the area that the robot may detect, and an algorithm for detecting a target and planning the optimal path is proposed. For dynamic coverage, the k-step detected region is used to represent the area that the robot has detected during its motion, and the dynamic-coverage strategy and algorithm are proposed. Simulation results demonstrate the efficiency of the coverage algorithm in both convex and concave environments.
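
    As a concrete, purely illustrative aid to the reachable-region idea, the sketch below enumerates a k-step reachable region for a toy grid robot whose constrained motion model allows only "move forward one cell" or "turn in place by 90 degrees". The grid world, motion set, and breadth-first enumeration are assumptions for illustration only, not the authors' algorithm or robot model.

        # Hypothetical illustration: enumerate the k-step reachable region of a
        # grid robot whose constrained motion model allows only "forward one cell"
        # or "turn in place by 90 degrees" (no backward motion).
        HEADINGS = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # E, N, W, S

        def step(state):
            """Yield every state reachable from `state` in one constrained motion."""
            x, y, h = state
            dx, dy = HEADINGS[h]
            yield (x + dx, y + dy, h)   # move forward
            yield (x, y, (h + 1) % 4)   # turn left
            yield (x, y, (h - 1) % 4)   # turn right

        def k_step_reachable_region(start, k):
            """Set of states reachable in at most k steps (breadth-first expansion)."""
            frontier, region = {start}, {start}
            for _ in range(k):
                frontier = {s for state in frontier for s in step(state)}
                region |= frontier
            return region

        if __name__ == "__main__":
            region = k_step_reachable_region((0, 0, 0), k=3)
            print(len({(x, y) for x, y, _ in region}), "distinct cells within 3 steps")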

  7. Adaptation aftereffects in the perception of gender from biological motion.

    PubMed

    Troje, Nikolaus F; Sadr, Javid; Geyer, Henning; Nakayama, Ken

    2006-07-28

    Human visual perception is highly adaptive. While this has been known and studied for a long time in domains such as color vision, motion perception, or the processing of spatial frequency, a number of more recent studies have shown that adaptation and adaptation aftereffects also occur in high-level visual domains like shape perception and face recognition. Here, we present data that demonstrate a pronounced aftereffect in response to adaptation to the perceived gender of biological motion point-light walkers. A walker that is perceived to be ambiguous in gender under neutral adaptation appears to be male after adaptation with an exaggerated female walker and female after adaptation with an exaggerated male walker. We discuss this adaptation aftereffect as a tool to characterize and probe the mechanisms underlying biological motion perception.

  8. Coherence Motion Perception in Developmental Dyslexia: A Meta-Analysis of Behavioral Studies

    ERIC Educational Resources Information Center

    Benassi, Mariagrazia; Simonelli, Letizia; Giovagnoli, Sara; Bolzani, Roberto

    2010-01-01

    The magnitude of the association between developmental dyslexia (DD) and motion sensitivity is evaluated in 35 studies, which investigated coherence motion perception in DD. A first analysis is conducted on the differences between DD groups and age-matched control (C) groups. In a second analysis, the relationship between motion coherence…

  9. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion

    PubMed Central

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828

  10. An Exploration of the Perception of Dance and Its Relation to Biomechanical Motion: A Systematic Review and Narrative Synthesis.

    PubMed

    Chang, Michael; Halaki, Mark; Adams, Roger; Cobley, Stephen; Lee, Kwee-Yum; O'Dwyer, Nicholas

    2016-01-01

    In dance, the goals of actions are not always clearly defined. Investigations into the perceived quality of dance actions and their relation to biomechanical motion should give insight into the performance of dance actions and their goals. The purpose of this review was to explore and document current literature concerning dance perception and its relation to the biomechanics of motion. Seven studies were included in the review. The study results showed systematic differences between expert, non-expert, and novice dancers in biomechanical and perceptual measures, both of which also varied according to the actions expressed in dance. Biomechanical and perceptual variables were found to be correlated in all the studies in the review. Significant relations were observed between kinematic variables such as amplitude, speed, and variability of movement, and perceptual measures of beauty and performance quality. However, in general, there were no clear trends in these relations. Instead, the evidence suggests that perceptual ratings of dance may be specific to both the task (the skill of the particular action) and the context (the music and staging). The results also suggest that the human perceptual system is sensitive to skillful movements and neuromuscular coordination. Since the value perceived by audiences appears to be related to dance action goals and the coordination of dance elements, practitioners could place a priority on development and execution of those factors.

  11. New human-centered linear and nonlinear motion cueing algorithms for control of simulator motion systems

    NASA Astrophysics Data System (ADS)

    Telban, Robert J.

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control but can be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. As a result of unsatisfactory sensation, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms in simulating aircraft maneuvers was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX) and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows that pilot-induced oscillations on a straight-in approach are less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.
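
    The "matrix Riccati equation" mentioned above is, in its standard continuous-time linear-quadratic form, the backward equation for the cost-to-go matrix P(t) that determines a time-varying state-feedback law. The abstract does not give the specific system and weighting matrices, so A, B, Q, and R below are generic placeholders rather than the values used in the cueing algorithm:

        \[
        -\dot{P}(t) = A^{\mathsf T} P(t) + P(t) A - P(t) B R^{-1} B^{\mathsf T} P(t) + Q,
        \qquad
        u(t) = -R^{-1} B^{\mathsf T} P(t)\, x(t).
        \]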

  12. Neural representations of kinematic laws of motion: evidence for action-perception coupling.

    PubMed

    Dayan, Eran; Casile, Antonino; Levit-Binnun, Nava; Giese, Martin A; Hendler, Talma; Flash, Tamar

    2007-12-18

    Behavioral and modeling studies have established that curved and drawing human hand movements obey the 2/3 power law, which dictates a strong coupling between movement curvature and velocity. Human motion perception seems to reflect this constraint. The functional MRI study reported here demonstrates that the brain's response to this law of motion is much stronger and more widespread than to other types of motion. Compliance with this law is reflected in the activation of a large network of brain areas subserving motor production, visual motion processing, and action observation functions. Hence, these results strongly support the notion of similar neural coding for motion perception and production. These findings suggest that cortical motion representations are optimally tuned to the kinematic and geometrical invariants characterizing biological actions.
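
    For reference, the two-thirds power law relates angular velocity to curvature (equivalently, tangential velocity to radius of curvature), with K a movement-specific gain. This is the standard formulation of the law, not an equation taken from the study itself:

        \[
        \omega(t) = K\,\kappa(t)^{2/3}
        \quad\Longleftrightarrow\quad
        v(t) = K\,\kappa(t)^{-1/3} = K\,r(t)^{1/3},
        \]

    where \(\omega\) is angular velocity, \(v\) is tangential velocity, \(\kappa\) is curvature, and \(r = 1/\kappa\) is the radius of curvature.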

  13. Human Motion Perception and Smooth Eye Movements Show Similar Directional Biases for Elongated Apertures

    NASA Technical Reports Server (NTRS)

    Beutter, Brent R.; Stone, Leland S.

    1997-01-01

    Although numerous studies have examined the relationship between smooth-pursuit eye movements and motion perception, it remains unresolved whether a common motion-processing system subserves both perception and pursuit. To address this question, we simultaneously recorded perceptual direction judgments and the concomitant smooth eye movement response to a plaid stimulus that we have previously shown generates systematic perceptual errors. We measured the perceptual direction biases psychophysically and the smooth eye-movement direction biases using two methods (standard averaging and oculometric analysis). We found that the perceptual and oculomotor biases were nearly identical, suggesting that pursuit and perception share a critical motion processing stage, perhaps in area MT or MST of extrastriate visual cortex.

  14. Human motion perception and smooth eye movements show similar directional biases for elongated apertures

    NASA Technical Reports Server (NTRS)

    Beutter, B. R.; Stone, L. S.

    1998-01-01

    Although numerous studies have examined the relationship between smooth-pursuit eye movements and motion perception, it remains unresolved whether a common motion-processing system subserves both perception and pursuit. To address this question, we simultaneously recorded perceptual direction judgments and the concomitant smooth eye-movement response to a plaid stimulus that we have previously shown generates systematic perceptual errors. We measured the perceptual direction biases psychophysically and the smooth eye-movement direction biases using two methods (standard averaging and oculometric analysis). We found that the perceptual and oculomotor biases were nearly identical, suggesting that pursuit and perception share a critical motion processing stage, perhaps in area MT or MST of extrastriate visual cortex.

  15. Accuracy and Tuning of Flow Parsing for Visual Perception of Object Motion During Self-Motion

    PubMed Central

    Niehorster, Diederick C.

    2017-01-01

    How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., flow parsing gain) in various scenarios to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion. We found that flow parsing gains were below unity for all displays in all experiments and that increasing self-motion and object motion speed did not alter flow parsing gain. We conclude that visual information alone is not sufficient for the accurate perception of scene-relative motion during self-motion. Although flow parsing performs global subtraction, its accuracy also depends on local motion information in the retinal vicinity of the moving object. Furthermore, the flow parsing gain was constant across common self-motion or object motion speeds. These results can be used to inform and validate computational models of flow parsing. PMID:28567272
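
    Although the abstract does not state it formally, the flow parsing gain measured by retinal motion nulling can be summarized roughly as follows (a loose formalization for orientation, not the authors' model equations): if r is the retinal motion of the object and s is the retinal motion component attributable to self-motion, the recovered scene-relative motion is

        \[
        \mathbf{p} \;=\; \mathbf{r} \;-\; g\,\mathbf{s},
        \]

    where g is the flow parsing gain; g = 1 corresponds to complete subtraction of the self-motion component, and the gains below unity reported here mean that the subtraction is only partial.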

  16. Asymptotically Optimal Motion Planning for Learned Tasks Using Time-Dependent Cost Maps

    PubMed Central

    Bowen, Chris; Ye, Gu; Alterovitz, Ron

    2015-01-01

    In unstructured environments in people’s homes and workspaces, robots executing a task may need to avoid obstacles while satisfying task motion constraints, e.g., keeping a plate of food level to avoid spills or properly orienting a finger to push a button. We introduce a sampling-based method for computing motion plans that are collision-free and minimize a cost metric that encodes task motion constraints. Our time-dependent cost metric, learned from a set of demonstrations, encodes features of a task’s motion that are consistent across the demonstrations and, hence, are likely required to successfully execute the task. Our sampling-based motion planner uses the learned cost metric to compute plans that simultaneously avoid obstacles and satisfy task constraints. The motion planner is asymptotically optimal and minimizes the Mahalanobis distance between the planned trajectory and the distribution of demonstrations in a feature space parameterized by the locations of task-relevant objects. The motion planner also leverages the distribution of the demonstrations to significantly reduce plan computation time. We demonstrate the method’s effectiveness and speed using a small humanoid robot performing tasks requiring both obstacle avoidance and satisfaction of learned task constraints. Note to Practitioners: Motivated by the desire to enable robots to autonomously operate in cluttered home and workplace environments, this paper presents an approach for intuitively training a robot in a manner that enables it to repeat the task in novel scenarios and in the presence of unforeseen obstacles in the environment. Based on user-provided demonstrations of the task, our method learns features of the task that are consistent across the demonstrations and that we expect should be repeated by the robot when performing the task. We next present an efficient algorithm for planning robot motions to perform the task based on the learned features while avoiding obstacles. We demonstrate the effectiveness of our motion planner for scenarios requiring transferring a powder and pushing a button in environments with obstacles, and we plan to extend our results to more complex tasks in the future. PMID:26279642
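
    As a rough illustration of the kind of cost being minimized, the sketch below evaluates a summed Mahalanobis distance of trajectory features against per-time demonstration statistics. The feature dimensions, means, and covariances are placeholders, not the learned, time-dependent cost of the paper.

        # Hypothetical sketch: Mahalanobis-distance cost of a trajectory against a
        # demonstration distribution in feature space (placeholder statistics).
        import numpy as np

        def mahalanobis_sq(x, mean, cov):
            """Squared Mahalanobis distance (x - mean)^T cov^-1 (x - mean)."""
            d = x - mean
            return float(d @ np.linalg.solve(cov, d))

        def trajectory_cost(features, means, covs):
            """Sum the per-waypoint Mahalanobis cost along a trajectory.

            features: (T, D) task features at each waypoint
            means:    (T, D) demonstration feature means per time step
            covs:     (T, D, D) demonstration feature covariances per time step
            """
            return sum(mahalanobis_sq(f, m, c) for f, m, c in zip(features, means, covs))

        if __name__ == "__main__":
            T, D = 5, 3
            rng = np.random.default_rng(0)
            feats = rng.normal(size=(T, D))
            cost = trajectory_cost(feats, np.zeros((T, D)), np.stack([np.eye(D)] * T))
            print("trajectory cost:", cost)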

  17. Visual preference for isochronic movement does not necessarily emerge from movement kinematics: a challenge for the motor simulation theory.

    PubMed

    Bidet-Ildei, Christel; Méary, David; Orliaguet, Jean-Pierre

    2008-01-17

    The aim of this experiment was to show that the visual preference for isochronic movements does not necessarily imply a motor simulation and, therefore, does not depend on the kinematics of the perceived movement. To demonstrate this point, the participants' task was to adjust the velocity (the period) of a dot that depicted an elliptic motion with different perimeters (from 3 to 60 cm). The velocity profile of the movement conformed ("natural motions") or not ("unnatural motions") to the law of co-variation between velocity and curvature (the two-thirds power law), which is usually observed in the production of elliptic movements. For each condition, we evaluated the isochrony principle, i.e., the tendency to prefer constant durations of movement irrespective of changes in the trajectory perimeter. Our findings indicate that the isochrony principle was observed whatever the kinematics of the movement (natural or unnatural). Therefore, they suggest that the perceptual preference for isochronic movements does not systematically imply a motor simulation.
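
    A minimal sketch of how "natural" and "unnatural" velocity profiles for an elliptic dot trajectory can be generated is given below: the natural profile follows the two-thirds power law (speed proportional to curvature to the power -1/3), while the unnatural profile keeps speed constant. The ellipse axes and gain are placeholder values, not the stimulus parameters of the experiment.

        # Hypothetical sketch: "natural" (two-thirds power law) versus "unnatural"
        # (constant-speed) velocity profiles along an elliptic trajectory.
        import numpy as np

        def ellipse_curvature(a=0.06, b=0.03, n=2000):
            """Curvature of the ellipse x = a*cos(t), y = b*sin(t), sampled over one cycle."""
            t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
            return (a * b) / (a**2 * np.sin(t)**2 + b**2 * np.cos(t)**2) ** 1.5

        def speed_profile(kappa, natural=True, gain=1.0):
            """Tangential speed: v = K * kappa**(-1/3) if natural, constant otherwise."""
            return gain * kappa ** (-1.0 / 3.0) if natural else np.full_like(kappa, gain)

        if __name__ == "__main__":
            kappa = ellipse_curvature()
            v_nat, v_unn = speed_profile(kappa, True), speed_profile(kappa, False)
            print("natural speed varies:", not np.allclose(v_nat, v_nat[0]))
            print("unnatural speed is constant:", np.allclose(v_unn, v_unn[0]))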

  18. Visual-vestibular integration as a function of adaptation to space flight and return to Earth

    NASA Technical Reports Server (NTRS)

    Reschke, Millard R.; Bloomberg, Jacob J.; Harm, Deborah L.; Huebner, William P.; Krnavek, Jody M.; Paloski, William H.; Berthoz, Alan

    1999-01-01

    Research on perception and control of self-orientation and self-motion addresses interactions between action and perception. Self-orientation and self-motion, and the perception of that orientation and motion, are required for and modified by goal-directed action. Detailed Supplementary Objective (DSO) 604 Operational Investigation-3 (OI-3) was designed to investigate the integrated coordination of head and eye movements within a structured environment where perception could modify responses and where responses could be compensatory for perception. A full understanding of this coordination required definition of spatial orientation models for the microgravity environment encountered during spaceflight.

  19. Integrative cortical dysfunction and pervasive motion perception deficit in fragile X syndrome.

    PubMed

    Kogan, C S; Bertone, A; Cornish, K; Boutet, I; Der Kaloustian, V M; Andermann, E; Faubert, J; Chaudhuri, A

    2004-11-09

    Fragile X syndrome (FXS) is associated with neurologic deficits recently attributed to the magnocellular pathway of the lateral geniculate nucleus. We tested the hypotheses that FXS individuals (1) have a pervasive visual motion perception impairment affecting neocortical circuits in the parietal lobe and (2) have deficits in integrative neocortical mechanisms necessary for perception of complex stimuli. Psychophysical tests of visual motion and form perception defined by either first-order (luminance) or second-order (texture) attributes were used to probe early and later occipito-temporal and occipito-parietal functioning. When compared to developmental- and age-matched controls, FXS individuals displayed severe impairments in first- and second-order motion perception. This deficit was accompanied by near-normal perception for first-order form stimuli but not second-order form stimuli. Impaired visual motion processing for first- and second-order stimuli suggests that both early- and later-level neurologic functions of the parietal lobe are affected in FXS. Furthermore, this deficit likely stems from abnormal input from the magnocellular compartment of the lateral geniculate nucleus. Impaired visual form and motion processing for complex visual stimuli with normal processing for simple (i.e., first-order) form stimuli suggests that FXS individuals have normal early form processing accompanied by a generalized impairment in neurologic mechanisms necessary for integrating all early visual input.

  20. Perceptual and cognitive effects of antipsychotics in first-episode schizophrenia: the potential impact of GABA concentration in the visual cortex.

    PubMed

    Kelemen, Oguz; Kiss, Imre; Benedek, György; Kéri, Szabolcs

    2013-12-02

    Schizophrenia is characterized by anomalous perceptual experiences (e.g., sensory irritation, inundation, and flooding) and specific alterations in visual perception. We aimed to investigate the effects of short-term antipsychotic medication on these perceptual alterations. We assessed 28 drug-naïve first episode patients with schizophrenia and 20 matched healthy controls at baseline and follow-up 8 weeks later. Contrast sensitivity was measured with steady- and pulsed-pedestal tests. Participants also received a motion coherence task, the Structured Interview for Assessing Perceptual Anomalies (SIAPA), and the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS). Proton magnetic resonance spectroscopy was used to measure gamma-aminobutyric acid (GABA) levels in the occipital cortex (GABA/total creatine [Cr] ratio). Results revealed that, comparing baseline and follow-up values, patients with schizophrenia exhibited a marked sensitivity reduction on the steady-pedestal test at low spatial frequency. Anomalous perceptual experiences were also significantly ameliorated. Antipsychotic medications had no effect on motion perception. RBANS scores showed mild improvements. At baseline, but not at follow-up, patients with schizophrenia outperformed controls on the steady-pedestal test at low spatial frequency. The dysfunction of motion perception (higher coherence threshold in patients relative to controls) was similar at both assessments. There were reduced GABA levels in schizophrenia at both assessments, which were not related to perceptual functions. These results suggest that antipsychotics dominantly affect visual contrast sensitivity and anomalous perceptual experiences. The prominent dampening effect on low spatial frequency in the steady-pedestal test might indicate the normalization of putatively overactive magnocellular retino-geniculo-cortical pathways. © 2013.

  1. Default perception of high-speed motion

    PubMed Central

    Wexler, Mark; Glennerster, Andrew; Cavanagh, Patrick; Ito, Hiroyuki; Seno, Takeharu

    2013-01-01

    When human observers are exposed to even slight motion signals followed by brief visual transients—stimuli containing no detectable coherent motion signals—they perceive large and salient illusory jumps. This visually striking effect, which we call “high phi,” challenges well-entrenched assumptions about the perception of motion, namely the minimal-motion principle and the breakdown of coherent motion perception with steps above an upper limit called dmax. Our experiments with transients, such as texture randomization or contrast reversal, show that the magnitude of the jump depends on spatial frequency and transient duration—but not on the speed of the inducing motion signals—and the direction of the jump depends on the duration of the inducer. Jump magnitude is robust across jump directions and different types of transient. In addition, when a texture is actually displaced by a large step beyond the upper step size limit of dmax, a breakdown of coherent motion perception is expected; however, in the presence of an inducer, observers again perceive coherent displacements at or just above dmax. In summary, across a large variety of stimuli, we find that when incoherent motion noise is preceded by a small bias, instead of perceiving little or no motion—as suggested by the minimal-motion principle—observers perceive jumps whose amplitude closely follows their own dmax limits. PMID:23572578

  2. A Pursuit Theory Account for the Perception of Common Motion in Motion Parallax.

    PubMed

    Ratzlaff, Michael; Nawrot, Mark

    2016-09-01

    The visual system uses an extraretinal pursuit eye movement signal to disambiguate the perception of depth from motion parallax. Visual motion in the same direction as the pursuit is perceived as nearer in depth, while visual motion in the opposite direction to the pursuit is perceived as farther in depth. This explanation of depth sign applies to either an allocentric frame of reference centered on the fixation point or an egocentric frame of reference centered on the observer. A related problem is that of depth order when two stimuli have a common direction of motion. The first psychophysical study determined whether perception of egocentric depth order is adequately explained by a model employing an allocentric framework, especially when the motion parallax stimuli have common rather than divergent motion. A second study determined whether a reversal in perceived depth order, produced by a reduction in pursuit velocity, is also explained by this model employing the allocentric framework. The results show that an allocentric model can explain both the egocentric perception of depth order with common motion and the perceptual depth order reversal created by a reduction in pursuit velocity. We conclude that an egocentric model is not the only explanation for perceived depth order in these common motion conditions. © The Author(s) 2016.

  3. Soccer athletes are superior to non-athletes at perceiving soccer-specific and non-sport specific human biological motion

    PubMed Central

    Romeas, Thomas; Faubert, Jocelyn

    2015-01-01

    Recent studies have shown that athletes’ domain-specific perceptual-cognitive expertise can transfer to everyday tasks. Here we assessed the perceptual-cognitive expertise of athletes and non-athletes using sport-specific and non-sport-specific biological motion perception (BMP) tasks. Using a virtual environment, university-level soccer players and non-athlete university students were asked to perceive the direction of a point-light walker and to predict the trajectory of a masked ball during a point-light soccer kick. Angles of presentation were varied for orientation (upright, inverted) and distance (2 m, 4 m, 16 m). Accuracy and reaction time were measured to assess observers’ performance. The results highlighted athletes’ superior ability compared to non-athletes to accurately predict the trajectory of a masked soccer ball presented at 2 m (reaction time), 4 m (accuracy and reaction time), and 16 m (accuracy) of distance. More interestingly, experts also displayed greater performance compared to non-athletes throughout the more fundamental and general point-light walker direction task presented at 2 m (reaction time), 4 m (accuracy and reaction time), and 16 m (reaction time) of distance. In addition, athletes showed a better performance throughout inverted conditions in the walker (reaction time) and soccer kick (accuracy and reaction time) tasks. This implies that during human BMP, athletes demonstrate an advantage for recognizing body kinematics that goes beyond sport-specific actions. PMID:26388828

  4. Spatial Disorientation in Gondola Centrifuges Predicted by the Form of Motion as a Whole in 3-D

    PubMed Central

    Holly, Jan E.; Harmon, Katharine J.

    2009-01-01

    INTRODUCTION: During a coordinated turn, subjects can misperceive tilts. Subjects accelerating in tilting-gondola centrifuges without external visual reference underestimate the roll angle, and underestimate more when backward-facing than when forward-facing. In addition, during centrifuge deceleration, the perception of pitch can include tumble while paradoxically maintaining a fixed perceived pitch angle. The goal of the present research was to test two competing hypotheses: (1) that components of motion are perceived relatively independently and then combined to form a three-dimensional perception, and (2) that perception is governed by familiarity of motions as a whole in three dimensions, with components depending more strongly on the overall shape of the motion. METHODS: Published experimental data were used from existing tilting-gondola centrifuge studies. The two hypotheses were implemented formally in computer models, and centrifuge acceleration and deceleration were simulated. RESULTS: The second, whole-motion-oriented, hypothesis better predicted subjects' perceptions, including the forward-backward asymmetry and the paradoxical tumble upon deceleration. The predominant stimulus at the beginning of the motion was important, as was the familiarity of centripetal acceleration. CONCLUSION: Three-dimensional perception is better predicted by taking into account familiarity with the form of three-dimensional motion. PMID:19198199
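
    The predominant stimulus during centrifugation is the tilted gravito-inertial acceleration (GIA). For a gondola at radius r rotating at angular velocity omega, standard rigid-body physics (not a result specific to this study) gives

        \[
        a_c = \omega^2 r, \qquad
        \varphi = \arctan\!\left(\frac{\omega^2 r}{g}\right), \qquad
        |\mathrm{GIA}| = \sqrt{g^2 + (\omega^2 r)^2},
        \]

    where a_c is the centripetal acceleration and phi is the angle by which the GIA tilts away from gravity; a free-swinging gondola physically rolls to this angle, and it is this roll that subjects underestimate.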

  5. Less head motion during MRI under task than resting-state conditions.

    PubMed

    Huijbers, Willem; Van Dijk, Koene R A; Boenniger, Meta M; Stirnberg, Rüdiger; Breteler, Monique M B

    2017-02-15

    Head motion reduces the quality of neuroimaging data. In three functional magnetic resonance imaging (fMRI) experiments we demonstrate that people make fewer head movements under task than resting-state conditions. In Experiment 1, we observed less head motion during a memory encoding task than during the resting-state condition. In Experiment 2, using publicly shared data from the UCLA Consortium for Neuropsychiatric Phenomics LA5c Study, we again found less head motion during several active task conditions than during a resting-state condition, although some task conditions also showed comparable motion. In the healthy controls, we found more head motion in men than in women and more motion with increasing age. When comparing clinical groups, we found that patients with a clinical diagnosis of bipolar disorder or schizophrenia move more than healthy controls or patients with ADHD. Both these experiments had a fixed acquisition order across participants, and we could not rule out that a first or last scan during a session might be particularly prone to more head motion. Therefore, we conducted Experiment 3, in which we collected several task and resting-state fMRI runs with the acquisition order counterbalanced. The results of Experiment 3 again show less head motion during several task conditions than during rest. Together these experiments demonstrate that small head motions occur during MRI even with careful instruction to remain still and fixation with foam pillows, but that head motion is lower when participants are engaged in a cognitive task. These findings may inform the choice of functional runs when studying difficult-to-scan populations, such as children or certain patient populations. Our findings also indicate that differences in head motion complicate direct comparisons of measures of functional neuronal networks between task and resting-state fMRI because of potential differences in data quality. In practice, a task to reduce head motion might be especially useful when acquiring structural MRI data such as T1/T2-weighted and diffusion MRI in research and clinical settings. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Relation of motion sickness susceptibility to vestibular and behavioral measures of orientation

    NASA Technical Reports Server (NTRS)

    Peterka, Robert J.

    1995-01-01

    The objective is to determine the relationship of motion sickness susceptibility to vestibulo-ocular reflexes (VOR), motion perception, and behavioral utilization of sensory orientation cues for the control of postural equilibrium. The work is focused on reflexes and motion perception associated with pitch and roll movements that stimulate the vertical semicircular canals and otolith organs of the inner ear. This work is relevant to the space motion sickness problem since 0-g-related sensory conflicts between vertical canal and otolith motion cues are a likely cause of space motion sickness.

  7. Pictorial communication in virtual and real environments

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R. (Editor)

    1991-01-01

    Papers about the communication between human users and machines in real and synthetic environments are presented. Individual topics addressed include: pictorial communication, distortions in memory for visual displays, cartography and map displays, efficiency of graphical perception, volumetric visualization of 3D data, spatial displays to increase pilot situational awareness, teleoperation of land vehicles, computer graphics system for visualizing spacecraft in orbit, visual display aid for orbital maneuvering, multiaxis control in telemanipulation and vehicle guidance, visual enhancements in pick-and-place tasks, target axis effects under transformed visual-motor mappings, adapting to variable prismatic displacement. Also discussed are: spatial vision within egocentric and exocentric frames of reference, sensory conflict in motion sickness, interactions of form and orientation, perception of geometrical structure from congruence, prediction of three-dimensionality across continuous surfaces, effects of viewpoint in the virtual space of pictures, visual slant underestimation, spatial constraints of stereopsis in video displays, stereoscopic stance perception, paradoxical monocular stereopsis and perspective vergence. (No individual items are abstracted in this volume)

  8. Preadapting to Weightlessness

    NASA Technical Reports Server (NTRS)

    Reschke, M. F.; Parker, D. E.; Arrott, A. P.

    1986-01-01

    Report discusses physiological and physical concepts of proposed training system to precondition astronauts to weightless environment. System prevents motion sickness, often experienced during early part of orbital flight. Also helps prevent seasickness and other forms of terrestrial motion sickness. Training affects subject's perception of inner-ear signals, visual signals, and kinesthetic motion perception. Changed perception resembles that of astronauts who spent many days in space and adapted to weightlessness.

  9. Orientation of selective effects of body tilt on visually induced perception of self-motion.

    PubMed

    Nakamura, S; Shimojo, S

    1998-10-01

    We examined the effect of body posture upon visually induced perception of self-motion (vection) with various angles of the observer's tilt. The experiment indicated that the tilted body of the observer could enhance the perceived strength of vertical vection, while there was no effect of body tilt on horizontal vection. This result suggests that there is an interaction between the effects of visual and vestibular information on perception of self-motion.

  10. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.

    PubMed

    Stone, Scott A; Tata, Matthew S

    2017-01-01

    Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting salient visual events and augmenting them into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion for a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurately encoding the direction of visual motion. Future successes are probable as neuromorphic devices are likely to become faster and smaller in the future, making this system much more feasible.

  11. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality

    PubMed Central

    Stone, Scott A.; Tata, Matthew S.

    2017-01-01

    Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting salient visual events and augmenting them into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion for a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurately encoding the direction of visual motion. Future successes are probable as neuromorphic devices are likely to become faster and smaller in the future, making this system much more feasible. PMID:28792518
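
    The event-generation principle of such a neuromorphic sensor can be approximated on ordinary image frames as a thresholded change in log brightness. The sketch below is only a frame-based emulation for illustration (the DAVIS 240B produces these events asynchronously in hardware), and the threshold value is a placeholder.

        # Hypothetical frame-based emulation of event generation: a pixel emits an
        # ON (+1) or OFF (-1) "event" when its log brightness changes by more than
        # a threshold between two frames.
        import numpy as np

        def brightness_events(prev_frame, frame, threshold=0.15, eps=1e-6):
            """Return a (+1 / -1 / 0) event map from two grayscale frames in [0, 1]."""
            d_log = np.log(frame + eps) - np.log(prev_frame + eps)
            events = np.zeros_like(d_log, dtype=np.int8)
            events[d_log > threshold] = 1     # brightness increased (ON event)
            events[d_log < -threshold] = -1   # brightness decreased (OFF event)
            return events

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            f0 = rng.random((4, 4)) * 0.5 + 0.25   # brightness in [0.25, 0.75]
            f1 = f0.copy()
            f1[1, 2] = min(1.0, f1[1, 2] * 2.0)    # local brightening -> ON event
            print(brightness_events(f0, f1))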

  12. Visual Benefits in Apparent Motion Displays: Automatically Driven Spatial and Temporal Anticipation Are Partially Dissociated

    PubMed Central

    Ahrens, Merle-Marie; Veniero, Domenica; Gross, Joachim; Harvey, Monika; Thut, Gregor

    2015-01-01

    Many behaviourally relevant sensory events such as motion stimuli and speech have an intrinsic spatio-temporal structure. This will engage intentional and most likely unintentional (automatic) prediction mechanisms enhancing the perception of upcoming stimuli in the event stream. Here we sought to probe the anticipatory processes that are automatically driven by rhythmic input streams in terms of their spatial and temporal components. To this end, we employed an apparent visual motion paradigm testing the effects of pre-target motion on lateralized visual target discrimination. The motion stimuli either moved towards or away from peripheral target positions (valid vs. invalid spatial motion cueing) at a rhythmic or arrhythmic pace (valid vs. invalid temporal motion cueing). Crucially, we emphasized automatic motion-induced anticipatory processes by rendering the motion stimuli non-predictive of upcoming target position (by design) and task-irrelevant (by instruction), and by creating instead endogenous (orthogonal) expectations using symbolic cueing. Our data revealed that the apparent motion cues automatically engaged both spatial and temporal anticipatory processes, but that these processes were dissociated. We further found evidence for lateralisation of anticipatory temporal but not spatial processes. This indicates that distinct mechanisms may drive automatic spatial and temporal extrapolation of upcoming events from rhythmic event streams. This contrasts with previous findings that instead suggest an interaction between spatial and temporal attention processes when endogenously driven. Our results further highlight the need for isolating intentional from unintentional processes for better understanding the various anticipatory mechanisms engaged in processing behaviourally relevant stimuli with predictable spatio-temporal structure such as motion and speech. PMID:26623650

  13. The fate of task-irrelevant visual motion: perceptual load versus feature-based attention.

    PubMed

    Taya, Shuichiro; Adams, Wendy J; Graf, Erich W; Lavie, Nilli

    2009-11-18

    We tested contrasting predictions derived from perceptual load theory and from recent feature-based selection accounts. Observers viewed moving, colored stimuli and performed low or high load tasks associated with one stimulus feature, either color or motion. The resultant motion aftereffect (MAE) was used to evaluate attentional allocation. We found that task-irrelevant visual features received less attention than co-localized task-relevant features of the same objects. Moreover, when color and motion features were co-localized yet perceived to belong to two distinct surfaces, feature-based selection was further increased at the expense of object-based co-selection. Load theory predicts that the MAE for task-irrelevant motion would be reduced with a higher load color task. However, this was not seen for co-localized features; perceptual load only modulated the MAE for task-irrelevant motion when this was spatially separated from the attended color location. Our results suggest that perceptual load effects are mediated by spatial selection and do not generalize to the feature domain. Feature-based selection operates to suppress processing of task-irrelevant, co-localized features, irrespective of perceptual load.

  14. Curvilinear approach to an intersection and visual detection of a collision.

    PubMed

    Berthelon, C; Mestre, D

    1993-09-01

    Visual motion perception plays a fundamental role in vehicle control. Recent studies have shown that the pattern of optical flow resulting from the observer's self-motion through a stable environment is used by the observer to accurately control his or her movements. However, little is known about the perception of another vehicle during self-motion--for instance, when a car driver approaches an intersection with traffic. In a series of experiments using visual simulations of car driving, we show that observers are able to detect the presence of a moving object during self-motion. However, the perception of the other car's trajectory appears to be strongly dependent on environmental factors, such as the presence of a road sign near the intersection or the shape of the road. These results suggest that local and global visual factors determine the perception of a car's trajectory during self-motion.

  15. Perceptual Training Strongly Improves Visual Motion Perception in Schizophrenia

    ERIC Educational Resources Information Center

    Norton, Daniel J.; McBain, Ryan K.; Ongur, Dost; Chen, Yue

    2011-01-01

    Schizophrenia patients exhibit perceptual and cognitive deficits, including in visual motion processing. Given that cognitive systems depend upon perceptual inputs, improving patients' perceptual abilities may be an effective means of cognitive intervention. In healthy people, motion perception can be enhanced through perceptual learning, but it…

  16. Accuracy of System Step Response Roll Magnitude Estimation from Central and Peripheral Visual Displays and Simulator Cockpit Motion

    NASA Technical Reports Server (NTRS)

    Hosman, R. J. A. W.; Vandervaart, J. C.

    1984-01-01

    An experiment to investigate visual roll attitude and roll rate perception is described. The experiment was also designed to assess the improvements of perception due to cockpit motion. After the onset of the motion, subjects were to make accurate and quick estimates of the final magnitude of the roll angle step response by pressing the appropriate button of a keyboard device. The differing time-histories of roll angle, roll rate and roll acceleration caused by a step response stimulate the different perception processes related to the central visual field, peripheral visual field and vestibular organs in different, yet exactly known ways. Experiments with either of the visual displays or cockpit motion and some combinations of these were run to assess the roles of the different perception processes. Results show that the differences in response time are much more pronounced than the differences in perception accuracy.

  17. The influence of ship motion on manual control skills

    NASA Technical Reports Server (NTRS)

    Mcleod, P.; Poulton, C.; Duross, H.; Lewis, W.

    1981-01-01

    The effects of ship motion on a range of typical manual control skills were examined on the Warren Spring ship motion simulator driven in heave, pitch, and roll by signals taken from the frigate HMS Avenger at 13 m/s (25 knots) into a force 4 wind. The motion produced a vertical r.m.s. acceleration of 0.024g, mostly between 0.1 and 0.3 Hz, with comparatively little pitch or roll. A task involving unsupported arm movements was seriously affected by the motion; a pursuit tracking task showed a reliable decrement although it was still performed reasonably well (pressure and free moving tracking controls were affected equally by the motion); a digit keying task requiring ballistic hand movements was unaffected. There was no evidence that these effects were caused by sea sickness. The differing response to motion of the different tasks, from virtual destruction to no effect, suggests that a major benefit could come from an attempt to design the man/control interface onboard ship around motion resistant tasks.

  18. Phase-linking and the perceived motion during off-vertical axis rotation.

    PubMed

    Holly, Jan E; Wood, Scott J; McCollum, Gin

    2010-01-01

    Human off-vertical axis rotation (OVAR) in the dark typically produces perceived motion about a cone, the amplitude of which changes as a function of frequency. This perception is commonly attributed to the fact that both the OVAR and the conical motion have a gravity vector that rotates about the subject. Little-known, however, is that this rotating-gravity explanation for perceived conical motion is inconsistent with basic observations about self-motion perception: (a) that the perceived vertical moves toward alignment with the gravito-inertial acceleration (GIA) and (b) that perceived translation arises from perceived linear acceleration, as derived from the portion of the GIA not associated with gravity. Mathematically proved in this article is the fact that during OVAR these properties imply mismatched phase of perceived tilt and translation, in contrast to the common perception of matched phases, which correspond to conical motion with pivot at the bottom. This result demonstrates that an additional perceptual rule is required to explain perception in OVAR. This study investigates, both analytically and computationally, the phase relationship between tilt and translation at different stimulus rates, slow (45 degrees/s) and fast (180 degrees/s), and the three-dimensional shape of predicted perceived motion, under different sets of hypotheses about self-motion perception. We propose that for human motion perception, there is a phase-linking of tilt and translation movements to construct a perception of one's overall motion path. Alternative hypotheses to achieve the phase match were tested with three-dimensional computational models, comparing the output with published experimental reports. The best fit with experimental data was the hypothesis that the phase of perceived translation was linked to perceived tilt, while the perceived tilt was determined by the GIA. This hypothesis successfully predicted the bottom-pivot cone commonly reported and a reduced sense of tilt during fast OVAR. Similar considerations apply to the hilltop illusion often reported during horizontal linear oscillation. Known response properties of central neurons are consistent with this ability to phase-link translation with tilt. In addition, the competing "standard" model was mathematically proved to be unable to predict the bottom-pivot cone regardless of the values used for parameters in the model.
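
    The two observations labelled (a) and (b) can be written compactly. With gravity g and head linear acceleration a, the otolith organs sense the gravito-inertial acceleration f = g - a; property (a) says the perceived vertical (the internal gravity estimate) rotates toward f, and property (b) says perceived translation is driven by the residual part of f not attributed to gravity. This is the generic GIA decomposition, stated for orientation only, not the specific model equations of the article:

        \[
        \mathbf{f} = \mathbf{g} - \mathbf{a}, \qquad
        \hat{\mathbf{a}} = \hat{\mathbf{g}} - \mathbf{f}.
        \]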

  19. Visual motion perception predicts driving hazard perception ability.

    PubMed

    Lacherez, Philippe; Au, Sandra; Wood, Joanne M

    2014-02-01

    The aim was to examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity, and to examine whether this association could be explained by low-level changes in visual function. A total of 36 visually normal participants (aged 19-80 years) completed a battery of standard vision tests including visual acuity, contrast sensitivity, and automated visual fields, and two tests of motion perception: sensitivity for movement of a drifting Gabor stimulus and sensitivity for displacement in a random dot kinematogram (Dmin). Participants also completed a hazard perception test (HPT), which measured participants' response times to hazards embedded in video recordings of real-world driving and which has been shown to be linked to crash risk. Dmin for the random dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship involving the HPT and motion sensitivity for the random dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards independent of other areas of visual function and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception to develop better interventions to improve road safety. © 2012 The Authors. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.

  20. Reverse control for humanoid robot task recognition.

    PubMed

    Hak, Sovannara; Mansard, Nicolas; Stasse, Olivier; Laumond, Jean Paul

    2012-12-01

    Efficient methods to perform motion recognition have been developed using statistical tools. Those methods rely on primitive learning in a suitable space, for example, the latent space of the joint angles and/or adequate task spaces. Learned primitives are often sequential: A motion is segmented according to the time axis. When working with a humanoid robot, a motion can be decomposed into parallel subtasks. For example, in a waiter scenario, the robot has to keep some plates horizontal with one of its arms while placing a plate on the table with its free hand. Recognition can thus not be limited to one task per consecutive segment of time. The method presented in this paper takes advantage of the knowledge of what tasks the robot is able to do, and of how the motion is generated from this set of known controllers, to reverse-engineer an observed motion. This analysis is intended to recognize parallel tasks that have been used to generate a motion. The method relies on the task-function formalism and the projection operation into the null space of a task to decouple the controllers. The approach is successfully applied to a real robot to disambiguate motion in different scenarios where two motions look similar but have different purposes.
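
    The null-space projection referred to here is the standard operator of the task-function framework: for a task with Jacobian J, the projector N = I - J⁺J (with J⁺ the Moore-Penrose pseudoinverse) maps joint velocities onto motions that leave that task unaffected, which is what allows the contributions of different controllers to be decoupled. The sketch below is a generic numerical illustration with a placeholder Jacobian, not the authors' recognition pipeline.

        # Generic illustration of task null-space projection (task-function formalism):
        # N = I - pinv(J) @ J sends joint velocities into the subspace that does not
        # disturb the task whose Jacobian is J.
        import numpy as np

        def null_space_projector(J):
            """Projector onto the null space of an m x n task Jacobian J."""
            return np.eye(J.shape[1]) - np.linalg.pinv(J) @ J

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            J = rng.normal(size=(3, 7))       # placeholder: 3-DoF task, 7-DoF robot
            qdot = rng.normal(size=7)         # an arbitrary joint-velocity command
            qdot_null = null_space_projector(J) @ qdot
            print("task velocity after projection ~ 0:",
                  np.allclose(J @ qdot_null, 0.0, atol=1e-10))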

  1. Perception of linear horizontal self-motion induced by peripheral vision /linearvection/ - Basic characteristics and visual-vestibular interactions

    NASA Technical Reports Server (NTRS)

    Berthoz, A.; Pavard, B.; Young, L. R.

    1975-01-01

    The basic characteristics of the sensation of linear horizontal motion have been studied. Objective linear motion was induced by means of a moving cart. Visually induced linear motion perception (linearvection) was obtained by projection of moving images at the periphery of the visual field. Image velocity and luminance thresholds for the appearance of linearvection have been measured and are in the range of those for image motion detection (without a sensation of self-motion) by the visual system. Latencies of onset are around 1 sec, and short-term adaptation has been shown. The dynamic range of the visual analyzer, as judged by frequency analysis, is lower than that of the vestibular analyzer. Conflicting situations in which visual cues contradict vestibular and other proprioceptive cues show, in the case of linearvection, a dominance of vision, which supports the idea of an essential, although not independent, role of vision in self-motion perception.

  2. The effect of visual-motion time delays on pilot performance in a pursuit tracking task

    NASA Technical Reports Server (NTRS)

    Miller, G. K., Jr.; Riley, D. R.

    1976-01-01

    A study has been made to determine the effect of visual-motion time delays on pilot performance of a simulated pursuit tracking task. Three interrelated major effects have been identified: task difficulty, motion cues, and time delays. As task difficulty, as determined by airplane handling qualities or target frequency, increases, the amount of acceptable time delay decreases. However, when relatively complete motion cues are included in the simulation, the pilot can maintain his performance for considerably longer time delays. In addition, the number of degrees of freedom of motion employed is a significant factor.

  3. The perception of object versus objectless motion.

    PubMed

    Hock, Howard S; Nichols, David F

    2013-05-01

    Wertheimer's (Zeitschrift für Psychologie und Physiologie der Sinnesorgane, 61:161-265, 1912) classical distinction between beta (object) and phi (objectless) motion is elaborated here in a series of experiments concerning competition between two qualitatively different motion percepts, induced by sequential changes in luminance for two-dimensional geometric objects composed of rectangular surfaces. One of these percepts is of spreading-luminance motion that continuously sweeps across the entire object; it exhibits shape invariance and is perceived most strongly for fast speeds. Significantly for the characterization of phi as objectless motion, the spreading luminance does not involve surface boundaries or any other feature; the percept is driven solely by spatiotemporal changes in luminance. Alternatively, and for relatively slow speeds, a discrete series of edge motions can be perceived in the direction opposite to spreading-luminance motion. Akin to beta motion, the edges appear to move through intermediate positions within the object's changing surfaces. Significantly for the characterization of beta as object motion, edge motion exhibits shape dependence and is based on the detection of oppositely signed changes in contrast (i.e., counterchange) for features essential to the determination of an object's shape, the boundaries separating its surfaces. These results are consistent with area MT neurons that differ with respect to speed preference (Newsome et al., Journal of Neurophysiology, 55:1340-1351, 1986) and shape dependence (Zeki, Journal of Physiology, 236:549-573, 1974).

  4. Unconscious Local Motion Alters Global Image Speed

    PubMed Central

    Khuu, Sieu K.; Chung, Charles Y. L.; Lord, Stephanie; Pearson, Joel

    2014-01-01

    Accurate motion perception of self and object speed is crucial for successful interaction in the world. The context in which we make such speed judgments has a profound effect on their accuracy. Misperceptions of motion speed caused by the context can have drastic consequences in real world situations, but they also reveal much about the underlying mechanisms of motion perception. Here we show that motion signals suppressed from awareness can warp simultaneous conscious speed perception. In Experiment 1, we measured global speed discrimination thresholds using an annulus of 8 local Gabor elements. We show that physically removing local elements from the array attenuated global speed discrimination. However, removing awareness of the local elements only had a small effect on speed discrimination. That is, unconscious local motion elements contributed to global conscious speed perception. In Experiment 2 we measured the global speed of the moving Gabor patterns, when half the elements moved at different speeds. We show that global speed averaging occurred regardless of whether local elements were removed from awareness, such that the speed of invisible elements continued to be averaged together with the visible elements to determine the global speed. These data suggest that contextual motion signals outside of awareness can both boost and affect our experience of motion speed, and suggest that such pooling of motion signals occurs before the conscious extraction of the surround motion speed. PMID:25503603

  5. Clinical Assessment of Stereoacuity and 3-D Stereoscopic Entertainment

    PubMed Central

    Tidbury, Laurence P.; Black, Robert H.; O’Connor, Anna R.

    2015-01-01

    Background/Aims: The perception of compelling depth is often reported in individuals in whom no clinically measurable stereoacuity is apparent. We aim to investigate the potential cause of this finding by varying the amount of stereopsis available to the subject, and assessing their perception of depth when viewing 3-D video clips and a Nintendo 3DS. Methods: Monocular blur was used to vary interocular VA difference, consequently creating 4 levels of measurable binocular deficit from normal stereoacuity to suppression. Stereoacuity was assessed at each level using the TNO, Preschool Randot®, Frisby, the FD2, and Distance Randot®. Subjects also completed an object depth identification task using the Nintendo 3DS, a static 3DTV stereoacuity test, and a 3-D perception rating task of 6 video clips. Results: As interocular VA differences increased, stereoacuity of the 57 subjects (aged 16–62 years) decreased (e.g., 110”, 280”, 340”, and suppression). The ability to correctly identify depth on the Nintendo 3DS remained at 100% until suppression of one eye occurred. The perception of a compelling 3-D effect when viewing the video clips was rated high until suppression of one eye occurred, at which point the 3-D effect was still reported as fairly evident. Conclusion: If an individual has any level of measurable stereoacuity, the perception of 3-D when viewing stereoscopic entertainment is present. The presence of motion in stereoscopic video appears to provide cues to depth where static cues are not sufficient. This suggests there is a need for a dynamic test of stereoacuity to be developed, to allow fully informed patient management decisions to be made. PMID:26669421

  6. The effect of occlusion therapy on motion perception deficits in amblyopia.

    PubMed

    Giaschi, Deborah; Chapman, Christine; Meier, Kimberly; Narasimhan, Sathyasri; Regan, David

    2015-09-01

    There is growing evidence for deficits in motion perception in amblyopia, but these are rarely assessed clinically. In this prospective study we examined the effect of occlusion therapy on motion-defined form perception and multiple-object tracking. Participants included children (3-10 years old) with unilateral anisometropic and/or strabismic amblyopia who were currently undergoing occlusion therapy and age-matched control children with normal vision. At the start of the study, deficits in motion-defined form perception were present in at least one eye in 69% of the children with amblyopia. These deficits were still present at the end of the study in 55% of the amblyopia group. For multiple-object tracking, deficits were present initially in 64% and finally in 55% of the children with amblyopia, even after completion of occlusion therapy. Many of these deficits persisted in spite of an improvement in amblyopic eye visual acuity in response to occlusion therapy. The prevalence of motion perception deficits in amblyopia, as well as their resistance to occlusion therapy, supports the need for new approaches to amblyopia treatment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Alterations to global but not local motion processing in long-term ecstasy (MDMA) users.

    PubMed

    White, Claire; Brown, John; Edwards, Mark

    2014-07-01

    Growing evidence indicates that the main psychoactive ingredient in the illegal drug "ecstasy" (methylenedioxymethamphetamine) causes reduced activity in the serotonin and gamma-aminobutyric acid (GABA) systems in humans. On the basis of substantial serotonin input to the occipital lobe, recent research investigated visual processing in long-term users and found a larger magnitude of the tilt aftereffect, interpreted to reflect broadened orientation tuning bandwidths. Further research found higher orientation discrimination thresholds and reduced long-range interactions in the primary visual area of ecstasy users. The aim of the present research was to investigate whether serotonin-mediated V1 visual processing deficits in ecstasy users extend to motion processing mechanisms. Forty-five participants (21 controls, 24 drug users) completed two psychophysical studies: a direction discrimination study directly measured local motion processing in V1, while a motion coherence task tested global motion processing in area V5/MT. "Primary" ecstasy users (n = 18), those without substantial polydrug use, had significantly lower global motion thresholds than controls [p = 0.027, Cohen's d = 0.78 (large)], indicating increased sensitivity to global motion stimuli, but no difference in local motion processing (p = 0.365). These results extend previous research investigating the long-term effects of illicit drugs on visual processing. Two possible explanations are explored: diffuse attentional processes may be facilitating spatial pooling of motion signals in users. Alternatively, it may be that a GABA-mediated disruption to V5/MT processing is reducing spatial suppression and therefore improving global motion perception in ecstasy users.

  8. Contrast effects on speed perception for linear and radial motion.

    PubMed

    Champion, Rebecca A; Warren, Paul A

    2017-11-01

    Speed perception is vital for safe activity in the environment. However, considerable evidence suggests that perceived speed changes as a function of stimulus contrast, with some investigators suggesting that this might have meaningful real-world consequences (e.g. driving in fog). In the present study we investigate whether the neural effects of contrast on speed perception occur at the level of local or global motion processing. To do this we examine both speed discrimination thresholds and contrast-dependent speed perception for two global motion configurations that have matched local spatio-temporal structure. Specifically, we compare linear and radial configurations, the latter of which arises very commonly due to self-movement. In Experiment 1 the stimuli comprised circular grating patches. In Experiment 2, to match stimuli even more closely, motion was presented in multiple local Gabor patches equidistant from central fixation. Each patch contained identical linear motion but the global configuration was consistent with either linear or radial motion. In both experiments, discrimination thresholds and contrast-induced speed biases were similar in linear and radial conditions. These results suggest that contrast-based speed effects occur only at the level of local motion processing, irrespective of global structure. This result is interpreted in the context of previous models of speed perception and evidence suggesting differences in perceived speed of locally matched linear and radial stimuli. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Brief report: altered horizontal binding of single dots to coherent motion in autism.

    PubMed

    David, Nicole; Rose, Michael; Schneider, Till R; Vogeley, Kai; Engel, Andreas K

    2010-12-01

    Individuals with autism often show a fragmented way of perceiving their environment, suggesting a disorder of information integration, possibly due to disrupted communication between brain areas. We investigated thirteen individuals with high-functioning autism (HFA) and thirteen healthy controls using the metastable motion quartet, a stimulus consisting of two dots alternately presented at four locations of a hypothetical square, thereby inducing an apparent motion percept. This percept is vertical or horizontal, the latter requiring binding of motion signals across cerebral hemispheres. Decreasing the horizontal distance between dots could facilitate horizontal percepts. We found evidence for altered horizontal binding in HFA: Individuals with HFA needed stronger facilitation to experience horizontal motion. These data are interpreted in light of reduced cross-hemispheric communication.

  10. Thresholds for the perception of whole-body linear sinusoidal motion in the horizontal plane

    NASA Technical Reports Server (NTRS)

    Mah, Robert W.; Young, Laurence R.; Steele, Charles R.; Schubert, Earl D.

    1989-01-01

    An improved linear sled has been developed to provide precise motion stimuli without generating perceptible extraneous motion cues (a noiseless environment). A modified adaptive forced-choice method was employed to determine perceptual thresholds to whole-body linear sinusoidal motion in 25 subjects. Thresholds for the detection of movement in the horizontal plane were found to be lower than those reported previously. At frequencies of 0.2 to 0.5 Hz, thresholds were shown to be independent of frequency, while at frequencies of 1.0 to 3.0 Hz, sensitivity decreased with increasing frequency, indicating that the perceptual process is not sensitive to the rate of change of acceleration of the motion stimulus. The results suggest that the perception of motion behaves as an integrating accelerometer with a bandwidth of at least 3 Hz.

  11. Self-motion perception and vestibulo-ocular reflex during whole body yaw rotation in standing subjects: the role of head position and neck proprioception.

    PubMed

    Panichi, Roberto; Botti, Fabio Massimo; Ferraresi, Aldo; Faralli, Mario; Kyriakareli, Artemis; Schieppati, Marco; Pettorossi, Vito Enrico

    2011-04-01

    Self-motion perception and the vestibulo-ocular reflex (VOR) were studied during whole body yaw rotation in the dark at different static head positions. Rotations consisted of four cycles of symmetric sinusoidal and asymmetric oscillations. Self-motion perception was evaluated by measuring the ability of subjects to manually track a static remembered target. VOR was recorded separately and the slow phase eye position (SPEP) was computed. Three different static head yaw deviations (active and passive) relative to the trunk (0°, 45° to the right, and 45° to the left) were examined. Active head deviations had a significant effect during asymmetric oscillation: movement perception was enhanced when the head was kept turned toward the side of body rotation and decreased in the opposite direction. Conversely, passive head deviations had no effect on movement perception. Further, vibration (100 Hz) of the neck muscles splenius capitis and sternocleidomastoideus markedly influenced perceived rotation during asymmetric oscillation. On the other hand, the SPEP of the VOR was modulated by active head deviation, but was not influenced by neck muscle vibration. Through its effects on motion perception and reflex gain, head position improved gaze stability and enhanced self-motion perception in the direction of the head deviation. Copyright © 2010 Elsevier B.V. All rights reserved.

  12. Stereomotion speed perception is contrast dependent

    NASA Technical Reports Server (NTRS)

    Brooks, K.

    2001-01-01

    The effect of contrast on the perception of stimulus speed for stereomotion and monocular lateral motion was investigated for successive matches in random-dot stimuli. The familiar 'Thompson effect'--that a reduction in contrast leads to a reduction in perceived speed--was found in similar proportions for both binocular images moving in depth, and for monocular images translating laterally. This result is consistent with the idea that the monocular motion system has a significant input to the stereomotion system, and dominates the speed percept for approaching motion.

  13. Aging and Vision

    PubMed Central

    Owsley, Cynthia

    2010-01-01

    Given the increasing size of the older adult population in many countries, there is a pressing need to identify the nature of aging-related vision impairments, their underlying mechanisms, and how they impact older adults’ performance of everyday visual tasks. The results of this research can then be used to develop and evaluate interventions to slow or reverse aging-related declines in vision, thereby improving quality of life. Here we summarize salient developments in research on aging and vision over the past 25 years, focusing on spatial contrast sensitivity, vision under low luminance, temporal sensitivity and motion perception, and visual processing speed. PMID:20974168

  14. Holistic processing of static and moving faces.

    PubMed

    Zhao, Mintao; Bülthoff, Isabelle

    2017-07-01

    Humans' face processing ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces that move most of the time. However, how facial movements affect one core aspect of this ability, holistic face processing, remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based face processing by manipulating the presence of facial motion during study and at test in a composite face task. The results showed that rigidly moving faces were processed as holistically as static faces (Experiment 1). Holistic processing of moving faces persisted whether facial motion was presented during study, at test, or both (Experiment 2). Moreover, when faces were inverted to eliminate the contributions of both an upright face template and observers' expertise with upright faces, rigid facial motion facilitated holistic face processing (Experiment 3). Thus, holistic processing represents a general principle of face perception that applies to both static and dynamic faces, rather than being limited to static faces. These results support an emerging view that both perceiver-based and face-based factors contribute to holistic face processing, and they offer new insights on what underlies holistic face processing, how the different sources of information supporting it interact with each other, and why facial motion may affect face recognition and holistic face processing differently. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  15. Neural dynamics of motion processing and speed discrimination.

    PubMed

    Chey, J; Grossberg, S; Mingolla, E

    1998-09-01

    A neural network model of visual motion perception and speed discrimination is presented. The model shows how a distributed population code of speed tuning that realizes a size-speed correlation can be derived from the simplest mechanisms whereby activations of multiple spatially short-range filters of different size are transformed into speed-tuned cell responses. These mechanisms use transient cell responses to moving stimuli, output thresholds that covary with filter size, and competition. These mechanisms are proposed to occur in the V1-->MT cortical processing stream. The model reproduces empirically derived speed discrimination curves and simulates data showing how visual speed perception and discrimination can be affected by stimulus contrast, duration, dot density and spatial frequency. Model motion mechanisms are analogous to mechanisms that have been used to model 3-D form and figure-ground perception. The model forms the front end of a larger motion processing system that has been used to simulate how global motion capture occurs, and how spatial attention is drawn to moving forms. It provides a computational foundation for an emerging neural theory of 3-D form and motion perception.

  16. Affordance Realization in Climbing: Learning and Transfer.

    PubMed

    Seifert, Ludovic; Orth, Dominic; Mantel, Bruno; Boulanger, Jérémie; Hérault, Romain; Dicks, Matt

    2018-01-01

    The aim of this study was to investigate how the affordances of an indoor climbing wall changed for intermediate climbers following a period of practice during which hold orientation was manipulated within a learning and transfer protocol. The learning protocol consisted of four sessions, in which eight climbers randomly ascended three different routes of fixed absolute difficulty (5c on the French scale), as fluently as possible. All three routes were 10.3 m in height and composed of 20 hand-holds at the same locations on an artificial climbing wall; only hold orientations were altered: (i) a horizontal-edge route (H) was designed to afford horizontal hold grasping, (ii) a vertical-edge route (V) afforded vertical hold grasping, and (iii) a double-edge route (D) was designed to afford both horizontal and vertical hold grasping. Five inertial measurement units (IMUs) (3D accelerometer, 3D gyroscope, 3D magnetometer) were attached to the hip, feet and forearms to analyze the vertical acceleration and direction (3D unitary vector) of each limb and the hip in ambient space during the entire ascent. Segmentation and classification processes supported detection of movement and stationary phases for each IMU. Depending on whether limbs and/or hip were moving, a decision tree distinguished four states of behavior: stationary (absence of limb and hip motion), hold exploration (absence of hip motion but at least one limb in motion), hip movement (hip in motion but absence of limb motion) and global motion (hip in motion and at least one limb in motion). Results showed that with practice, the learners decreased the relative duration of hold exploration, suggesting that they improved affordance perception of hold grasp-ability. The number of performatory movements also decreased as performance increased during learning sessions, confirming that participants' climbing efficacy improved as a function of practice. Lastly, the results were more marked for the H route, while the D route led to longer relative stationary duration and a shorter relative duration of performatory states. Together, these findings emphasized the benefit of manipulating task constraints to promote safe exploration during learning, which is particularly relevant in extreme sports involving climbing tasks.
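
    The four behavioral states described above follow directly from two binary observations per time window: whether the hip IMU is in a movement phase and whether at least one limb IMU is. A minimal sketch of that classification step is given below; the function name and the use of pre-computed booleans are illustrative assumptions, and the segmentation that produces those booleans from raw IMU signals is not shown.

        # Minimal sketch of the four-state behavioral classification described above.
        # Inputs are booleans assumed to come from the segmentation step (whether the
        # hip and each limb IMU are currently in a movement phase).

        from typing import Sequence

        def classify_state(hip_moving: bool, limbs_moving: Sequence[bool]) -> str:
            any_limb = any(limbs_moving)
            if not hip_moving and not any_limb:
                return "stationary"        # no limb and no hip motion
            if not hip_moving:
                return "hold exploration"  # at least one limb moving, hip still
            if not any_limb:
                return "hip movement"      # hip moving, limbs still
            return "global motion"         # hip and at least one limb moving

        # Example: hip still while the right hand explores a hold.
        print(classify_state(False, [True, False, False, False]))  # -> hold exploration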

  17. The effect of visual-motion time-delays on pilot performance in a simulated pursuit tracking task

    NASA Technical Reports Server (NTRS)

    Miller, G. K., Jr.; Riley, D. R.

    1977-01-01

    An experimental study was made to determine the effect on pilot performance of time delays in the visual and motion feedback loops of a simulated pursuit tracking task. Three major interrelated factors were identified: task difficulty either in the form of airplane handling qualities or target frequency, the amount and type of motion cues, and time delay itself. In general, the greater the task difficulty, the smaller the time delay that could exist without degrading pilot performance. Conversely, the greater the motion fidelity, the greater the time delay that could be tolerated. The effect of motion was, however, pilot dependent.

  18. Do we track what we see? Common versus independent processing for motion perception and smooth pursuit eye movements: a review.

    PubMed

    Spering, Miriam; Montagnini, Anna

    2011-04-22

    Many neurophysiological studies in monkeys have indicated that visual motion information for the guidance of perception and smooth pursuit eye movements is - at an early stage - processed in the same visual pathway in the brain, crucially involving the middle temporal area (MT). However, these studies left some questions unanswered: Are perception and pursuit driven by the same or independent neuronal signals within this pathway? Are the perceptual interpretation of visual motion information and the motor response to visual signals limited by the same source of neuronal noise? Here, we review psychophysical studies that were motivated by these questions and compared perception and pursuit behaviorally in healthy human observers. We further review studies that focused on the interaction between perception and pursuit. The majority of results point to similarities between perception and pursuit, but dissociations were also reported. We discuss recent developments in this research area and conclude with suggestions for common and separate principles for the guidance of perceptual and motor responses to visual motion information. Copyright © 2010 Elsevier Ltd. All rights reserved.

  19. Computer-enhanced laparoscopic training system (CELTS): bridging the gap.

    PubMed

    Stylopoulos, N; Cotin, S; Maithel, S K; Ottensmeyer, M; Jackson, P G; Bardsley, R S; Neumann, P F; Rattner, D W; Dawson, S L

    2004-05-01

    There is a large and growing gap between the need for better surgical training methodologies and the systems currently available for such training. In an effort to bridge this gap and overcome the disadvantages of the training simulators now in use, we developed the Computer-Enhanced Laparoscopic Training System (CELTS). CELTS is a computer-based system capable of tracking the motion of laparoscopic instruments and providing feedback about performance in real time. CELTS consists of a mechanical interface, a customizable set of tasks, and an Internet-based software interface. The special cognitive and psychomotor skills a laparoscopic surgeon should master were explicitly defined and transformed into quantitative metrics based on kinematics analysis theory. A single global standardized and task-independent scoring system utilizing a z-score statistic was developed. Validation exercises were performed. The scoring system clearly revealed a gap between experts and trainees, irrespective of the task performed; none of the trainees obtained a score above the threshold that distinguishes the two groups. Moreover, CELTS provided educational feedback by identifying the key factors that contributed to the overall score. Among the defined metrics, depth perception, smoothness of motion, instrument orientation, and the outcome of the task are major indicators of performance and key parameters that distinguish experts from trainees. Time and path length alone, which are the most commonly used metrics in currently available systems, are not considered good indicators of performance. CELTS is a novel and standardized skills trainer that combines the advantages of computer simulation with the features of the traditional and popular training boxes. CELTS can easily be used with a wide array of tasks and ensures comparability across different training conditions. This report further shows that a set of appropriate and clinically relevant performance metrics can be defined and a standardized scoring system can be designed.
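
    The abstract describes a single task-independent score built from kinematic metrics using a z-score statistic. The sketch below shows one plausible way such a composite could be assembled; the metric names, reference statistics, and equal weighting are illustrative assumptions rather than the actual CELTS implementation.

        # Illustrative sketch of a z-score-based composite performance score.
        # Reference means/SDs would normally come from an expert cohort; the numbers
        # and the equal weighting used here are assumptions for illustration only.

        import statistics

        # Hypothetical expert reference statistics (lower raw value = better performance).
        REFERENCE = {
            "path_length_cm":   (250.0, 40.0),
            "depth_perception": (80.0, 15.0),
            "smoothness":       (12.0, 3.0),
            "task_time_s":      (95.0, 20.0),
        }

        def composite_score(trial: dict) -> float:
            """Average z-score relative to the expert reference; values near zero or
            above indicate performance close to or better than the expert mean."""
            zs = []
            for metric, (mu, sd) in REFERENCE.items():
                zs.append((mu - trial[metric]) / sd)  # positive when better than the expert mean
            return statistics.fmean(zs)

        trainee_trial = {"path_length_cm": 410.0, "depth_perception": 130.0,
                         "smoothness": 21.0, "task_time_s": 160.0}
        print(f"composite z-score: {composite_score(trainee_trial):.2f}")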

  20. Time course influences transfer of visual perceptual learning across spatial location.

    PubMed

    Larcombe, S J; Kennard, C; Bridge, H

    2017-06-01

    Visual perceptual learning describes the improvement of visual perception with repeated practice. Previous research has established that the learning effects of perceptual training may be transferable to untrained stimulus attributes such as spatial location under certain circumstances. However, the mechanisms involved in transfer have not yet been fully elucidated. Here, we investigated the effect of altering training time course on the transferability of learning effects. Participants were trained on a motion direction discrimination task or a sinusoidal grating orientation discrimination task in a single visual hemifield. The 4000 training trials were either condensed into one day, or spread evenly across five training days. When participants were trained over a five-day period, there was transfer of learning to both the untrained visual hemifield and the untrained task. In contrast, when the same amount of training was condensed into a single day, participants did not show any transfer of learning. Thus, learning time course may influence the transferability of perceptual learning effects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Tilt and Translation Motion Perception during Off Vertical Axis Rotation

    NASA Technical Reports Server (NTRS)

    Wood, Scott J.; Reschke, Millard F.; Clement, Gilles

    2006-01-01

    The effect of stimulus frequency on tilt and translation motion perception was studied during constant velocity off-vertical axis rotation (OVAR), and compared to the effect of stimulus frequency on eye movements. Fourteen healthy subjects were rotated in darkness about their longitudinal axis 10 deg and 20 deg off-vertical at 0.125 Hz, and 20 deg off-vertical at 0.5 Hz. Oculomotor responses were recorded using videography, and perceived motion was evaluated using verbal reports and a joystick with four degrees of freedom (pitch and roll tilt, medial-lateral and anterior-posterior translation). During the lower frequency OVAR, subjects reported the perception of progressing along the edge of a cone. During higher frequency OVAR, subjects reported the perception of progressing along the edge of an upright cylinder. The modulation of both tilt recorded from the joystick and ocular torsion significantly increased as the tilt angle increased from 10 deg to 20 deg at 0.125 Hz, and then decreased at 0.5 Hz. Both tilt perception and torsion slightly lagged head orientation at 0.125 Hz. The phase lag of torsion increased at 0.5 Hz, while the phase of tilt perception did not change as a function of frequency. The amplitude of both translation perception recorded from the joystick and horizontal eye movements was negligible at 0.125 Hz and increased as a function of stimulus frequency. While the phase lead of horizontal eye movements decreased at 0.5 Hz, the phase of translation perception did not vary with stimulus frequency and was similar to the phase of tilt perception during all conditions. During dynamic linear acceleration in the absence of other sensory input (canal, vision), a change in stimulus frequency alone elicits similar changes in the amplitude of both self-motion perception and eye movements. However, in contrast to the eye movements, the phase of both perceived tilt and translation motion is not altered by stimulus frequency. We conclude that the neural processing to distinguish tilt and translation linear acceleration stimuli differs between eye movements and motion perception.

  2. Sliding Mode Control of Real-Time PNU Vehicle Driving Simulator and Its Performance Evaluation

    NASA Astrophysics Data System (ADS)

    Lee, Min Cheol; Park, Min Kyu; Yoo, Wan Suk; Son, Kwon; Han, Myung Chul

    This paper introduces an economical and effective full-scale driving simulator, and its control, for the study of human sensibility and the development of new vehicle parts. Real-time robust control that accurately reproduces various vehicle motions can be a difficult task because the motion platform is a nonlinear, complex system. This study proposes a sliding mode controller with a perturbation compensator using an observer-based fuzzy adaptive network (FAN). The control algorithm is designed to solve the chattering problem of sliding mode control and to select adequate fuzzy parameters for the perturbation compensator. To evaluate the trajectory control performance of the proposed approach, a tracking control experiment is carried out on the developed simulator, named PNUVDS. The driving performance of the simulator is then evaluated using the motion perception and sensibility of several drivers under various driving conditions.
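
    As background for the chattering issue mentioned above, the sketch below shows a textbook sliding mode control law in which the discontinuous sign function is replaced by a boundary-layer saturation, one common way to reduce chattering. It is not the paper's controller: the observer-based fuzzy adaptive network perturbation compensator is omitted, and all gains are hypothetical.

        # Generic sliding mode control sketch for a second-order tracking error, with a
        # boundary-layer saturation replacing sign() to limit chattering. This is a
        # textbook form, not the paper's FAN-compensated controller; gains are made up.

        import numpy as np

        LAMBDA = 5.0   # slope of the sliding surface s = de + LAMBDA * e
        K = 50.0       # switching gain (must dominate the bounded perturbation)
        PHI = 0.05     # boundary-layer width used to smooth the switching term

        def smc_control(e: float, de: float, u_eq: float) -> float:
            """u = equivalent control minus a smoothed switching term driving s -> 0."""
            s = de + LAMBDA * e
            sat = np.clip(s / PHI, -1.0, 1.0)  # saturation instead of sign() reduces chattering
            return u_eq - K * sat

        # Example: tracking error 0.02 m, error rate 0.05 m/s, nominal equivalent control 3.0 N.
        print(smc_control(0.02, 0.05, 3.0))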

  3. Misperceptions in the Trajectories of Objects undergoing Curvilinear Motion

    PubMed Central

    Yilmaz, Ozgur; Tripathy, Srimant P.; Ogmen, Haluk

    2012-01-01

    Trajectory perception is crucial in scene understanding and action. A variety of trajectory misperceptions have been reported in the literature. In this study, we quantify earlier observations that reported distortions in the perceived shape of bilinear trajectories and in the perceived positions of their deviation. Our results show that bilinear trajectories with deviation angles smaller than 90 deg are perceived as smoothed, while those with deviation angles larger than 90 deg are perceived as sharpened. The sharpening effect is weaker in magnitude than the smoothing effect. We also found a correlation between the distortion of perceived trajectories and the perceived shift of their deviation point. Finally, using a dual-task paradigm, we found that reducing attentional resources allocated to the moving target causes an increase in the perceived shift of the deviation point of the trajectory. We interpret these results in the context of interactions between motion and position systems. PMID:22615775

  4. What can fish brains tell us about visual perception?

    PubMed Central

    Rosa Salva, Orsola; Sovrano, Valeria Anna; Vallortigara, Giorgio

    2014-01-01

    Fish are a complex taxonomic group, whose diversity and distance from other vertebrates well suits the comparative investigation of brain and behavior: in fish species we observe substantial differences with respect to the telencephalic organization of other vertebrates and an astonishing variety in the development and complexity of pallial structures. We will concentrate on the contribution of research on fish behavioral biology for the understanding of the evolution of the visual system. We shall review evidence concerning perceptual effects that reflect fundamental principles of the visual system functioning, highlighting the similarities and differences between distant fish groups and with other vertebrates. We will focus on perceptual effects reflecting some of the main tasks that the visual system must attain. In particular, we will deal with subjective contours and optical illusions, invariance effects, second order motion and biological motion and, finally, perceptual binding of object properties in a unified higher level representation. PMID:25324728

  5. Perceived state of self during motion can differentially modulate numerical magnitude allocation.

    PubMed

    Arshad, Q; Nigmatullina, Y; Roberts, R E; Goga, U; Pikovsky, M; Khan, S; Lobo, R; Flury, A-S; Pettorossi, V E; Cohen-Kadosh, R; Malhotra, P A; Bronstein, A M

    2016-09-01

    Although a direct relationship between numerical allocation and spatial attention has been proposed, recent research suggests that these processes are not directly coupled. In keeping with this, spatial attention shifts induced either via visual or vestibular motion can modulate numerical allocation in some circumstances but not in others. In addition to shifting spatial attention, visual or vestibular motion paradigms also (i) elicit compensatory eye movements which themselves can influence numerical processing and (ii) alter the perceptual state of 'self', inducing changes in bodily self-consciousness impacting upon cognitive mechanisms. Thus, the precise mechanism by which motion modulates numerical allocation remains unknown. We sought to investigate the influence that different perceptual experiences of motion have upon numerical magnitude allocation while controlling for both eye movements and task-related effects. We first used optokinetic visual motion stimulation (OKS) to elicit the perceptual experience of either 'visual world' or 'self'-motion during which eye movements were identical. In a second experiment, we used a vestibular protocol examining the effects of perceived and subliminal angular rotations in darkness, which also provoked identical eye movements. We observed that during the perceptual experience of 'visual world' motion, rightward OKS biased judgments towards smaller numbers, whereas leftward OKS biased judgments towards larger numbers. During the perceptual experience of 'self-motion', judgments were biased towards larger numbers irrespective of the OKS direction. Contrastingly, vestibular motion perception was found not to modulate numerical magnitude allocation, nor was there any differential modulation when comparing 'perceived' vs. 'subliminal' rotations. We provide a novel demonstration that numerical magnitude allocation can be differentially modulated by the perceptual state of self during visual but not vestibular mediated motion. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  6. Sensory perception. [role of human vestibular system in dynamic space perception and manual vehicle control

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The effect of motion on the ability of men to perform a variety of control actions was investigated. Special attention was given to experimental and analytical studies of the dynamic characteristics of the otoliths and semicircular canals using a two axis angular motion simulator and a one axis linear motion simulator.

  7. The Perception of Biological and Mechanical Motion in Female Fragile X Premutation Carriers

    ERIC Educational Resources Information Center

    Keri, Szabolcs; Benedek, Gyorgy

    2010-01-01

    Previous studies reported impaired visual information processing in patients with fragile x syndrome and in premutation carriers. In this study, we assessed the perception of biological motion (a walking point-light character) and mechanical motion (a rotating shape) in 25 female fragile x premutation carriers and in 20 healthy non-carrier…

  8. The Effects of Self-Esteem and Task Perception on Goal Setting, Efficacy, and Task Performance.

    ERIC Educational Resources Information Center

    Tang, Thomas Li-Ping; Reynolds, David Bryan

    This study examined the effects of self-esteem and task perception on goal setting, efficacy, and task performance in 52 recreational dart throwers who were members of two dart organizations. Task perception was manipulated by asking each dart thrower to compete against self, a difficult competitor, and an easy competitor on the same dart game.…

  9. Phase-linking and the perceived motion during off-vertical axis rotation

    PubMed Central

    Wood, Scott J.; McCollum, Gin

    2010-01-01

    Human off-vertical axis rotation (OVAR) in the dark typically produces perceived motion about a cone, the amplitude of which changes as a function of frequency. This perception is commonly attributed to the fact that both the OVAR and the conical motion have a gravity vector that rotates about the subject. Little-known, however, is that this rotating-gravity explanation for perceived conical motion is inconsistent with basic observations about self-motion perception: (a) that the perceived vertical moves toward alignment with the gravito-inertial acceleration (GIA) and (b) that perceived translation arises from perceived linear acceleration, as derived from the portion of the GIA not associated with gravity. Mathematically proved in this article is the fact that during OVAR these properties imply mismatched phase of perceived tilt and translation, in contrast to the common perception of matched phases which correspond to conical motion with pivot at the bottom. This result demonstrates that an additional perceptual rule is required to explain perception in OVAR. This study investigates, both analytically and computationally, the phase relationship between tilt and translation at different stimulus rates—slow (45°/s) and fast (180°/s), and the three-dimensional shape of predicted perceived motion, under different sets of hypotheses about self-motion perception. We propose that for human motion perception, there is a phase-linking of tilt and translation movements to construct a perception of one’s overall motion path. Alternative hypotheses to achieve the phase match were tested with three-dimensional computational models, comparing the output with published experimental reports. The best fit with experimental data was the hypothesis that the phase of perceived translation was linked to perceived tilt, while the perceived tilt was determined by the GIA. This hypothesis successfully predicted the bottom-pivot cone commonly reported and a reduced sense of tilt during fast OVAR. Similar considerations apply to the hilltop illusion often reported during horizontal linear oscillation. Known response properties of central neurons are consistent with this ability to phase-link translation with tilt. In addition, the competing “standard” model was mathematically proved to be unable to predict the bottom-pivot cone regardless of the values used for parameters in the model. PMID:19937069
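
    To illustrate the mismatch the article derives, the sketch below simulates the lateral GIA component in head coordinates during OVAR, forms a low-pass estimate of gravity (standing in for perceived tilt) and treats the residual as perceived linear acceleration (standing in for perceived translation), then compares their phases. The roughly 90-degree offset that results is the inconsistency the phase-linking hypothesis is proposed to resolve; the filter form and all parameter values are assumptions for illustration, not the authors' model.

        # Illustrative sketch (not the authors' model): during OVAR the lateral GIA
        # component in head coordinates is sinusoidal. A low-pass gravity estimate
        # (perceived tilt) and the residual (perceived linear acceleration) end up
        # roughly 90 deg apart in phase, i.e., mismatched. Parameters are hypothetical.

        import numpy as np

        g = 9.81                    # m/s^2
        tilt = np.radians(20.0)     # assumed off-vertical angle of the rotation axis
        freq = 45.0 / 360.0         # 45 deg/s rotation -> 0.125 Hz
        tau = 2.0                   # assumed low-pass time constant (s)

        dt = 0.01
        t = np.arange(0.0, 120.0, dt)
        gia_y = g * np.sin(tilt) * np.sin(2 * np.pi * freq * t)  # lateral GIA component

        # First-order low-pass filter as a simple stand-in for the perceived-vertical estimate.
        g_hat = np.zeros_like(gia_y)
        for i in range(1, len(t)):
            g_hat[i] = g_hat[i - 1] + (dt / tau) * (gia_y[i] - g_hat[i - 1])

        accel = gia_y - g_hat       # residual, read as perceived linear acceleration

        def phase_deg(sig, time, f):
            """Phase (deg) of the component of sig at frequency f."""
            return np.degrees(np.angle(np.sum(sig * np.exp(-2j * np.pi * f * time))))

        sel = t >= 56.0             # steady-state window spanning whole stimulus cycles
        offset = phase_deg(g_hat[sel], t[sel], freq) - phase_deg(accel[sel], t[sel], freq)
        offset = (offset + 180.0) % 360.0 - 180.0
        print(f"tilt vs. translation phase offset ~ {offset:.0f} deg")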

  10. A novel method for quantifying arm motion similarity.

    PubMed

    Zhi Li; Hauser, Kris; Roldan, Jay Ryan; Milutinovic, Dejan; Rosen, Jacob

    2015-08-01

    This paper proposes a novel task-independent method for quantifying arm motion similarity that can be applied to any kinematic/dynamic variable of interest. Given two arm motions for the same task, not necessarily with the same completion time, it plots the time-normalized curves against one another and generates four real-valued features. To validate these features we apply them to quantify the relationship between healthy and paretic arm motions of chronic stroke patients. Studying both unimanual and bimanual arm motions of eight chronic stroke patients, we find that inter-arm coupling, which tends to synchronize the motions of both arms in bimanual motions, has a stronger effect at task-relevant joints than at task-irrelevant joints. The analysis also revealed that the paretic arm suppresses the shoulder flexion of the non-paretic arm, while the latter encourages the shoulder rotation of the former.
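
    The core of the comparison is resampling two recordings of the same task, possibly with different completion times, onto a common normalized time axis so their curves can be plotted against one another. The sketch below shows that resampling step plus one simple illustrative comparison value; the paper's four features themselves are not reproduced here, and the example traces are hypothetical.

        # Sketch of the time-normalization step: two recordings of the same task with
        # different durations are resampled onto a common normalized time axis (0..1).
        # The RMS difference below is only an illustrative comparison value, not one
        # of the paper's four features.

        import numpy as np

        def time_normalize(signal: np.ndarray, n_samples: int = 101) -> np.ndarray:
            """Resample a 1-D kinematic trace onto n_samples points of normalized time."""
            src = np.linspace(0.0, 1.0, len(signal))
            dst = np.linspace(0.0, 1.0, n_samples)
            return np.interp(dst, src, signal)

        # Two hypothetical elbow-angle traces for the same task, different completion times.
        motion_a = np.sin(np.linspace(0, np.pi, 180))        # ~1.8 s at 100 Hz
        motion_b = 0.9 * np.sin(np.linspace(0, np.pi, 240))  # ~2.4 s at 100 Hz

        a_n = time_normalize(motion_a)
        b_n = time_normalize(motion_b)
        print(f"RMS difference on normalized time base: {np.sqrt(np.mean((a_n - b_n) ** 2)):.3f}")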

  11. Stimulus factors in motion perception and spatial orientation

    NASA Technical Reports Server (NTRS)

    Post, R. B.; Johnson, C. A.

    1984-01-01

    The Malcolm horizon, or Peripheral Vision Horizon Device (PVHD), utilizes a large projected light stimulus as an attitude indicator in order to achieve a more compelling sense of roll than is obtained with smaller devices. The basic principle is that the larger stimulus is more similar to the visibility of a real horizon during roll, and does not require fixation and attention to the degree that smaller displays do. Successful implementation of such a device requires adjustment of the parameters of the visual stimulus so that its effects on motion perception and spatial orientation are optimized. With this purpose in mind, the effects of relevant image variables on the perception of object motion, self-motion and spatial orientation are reviewed.

  12. How does cognitive load influence speech perception? An encoding hypothesis.

    PubMed

    Mitterer, Holger; Mattys, Sven L

    2017-01-01

    Two experiments investigated the conditions under which cognitive load exerts an effect on the acuity of speech perception. These experiments extend earlier research by using a different speech perception task (four-interval oddity task) and by implementing cognitive load through a task often thought to be modular, namely, face processing. In the cognitive-load conditions, participants were required to remember two faces presented before the speech stimuli. In Experiment 1, performance in the speech-perception task under cognitive load was not impaired in comparison to a no-load baseline condition. In Experiment 2, we modified the load condition minimally such that it required encoding of the two faces simultaneously with the speech stimuli. As a reference condition, we also used a visual search task that in earlier experiments had led to poorer speech perception. Both concurrent tasks led to decrements in the speech task. The results suggest that speech perception is affected even by loads thought to be processed modularly, and that, critically, encoding in working memory might be the locus of interference.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kennedy, R.P.; Kincaid, R.H.; Short, S.A.

    This report presents the results of part of a two-task study on the engineering characterization of earthquake ground motion for nuclear power plant design. Task I of the study, which is presented in NUREG/CR-3805, Vol. 1, developed a basis for selecting design response spectra taking into account the characteristics of free-field ground motion found to be significant in causing structural damage. Task II incorporates additional considerations of effects of spatial variations of ground motions and soil-structure interaction on foundation motions and structural response. The results of Task II are presented in four parts: (1) effects of ground motion characteristics on structural response of a typical PWR reactor building with localized nonlinearities and soil-structure interaction effects; (2) empirical data on spatial variations of earthquake ground motion; (3) soil-structure interaction effects on structural response; and (4) summary of conclusions and recommendations based on Tasks I and II studies. This report presents the results of the first part of Task II. The results of the other parts will be presented in NUREG/CR-3805, Vols. 3 to 5.

  14. Visual Control for Multirobot Organized Rendezvous.

    PubMed

    Lopez-Nicolas, G; Aranda, M; Mezouar, Y; Sagues, C

    2012-08-01

    This paper addresses the problem of visual control of a set of mobile robots. In our framework, the perception system consists of an uncalibrated flying camera performing an unknown general motion. The robots are assumed to undergo planar motion considering nonholonomic constraints. The goal of the control task is to drive the multirobot system to a desired rendezvous configuration relying solely on visual information given by the flying camera. The desired multirobot configuration is defined with an image of the set of robots in that configuration without any additional information. We propose a homography-based framework relying on the homography induced by the multirobot system that gives a desired homography to be used to define the reference target, and a new image-based control law that drives the robots to the desired configuration by imposing a rigidity constraint. This paper extends our previous work, and the main contributions are that the motion constraints on the flying camera are removed, the control law is improved by reducing the number of required steps, the stability of the new control law is proved, and real experiments are provided to validate the proposal.
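
    As background for the homography-based framework described above, the sketch below shows the standard direct linear transform (DLT) estimation of a planar homography from point correspondences, which such an approach presupposes. It is not the paper's control law, and the example correspondences are hypothetical.

        # Standard DLT estimation of a planar homography from point correspondences.
        # Shown only as background for the homography-based framework described above;
        # it is not the paper's control law, and the example points are made up.

        import numpy as np

        def estimate_homography(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
            """src, dst: (N, 2) arrays of corresponding image points, N >= 4."""
            rows = []
            for (x, y), (xp, yp) in zip(src, dst):
                rows.append([-x, -y, -1, 0, 0, 0, x * xp, y * xp, xp])
                rows.append([0, 0, 0, -x, -y, -1, x * yp, y * yp, yp])
            _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
            H = vt[-1].reshape(3, 3)
            return H / H[2, 2]

        src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
        dst = np.array([[10, 12], [52, 14], [50, 58], [8, 55]], dtype=float)
        print(np.round(estimate_homography(src, dst), 3))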

  15. Effects of proposed preflight adaptation training on eye movements, self-motion perception, and motion sickness - A progress report

    NASA Technical Reports Server (NTRS)

    Parker, D. E.; Reschke, M. F.; Von Gierke, H. E.; Lessard, C. S.

    1987-01-01

    The preflight adaptation trainer (PAT) was designed to produce rearranged relationships between visual and otolith signals analogous to those experienced in space. Investigations have been undertaken with three prototype trainers. The results indicated that exposure to the PAT sensory rearrangement altered self-motion perception, induced motion sickness, and changed the amplitude and phase of the horizontal eye movements evoked by roll stimulation. However, the changes were inconsistent.

  16. Inferring the direction of implied motion depends on visual awareness

    PubMed Central

    Faivre, Nathan; Koch, Christof

    2014-01-01

    Visual awareness of an event, object, or scene is, by essence, an integrated experience, whereby different visual features composing an object (e.g., orientation, color, shape) appear as an unified percept and are processed as a whole. Here, we tested in human observers whether perceptual integration of static motion cues depends on awareness by measuring the capacity to infer the direction of motion implied by a static visible or invisible image under continuous flash suppression. Using measures of directional adaptation, we found that visible but not invisible implied motion adaptors biased the perception of real motion probes. In a control experiment, we found that invisible adaptors implying motion primed the perception of subsequent probes when they were identical (i.e., repetition priming), but not when they only shared the same direction (i.e., direction priming). Furthermore, using a model of visual processing, we argue that repetition priming effects are likely to arise as early as in the primary visual cortex. We conclude that although invisible images implying motion undergo some form of nonconscious processing, visual awareness is necessary to make inferences about motion direction. PMID:24706951

  17. Inferring the direction of implied motion depends on visual awareness.

    PubMed

    Faivre, Nathan; Koch, Christof

    2014-04-04

    Visual awareness of an event, object, or scene is, by essence, an integrated experience, whereby different visual features composing an object (e.g., orientation, color, shape) appear as an unified percept and are processed as a whole. Here, we tested in human observers whether perceptual integration of static motion cues depends on awareness by measuring the capacity to infer the direction of motion implied by a static visible or invisible image under continuous flash suppression. Using measures of directional adaptation, we found that visible but not invisible implied motion adaptors biased the perception of real motion probes. In a control experiment, we found that invisible adaptors implying motion primed the perception of subsequent probes when they were identical (i.e., repetition priming), but not when they only shared the same direction (i.e., direction priming). Furthermore, using a model of visual processing, we argue that repetition priming effects are likely to arise as early as in the primary visual cortex. We conclude that although invisible images implying motion undergo some form of nonconscious processing, visual awareness is necessary to make inferences about motion direction.

  18. Modeling a space-variant cortical representation for apparent motion.

    PubMed

    Wurbs, Jeremy; Mingolla, Ennio; Yazdanbakhsh, Arash

    2013-08-06

    Receptive field sizes of neurons in early primate visual areas increase with eccentricity, as does temporal processing speed. The fovea is evidently specialized for slow, fine movements while the periphery is suited for fast, coarse movements. In either the fovea or periphery discrete flashes can produce motion percepts. Grossberg and Rudd (1989) used traveling Gaussian activity profiles to model long-range apparent motion percepts. We propose a neural model constrained by physiological data to explain how signals from retinal ganglion cells to V1 affect the perception of motion as a function of eccentricity. Our model incorporates cortical magnification, receptive field overlap and scatter, and spatial and temporal response characteristics of retinal ganglion cells for cortical processing of motion. Consistent with the finding of Baker and Braddick (1985), in our model the maximum flash distance that is perceived as an apparent motion (Dmax) increases linearly as a function of eccentricity. Baker and Braddick (1985) made qualitative predictions about the functional significance of both stimulus and visual system parameters that constrain motion perception, such as an increase in the range of detectable motions as a function of eccentricity and the likely role of higher visual processes in determining Dmax. We generate corresponding quantitative predictions for those functional dependencies for individual aspects of motion processing. Simulation results indicate that the early visual pathway can explain the qualitative linear increase of Dmax data without reliance on extrastriate areas, but that those higher visual areas may serve as a modulatory influence on the exact Dmax increase.

  19. A Rotational Motion Perception Neural Network Based on Asymmetric Spatiotemporal Visual Information Processing.

    PubMed

    Hu, Bin; Yue, Shigang; Zhang, Zhuhong

    All complex motion patterns can be decomposed into several elements, including translation, expansion/contraction, and rotational motion. In biological vision systems, scientists have found that specific types of visual neurons have specific preferences to each of the three motion elements. There are computational models on translation and expansion/contraction perceptions; however, little has been done in the past to create computational models for rotational motion perception. To fill this gap, we proposed a neural network that utilizes a specific spatiotemporal arrangement of asymmetric lateral inhibited direction selective neural networks (DSNNs) for rotational motion perception. The proposed neural network consists of two parts-presynaptic and postsynaptic parts. In the presynaptic part, there are a number of lateral inhibited DSNNs to extract directional visual cues. In the postsynaptic part, similar to the arrangement of the directional columns in the cerebral cortex, these direction selective neurons are arranged in a cyclic order to perceive rotational motion cues. In the postsynaptic network, the delayed excitation from each direction selective neuron is multiplied by the gathered excitation from this neuron and its unilateral counterparts depending on which rotation, clockwise (cw) or counter-cw (ccw), to perceive. Systematic experiments under various conditions and settings have been carried out and validated the robustness and reliability of the proposed neural network in detecting cw or ccw rotational motion. This research is a critical step further toward dynamic visual information processing.
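
    The cyclic postsynaptic arrangement can be illustrated with a much-simplified correlation scheme: excitation sampled around a ring is delayed and multiplied with the current excitation of the neighboring position in the clockwise or counter-clockwise direction, then summed; whichever product is larger signals the rotation direction. The sketch below implements only this simplified idea, not the paper's lateral-inhibited DSNN model, and all parameters are illustrative.

        # Simplified correlation-type sketch of the cyclic arrangement described above.
        # Delayed excitation at each ring position is multiplied with the current
        # excitation of its clockwise or counter-clockwise neighbor and summed.
        # This is only an illustration, not the paper's full DSNN model.

        import numpy as np

        N = 16                      # number of positions around the ring
        positions = np.arange(N)

        def ring_activity(peak: float) -> np.ndarray:
            """Bump of excitation centred on 'peak' (in ring units), wrapped around the ring."""
            d = np.minimum(np.abs(positions - peak), N - np.abs(positions - peak))
            return np.exp(-(d ** 2) / 4.0)

        def rotation_signals(prev: np.ndarray, curr: np.ndarray):
            cw = float(np.sum(prev * np.roll(curr, -1)))   # delayed unit i with current unit i+1
            ccw = float(np.sum(prev * np.roll(curr, 1)))   # delayed unit i with current unit i-1
            return cw, ccw

        # Activity peak steps from position 3 to 4 between the delayed and current frames,
        # i.e. the pattern rotates in the +1 (clockwise, by the convention used here) direction.
        cw, ccw = rotation_signals(ring_activity(3.0), ring_activity(4.0))
        print("clockwise" if cw > ccw else "counter-clockwise", round(cw, 3), round(ccw, 3))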

  20. The effect of concurrent hand movement on estimated time to contact in a prediction motion task.

    PubMed

    Zheng, Ran; Maraj, Brian K V

    2018-04-27

    In many activities, we need to predict the arrival of an occluded object. This action is called prediction motion or motion extrapolation. Previous researchers have found that both eye tracking and an internal clocking model are involved in the prediction motion task. Additionally, it has been reported that concurrent hand movement facilitates eye tracking of an externally generated target in a tracking task, even if the target is occluded. The present study examined the effect of concurrent hand movement on the estimated time to contact (TTC) in a prediction motion task. We found that accurate and inaccurate concurrent hand movements had opposite effects on eye tracking accuracy and estimated TTC in the prediction motion task. That is, accurate concurrent hand tracking enhanced eye tracking accuracy and tended to increase the precision of the estimated TTC, whereas inaccurate concurrent hand tracking decreased eye tracking accuracy and disrupted the estimated TTC. However, eye tracking accuracy does not determine the precision of the estimated TTC.
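
    For readers unfamiliar with the task, the simplest account of prediction motion is first-order extrapolation: the occluded target is assumed to keep its last visible speed, so the estimated time to contact is the remaining distance divided by that speed. The sketch below illustrates this baseline computation; the numbers are hypothetical and it does not model the hand-movement manipulation studied here.

        # First-order (constant-velocity) extrapolation sketch of a prediction-motion /
        # time-to-contact (TTC) estimate: after the target is occluded, its arrival time
        # is predicted from its last visible position and speed. Numbers are illustrative.

        def estimated_ttc(distance_to_contact_m: float, speed_at_occlusion_mps: float) -> float:
            """Seconds until the occluded target reaches the contact point, assuming constant speed."""
            if speed_at_occlusion_mps <= 0:
                raise ValueError("target must be approaching the contact point")
            return distance_to_contact_m / speed_at_occlusion_mps

        # Target disappears 0.6 m from the contact point while moving at 0.4 m/s.
        print(f"estimated TTC: {estimated_ttc(0.6, 0.4):.2f} s")  # -> 1.50 s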

  1. Effects of Frequency and Motion Paradigm on Perception of Tilt and Translation During Periodic Linear Acceleration

    NASA Technical Reports Server (NTRS)

    Beaton, K. H.; Holly, J. E.; Clement, G. R.; Wood, Scott J.

    2009-01-01

    Previous studies have demonstrated an effect of frequency on the gain of tilt and translation perception. Results from different motion paradigms are often combined to extend the stimulus frequency range. For example, Off-Vertical Axis Rotation (OVAR) and Variable Radius Centrifugation (VRC) are useful to test low frequencies of linear acceleration at amplitudes that would require impractical sled lengths. The purpose of this study was to compare roll-tilt and lateral translation motion perception in 12 healthy subjects across four paradigms: OVAR, VRC, sled translation and rotation about an earth-horizontal axis. Subjects were oscillated in darkness at six frequencies from 0.01875 to 0.6 Hz (peak acceleration equivalent to 10 deg, less for sled motion below 0.15 Hz). Subjects verbally described the amplitude of perceived tilt and translation, and used a joystick to indicate the direction of motion. Consistent with previous reports, tilt perception gain decreased as a function of stimulus frequency in the motion paradigms without concordant canal tilt cues (OVAR, VRC and Sled). Translation perception gain was negligible at low stimulus frequencies and increased at higher frequencies. There were no significant differences between the phase of tilt and translation, nor did the phase significantly vary across stimulus frequency. There were differences in perception gain across the different paradigms. Paradigms that included actual tilt stimuli had the larger tilt gains, and paradigms that included actual translation stimuli had larger translation gains. In addition, the frequency at which there was a crossover of tilt and translation gains appeared to vary across motion paradigm between 0.15 and 0.3 Hz. Since the linear acceleration in the head lateral plane was equivalent across paradigms, differences in gain may be attributable to the presence of linear accelerations in orthogonal directions and/or cognitive aspects based on the expected motion paths.

  2. Perceptions of Classroom Assessment Tasks: An Interplay of Gender, Subject Area, and Grade Level

    ERIC Educational Resources Information Center

    Alkharusi, Hussain Ali; Al-Hosni, Salim

    2015-01-01

    This study investigates students' perceptions of classroom assessment tasks as a function of gender, subject area, and grade level. Data from 2753 students on Dorman and Knightley's (2006) Perceptions of Assessment Tasks Inventory (PATI) were analyzed in a MANOVA design. Results showed that students tended to hold positive perceptions of their…

  3. Object motion perception is shaped by the motor control mechanism of ocular pursuit.

    PubMed

    Schweigart, G; Mergner, T; Barnes, G R

    2003-02-01

    It is still a matter of debate whether the control of smooth pursuit eye movements involves an internal drive signal from object motion perception. We measured human target velocity and target position perceptions and compared them with the presumed pursuit control mechanism (model simulations). We presented normal subjects (Ns) and vestibular loss patients (Ps) with visual target motion in space. Concurrently, a visual background was presented, which was kept stationary or was moved with or against the target (five combinations). The motion stimuli consisted of smoothed ramp displacements with different dominant frequencies and peak velocities (0.05, 0.2, 0.8 Hz; 0.2-25.6 degrees /s). Subjects always pursued the target with their eyes. In a first experiment they gave verbal magnitude estimates of perceived target velocity in space and of self-motion in space. The target velocity estimates of both Ns and Ps tended to saturate at 0.8 Hz and with peak velocities >3 degrees /s. Below these ranges the velocity estimates showed a pronounced modulation in relation to the relative target-to-background motion ('background effect'; for example, 'background with'-motion decreased and 'against'-motion increased perceived target velocity). Pronounced only in Ps and not in Ns, there was an additional modulation in relation to the relative head-to-background motion, which co-varied with an illusion of self-motion in space (circular vection, CV) in Ps. In a second experiment, subjects performed retrospective reproduction of perceived target start and end positions with the same stimuli. Perceived end position was essentially veridical in both Ns and Ps (apart from a small constant offset). Reproduced start position showed an almost negligible background effect in Ns. In contrast, it showed a pronounced modulation in Ps, which again was related to CV. The results were compared with simulations of a model that we have recently presented for velocity control of eye pursuit. We found that the main features of target velocity perception (in terms of dynamics and modulation by background) closely correspond to those of the internal drive signal for target pursuit, compatible with the notion of a common source of both the perception and the drive signal. In contrast, the eye pursuit movement is almost free of the background effect. As an explanation, we postulate that the target-to-background component in the target pursuit drive signal largely neutralises the background-to-eye retinal slip signal (optokinetic reflex signal) that feeds into the eye premotor mechanism as a competitor of the target retinal slip signal. An extension of the model allowed us to simulate also the findings of the target position perception. It is assumed to be represented in a perceptual channel that is distinct from the velocity perception, building on an efference copy of the essentially accurate eye position. We hold that other visuomotor behaviour, such as target reaching with the hand, builds mainly on this target position percept and therefore is not contaminated by the background effect in the velocity percept. Generally, the coincidence of an erroneous velocity percept and an almost perfect eye pursuit movement during background motion is discussed as an instructive example of an action-perception dissociation. This dissociation cannot be taken to indicate that the two functions are internally represented in separate brain control systems, but rather reflects the intimate coupling between both functions.

  4. Spatiotemporal Processing in Crossmodal Interactions for Perception of the External World: A Review

    PubMed Central

    Hidaka, Souta; Teramoto, Wataru; Sugita, Yoichi

    2015-01-01

    Research regarding crossmodal interactions has garnered much interest in the last few decades. A variety of studies have demonstrated that multisensory information (vision, audition, tactile sensation, and so on) can perceptually interact in the spatial and temporal domains. Findings regarding crossmodal interactions in the spatiotemporal domain (i.e., motion processing) have also been reported, with updates in the last few years. In this review, we summarize past and recent findings on spatiotemporal processing in crossmodal interactions regarding perception of the external world. A traditional view regarding crossmodal interactions holds that vision is superior to audition in spatial processing, but audition is dominant over vision in temporal processing. Similarly, vision is considered to have dominant effects over the other sensory modalities (i.e., visual capture) in spatiotemporal processing. However, recent findings demonstrate that sound could have a driving effect on visual motion perception. Moreover, studies regarding perceptual associative learning reported that, after association is established between a sound sequence without spatial information and visual motion information, the sound sequence could trigger visual motion perception. Other sensory information, such as motor action or smell, has also exhibited similar driving effects on visual motion perception. Additionally, recent brain imaging studies demonstrate that similar activation patterns could be observed in several brain areas, including the motion processing areas, between spatiotemporal information from different sensory modalities. Based on these findings, we suggest that multimodal information could mutually interact in spatiotemporal processing in perception of the external world and that common underlying perceptual and neural mechanisms may exist for spatiotemporal processing. PMID:26733827

  5. Object Manipulation and Motion Perception: Evidence of an Influence of Action Planning on Visual Processing

    ERIC Educational Resources Information Center

    Lindemann, Oliver; Bekkering, Harold

    2009-01-01

    In 3 experiments, the authors investigated the bidirectional coupling of perception and action in the context of object manipulations and motion perception. Participants prepared to grasp an X-shaped object along one of its 2 diagonals and to rotate it in a clockwise or a counterclockwise direction. Action execution had to be delayed until the…

  6. Altered perception of apparent motion in schizophrenia spectrum disorder.

    PubMed

    Tschacher, Wolfgang; Dubouloz, Priscilla; Meier, Rahel; Junghan, Uli

    2008-06-30

    Apparent motion (AM), the Gestalt perception of motion in the absence of physical motion, was used to study perceptual organization and neurocognitive binding in schizophrenia. Associations between AM perception and psychopathology as well as meaningful subgroups were sought. Circular and stroboscopic AM stimuli were presented to 68 schizophrenia spectrum patients and healthy participants. Psychopathology was measured using the Positive and Negative Syndrome Scale (PANSS). Psychopathology was related to AM perception differentially: Positive and disorganization symptoms were linked to reduced gestalt stability; negative symptoms, excitement and depression had opposite regression weights. Dimensions of psychopathology thus have opposing effects on gestalt perception. It was generally found that AM perception was closely associated with psychopathology. No difference existed between patients and controls, but two latent classes were found. Class A members who had low levels of AM stability made up the majority of inpatients and control subjects; such participants were generally young and male, with short reaction times. Class B typically contained outpatients and some control subjects; participants in class B were older and showed longer reaction times. Hence AM perceptual dysfunctions are not specific for schizophrenia, yet AM may be a promising stage marker.

  7. A closed-loop neurobotic system for fine touch sensing

    NASA Astrophysics Data System (ADS)

    Bologna, L. L.; Pinoteau, J.; Passot, J.-B.; Garrido, J. A.; Vogel, J.; Ros Vidal, E.; Arleo, A.

    2013-08-01

    Objective. Fine touch sensing relies on peripheral-to-central neurotransmission of somesthetic percepts, as well as on active motion policies shaping tactile exploration. This paper presents a novel neuroengineering framework for robotic applications based on the multistage processing of fine tactile information in the closed action-perception loop. Approach. The integrated system modules focus on (i) neural coding principles of spatiotemporal spiking patterns at the periphery of the somatosensory pathway, (ii) probabilistic decoding mechanisms mediating cortical-like tactile recognition and (iii) decision-making and low-level motor adaptation underlying active touch sensing. We probed the resulting neural architecture through a Braille reading task. Main results. Our results on the peripheral encoding of primary contact features are consistent with experimental data on human slow-adapting type I mechanoreceptors. They also suggest second-order processing by cuneate neurons may resolve perceptual ambiguities, contributing to a fast and highly performing online discrimination of Braille inputs by a downstream probabilistic decoder. The implemented multilevel adaptive control provides robustness to motion inaccuracy, while making the number of finger accelerations covariate with Braille character complexity. The resulting modulation of fingertip kinematics is coherent with that observed in human Braille readers. Significance. This work provides a basis for the design and implementation of modular neuromimetic systems for fine touch discrimination in robotics.
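
    The "downstream probabilistic decoder" is described only at a high level in this record. Purely as an illustration of the general idea, and not of the architecture used in the paper, an online decoder that accumulates Poisson log-likelihoods over incoming spike counts could be sketched as follows (all rates and class labels are made up):

    ```python
    # Generic accumulating probabilistic decoder (Poisson naive Bayes over spike
    # counts), shown only to illustrate online discrimination of tactile inputs;
    # it is NOT the architecture used in the paper, and all rates are hypothetical.
    import numpy as np
    from scipy.stats import poisson

    class OnlineDecoder:
        def __init__(self, rates: np.ndarray):
            # rates[c, i]: expected spike count of afferent i per time bin for class c
            self.rates = rates
            self.log_post = np.zeros(rates.shape[0])   # flat prior in log space

        def update(self, counts: np.ndarray) -> int:
            # accumulate log-likelihoods of the new spike counts for each class
            self.log_post += poisson.logpmf(counts, self.rates).sum(axis=1)
            return int(np.argmax(self.log_post))       # current best class

    rates = np.array([[2.0, 0.5, 1.0],   # hypothetical firing profile, class "A"
                      [0.5, 2.0, 1.0]])  # hypothetical firing profile, class "B"
    decoder = OnlineDecoder(rates)
    print(decoder.update(np.array([3, 0, 1])))   # evidence favours class 0 ("A")
    ```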

  8. Perception of 3D spatial relations for 3D displays

    NASA Astrophysics Data System (ADS)

    Rosen, Paul; Pizlo, Zygmunt; Hoffmann, Christoph; Popescu, Voicu S.

    2004-05-01

    We test perception of 3D spatial relations in 3D images rendered by a 3D display (Perspecta from Actuality Systems) and compare it to that of a high-resolution flat panel display. 3D images provide the observer with such depth cues as motion parallax and binocular disparity. Our 3D display is a device that renders a 3D image by displaying, in rapid succession, radial slices through the scene on a rotating screen. The image is contained in a glass globe and can be viewed from virtually any direction. In the psychophysical experiment several families of 3D objects are used as stimuli: primitive shapes (cylinders and cuboids), and complex objects (multi-story buildings, cars, and pieces of furniture). Each object has at least one plane of symmetry. On each trial an object or its "distorted" version is shown at an arbitrary orientation. The distortion is produced by stretching an object in a random direction by 40%. This distortion must eliminate the symmetry of an object. The subject's task is to decide whether or not the presented object is distorted under several viewing conditions (monocular/binocular, with/without motion parallax, and near/far). The subject's performance is measured by the discriminability d', which is a conventional dependent variable in signal detection experiments.
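
    The discriminability d' mentioned at the end is the standard signal-detection sensitivity index; a minimal sketch of how it is typically computed from hit and false-alarm rates is shown below (the example rates are hypothetical, not the study's data):

    ```python
    # Illustrative sketch: computing the sensitivity index d' from hit and
    # false-alarm rates in a yes/no "distorted vs. undistorted" judgment.
    from scipy.stats import norm

    def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
        """d' = z(hit rate) - z(false-alarm rate), using the inverse normal CDF."""
        return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

    # Example: 85% hits ("distorted" reported when the object was distorted)
    # and 20% false alarms ("distorted" reported for the symmetric object).
    print(d_prime(0.85, 0.20))  # ~1.88
    ```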

  9. Individual differences in visual motion perception and neurotransmitter concentrations in the human brain.

    PubMed

    Takeuchi, Tatsuto; Yoshimoto, Sanae; Shimada, Yasuhiro; Kochiyama, Takanori; Kondo, Hirohito M

    2017-02-19

    Recent studies have shown that interindividual variability can be a rich source of information regarding the mechanism of human visual perception. In this study, we examined the mechanisms underlying interindividual variability in the perception of visual motion, one of the fundamental components of visual scene analysis, by measuring neurotransmitter concentrations using magnetic resonance spectroscopy. First, by psychophysically examining two types of motion phenomena (motion assimilation and motion contrast), we found that, following the presentation of the same stimulus, some participants perceived motion assimilation, while others perceived motion contrast. Furthermore, we found that the concentration of the excitatory neurotransmitter glutamate-glutamine (Glx) in the dorsolateral prefrontal cortex (Brodmann area 46) was positively correlated with the participant's tendency toward motion assimilation over motion contrast; however, this effect was not observed in the visual areas. The concentration of the inhibitory neurotransmitter γ-aminobutyric acid had only a weak effect compared with that of Glx. We conclude that an excitatory process in the suprasensory area is important for an individual's tendency to determine antagonistically perceived visual motion phenomena. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).

  10. Experimental measurements of motion cue effects on STOL approach tasks

    NASA Technical Reports Server (NTRS)

    Ringland, R. F.; Stapleford, R. L.

    1972-01-01

    An experimental program to investigate the effects of motion cues on STOL approach is presented. The simulator used was the Six-Degrees-of-Freedom Motion Simulator (S.01) at Ames Research Center of NASA which has ±2.7 m travel longitudinally and laterally and ±2.5 m travel vertically. Three major experiments, characterized as tracking tasks, were conducted under fixed and moving base conditions: (1) A simulated IFR approach of the Augmentor Wing Jet STOL Research Aircraft (AWJSRA), (2) a simulated VFR task with the same aircraft, and (3) a single-axis task having only linear acceleration as the motion cue. Tracking performance was measured in terms of the variances of several motion variables, pilot vehicle describing functions, and pilot commentary.
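
    Pilot-vehicle describing functions such as those reported here are commonly estimated from cross- and auto-spectral densities of the tracking signals; the following is a generic sketch of that kind of estimate using synthetic placeholder signals, not the NASA data:

    ```python
    # Illustrative sketch: estimating a frequency response H(f) = P_xy(f) / P_xx(f)
    # from a tracking-task input x(t) (e.g., displayed error) and output y(t)
    # (e.g., pilot control deflection). Signals below are synthetic placeholders.
    import numpy as np
    from scipy.signal import csd, welch

    fs = 100.0                      # sample rate, Hz (assumed)
    t = np.arange(0, 120, 1 / fs)   # two minutes of data
    x = np.sin(2 * np.pi * 0.5 * t) + 0.1 * np.random.randn(t.size)   # input
    y = 0.8 * np.roll(x, 20) + 0.1 * np.random.randn(t.size)          # lagged response

    f, Pxy = csd(x, y, fs=fs, nperseg=2048)
    _, Pxx = welch(x, fs=fs, nperseg=2048)
    H = Pxy / Pxx                   # complex frequency-response estimate

    gain_db = 20 * np.log10(np.abs(H))
    phase_deg = np.degrees(np.angle(H))
    ```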

  11. Predicting Students' Academic Achievement: Contributions of Perceptions of Classroom Assessment Tasks and Motivated Learning Strategies

    ERIC Educational Resources Information Center

    Alkharusi, Hussain

    2016-01-01

    Introduction: Students are daily exposed to a variety of assessment tasks in the classroom. It has long been recognized that students' perceptions of the assessment tasks may influence student academic achievement. The present study aimed at predicting academic achievement in mathematics from perceptions of the assessment tasks after controlling…

  12. A Generalized-Compliant-Motion Primitive

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.

    1993-01-01

    Computer program bridges gap between planning and execution of compliant robotic motions developed and installed in control system of telerobot. Called "generalized-compliant-motion primitive," one of several task-execution-primitive computer programs, which receives commands from higher-level task-planning programs and executes commands by generating required trajectories and applying appropriate control laws. Program comprises four parts corresponding to nominal motion, compliant motion, ending motion, and monitoring. Written in C language.
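
    The primitive itself is a C program; purely as an illustration of the four-part structure described above (nominal motion, compliant motion, ending motion, and monitoring), a schematic sketch with hypothetical names might be organized as follows:

    ```python
    # Purely illustrative sketch of a phase-structured motion primitive; this is
    # not the JPL C implementation, and all names/parameters are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class MotionCommand:
        target_pose: tuple        # commanded end pose from a task-planning layer
        force_setpoint: float     # desired contact force during the compliant phase
        timeout_s: float          # monitoring limit for the whole primitive

    class GeneralizedCompliantMotion:
        def execute(self, cmd: MotionCommand) -> str:
            for phase in (self.nominal_motion, self.compliant_motion, self.ending_motion):
                status = self.monitor(phase(cmd), cmd)
                if status != "ok":
                    return status          # monitoring aborts the primitive on violation
            return "done"

        def nominal_motion(self, cmd):     # free-space trajectory toward the target
            return "ok"

        def compliant_motion(self, cmd):   # force-controlled motion at the setpoint
            return "ok"

        def ending_motion(self, cmd):      # terminate contact / retract
            return "ok"

        def monitor(self, phase_status, cmd):  # check limits (force, time, error)
            return phase_status
    ```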

  13. Neuroimaging Evidence for 2 Types of Plasticity in Association with Visual Perceptual Learning.

    PubMed

    Shibata, Kazuhisa; Sasaki, Yuka; Kawato, Mitsuo; Watanabe, Takeo

    2016-09-01

    Visual perceptual learning (VPL) is long-term performance improvement as a result of perceptual experience. It is unclear whether VPL is associated with refinement in representations of the trained feature (feature-based plasticity), improvement in processing of the trained task (task-based plasticity), or both. Here, we provide empirical evidence that VPL of motion detection is associated with both types of plasticity which occur predominantly in different brain areas. Before and after training on a motion detection task, subjects' neural responses to the trained motion stimuli were measured using functional magnetic resonance imaging. In V3A, significant response changes after training were observed specifically to the trained motion stimulus but independently of whether subjects performed the trained task. This suggests that the response changes in V3A represent feature-based plasticity in VPL of motion detection. In V1 and the intraparietal sulcus, significant response changes were found only when subjects performed the trained task on the trained motion stimulus. This suggests that the response changes in these areas reflect task-based plasticity. These results collectively suggest that VPL of motion detection is associated with the 2 types of plasticity, which occur in different areas and therefore have separate mechanisms at least to some degree. © The Author 2016. Published by Oxford University Press.

  14. Automatically Characterizing Sensory-Motor Patterns Underlying Reach-to-Grasp Movements on a Physical Depth Inversion Illusion.

    PubMed

    Nguyen, Jillian; Majmudar, Ushma V; Ravaliya, Jay H; Papathomas, Thomas V; Torres, Elizabeth B

    2015-01-01

    Recently, movement variability has been of great interest to motor control physiologists as it constitutes a physical, quantifiable form of sensory feedback to aid in planning, updating, and executing complex actions. In marked contrast, the psychological and psychiatric arenas mainly rely on verbal descriptions and interpretations of behavior via observation. Consequently, a large gap exists between the body's manifestations of mental states and their descriptions, creating a disembodied approach in the psychological and neural sciences: contributions of the peripheral nervous system to central control, executive functions, and decision-making processes are poorly understood. How do we shift from a psychological, theorizing approach to characterize complex behaviors more objectively? We introduce a novel, objective, statistical framework, and visuomotor control paradigm to help characterize the stochastic signatures of minute fluctuations in overt movements during a visuomotor task. We also quantify a new class of covert movements that spontaneously occur without instruction. These are largely beneath awareness, but inevitably present in all behaviors. The inclusion of these motions in our analyses introduces a new paradigm in sensory-motor integration. As it turns out, these movements, often overlooked as motor noise, contain valuable information that contributes to the emergence of different kinesthetic percepts. We apply these new methods to help better understand perception-action loops. To investigate how perceptual inputs affect reach behavior, we use a depth inversion illusion (DII): the same physical stimulus produces two distinct depth percepts that are nearly orthogonal, enabling a robust comparison of competing percepts. We find that the moment-by-moment empirically estimated motor output variability can inform us of the participants' perceptual states, detecting physiologically relevant signals from the peripheral nervous system that reveal internal mental states evoked by the bi-stable illusion. Our work proposes a new statistical platform to objectively separate changes in visual perception by quantifying the unfolding of movement, emphasizing the importance of including in the motion analyses all overt and covert aspects of motor behavior.

  15. Vestibular signals in primate cortex for self-motion perception.

    PubMed

    Gu, Yong

    2018-04-21

    The vestibular peripheral organs in our inner ears detect transient motion of the head in everyday life. This information is sent to the central nervous system for automatic processes such as vestibulo-ocular reflexes, balance and postural control, and higher cognitive functions including perception of self-motion and spatial orientation. Recent neurophysiological studies have discovered a prominent vestibular network in the primate cerebral cortex. Many of the areas involved are multisensory: their neurons are modulated by both vestibular signals and visual optic flow, potentially facilitating more robust heading estimation through cue integration. Combining psychophysics, computation, physiological recording and causal manipulation techniques, recent work has addressed both the encoding and decoding of vestibular signals for self-motion perception. Copyright © 2018. Published by Elsevier Ltd.

  16. The Correlation between Gifted Students' Cost and Task Value Perceptions towards Mathematics: The Mediating Role of Expectancy Belief

    ERIC Educational Resources Information Center

    Kurnaz, Ahmet

    2018-01-01

    In this study whether the expectancy belief has a mediating role in the correlation between cost value perception and task value perception of gifted students towards mathematics was examined. It is predicted that the correlation between cost value and task value perceptions of gifted students towards mathematics can change according to their…

  17. Motion sickness severity and physiological correlates during repeated exposures to a rotating optokinetic drum

    NASA Technical Reports Server (NTRS)

    Hu, Senqi; Grant, Wanda F.; Stern, Robert M.; Koch, Kenneth L.

    1991-01-01

    Fifty-two subjects were exposed to a rotating optokinetic drum. Ten of these subjects who became motion sick during the first session completed two additional sessions. Subjects' symptoms of motion sickness, perception of self-motion, electrogastrograms (EGGs), heart rate, mean successive differences of R-R intervals (RRI), and skin conductance were recorded for each session. The results from the first session indicated that the development of motion sickness was accompanied by increased EGG 4-9 cpm activity (gastric tachyarrhythmia), decreased mean successive differences of RRI, increased skin conductance levels, and increased self-motion perception. The results from the subjects who had three repeated sessions showed that 4-9 cpm EGG activity, skin conductance levels, perception of self-motion, and symptoms of motion sickness all increased significantly during the drum rotation period of the first session, but increased significantly less during the following sessions. Mean successive differences of RRI decreased significantly during the drum rotation period for the first session, but decreased significantly less during the following sessions. Results show that the development of motion sickness is accompanied by an increase in gastric tachyarrhythmia, an increase in sympathetic activity, and a decrease in parasympathetic activity, and that adaptation to motion sickness is accompanied by the recovery of autonomic nervous system balance.

  18. Tuning self-motion perception in virtual reality with visual illusions.

    PubMed

    Bruder, Gerd; Steinicke, Frank; Wieland, Phil; Lappe, Markus

    2012-07-01

    Motion perception in immersive virtual environments significantly differs from the real world. For example, previous work has shown that users tend to underestimate travel distances in virtual environments (VEs). As a solution to this problem, researchers proposed to scale the mapped virtual camera motion relative to the tracked real-world movement of a user until real and virtual motion are perceived as equal, i.e., real-world movements could be mapped with a larger gain to the VE in order to compensate for the underestimation. However, introducing discrepancies between real and virtual motion can become a problem, in particular, due to misalignments of both worlds and distorted space cognition. In this paper, we describe a different approach that introduces apparent self-motion illusions by manipulating optic flow fields during movements in VEs. These manipulations can affect self-motion perception in VEs, but omit a quantitative discrepancy between real and virtual motions. In particular, we consider to which regions of the virtual view these apparent self-motion illusions can be applied, i.e., the ground plane or peripheral vision. Therefore, we introduce four illusions and show in experiments that optic flow manipulation can significantly affect users' self-motion judgments. Furthermore, we show that with such manipulations of optic flow fields the underestimation of travel distances can be compensated.
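
    The gain-based remapping that the authors contrast their illusion-based approach with can be stated in one line: each tracked real-world displacement is multiplied by a translation gain before being applied to the virtual camera. A minimal sketch (the gain value and names are illustrative, not taken from the paper):

    ```python
    # Illustrative sketch of gain-based motion remapping: each tracked real-world
    # displacement is scaled by a translation gain before being applied to the
    # virtual camera. The gain value is an arbitrary example, not from the paper.
    import numpy as np

    def apply_translation_gain(real_delta: np.ndarray, gain: float = 1.2) -> np.ndarray:
        """Return the virtual camera displacement for one tracking update."""
        return gain * real_delta

    real_step = np.array([0.0, 0.0, 0.5])       # user walked 0.5 m forward
    print(apply_translation_gain(real_step))    # camera moves 0.6 m forward
    ```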

  19. Saccades to future ball location reveal memory-based prediction in a virtual-reality interception task.

    PubMed

    Diaz, Gabriel; Cooper, Joseph; Rothkopf, Constantin; Hayhoe, Mary

    2013-01-16

    Despite general agreement that prediction is a central aspect of perception, there is relatively little evidence concerning the basis on which visual predictions are made. Although both saccadic and pursuit eye-movements reveal knowledge of the future position of a moving visual target, in many of these studies targets move along simple trajectories through a fronto-parallel plane. Here, using a naturalistic and racquet-based interception task in a virtual environment, we demonstrate that subjects make accurate predictions of visual target motion, even when targets follow trajectories determined by the complex dynamics of physical interactions and the head and body are unrestrained. Furthermore, we found that, following a change in ball elasticity, subjects were able to accurately adjust their prebounce predictions of the ball's post-bounce trajectory. This suggests that prediction is guided by experience-based models of how information in the visual image will change over time.

  20. Saccades to future ball location reveal memory-based prediction in a virtual-reality interception task

    PubMed Central

    Diaz, Gabriel; Cooper, Joseph; Rothkopf, Constantin; Hayhoe, Mary

    2013-01-01

    Despite general agreement that prediction is a central aspect of perception, there is relatively little evidence concerning the basis on which visual predictions are made. Although both saccadic and pursuit eye-movements reveal knowledge of the future position of a moving visual target, in many of these studies targets move along simple trajectories through a fronto-parallel plane. Here, using a naturalistic and racquet-based interception task in a virtual environment, we demonstrate that subjects make accurate predictions of visual target motion, even when targets follow trajectories determined by the complex dynamics of physical interactions and the head and body are unrestrained. Furthermore, we found that, following a change in ball elasticity, subjects were able to accurately adjust their prebounce predictions of the ball's post-bounce trajectory. This suggests that prediction is guided by experience-based models of how information in the visual image will change over time. PMID:23325347

  1. HERMIES-3: A step toward autonomous mobility, manipulation, and perception

    NASA Technical Reports Server (NTRS)

    Weisbin, C. R.; Burks, B. L.; Einstein, J. R.; Feezell, R. R.; Manges, W. W.; Thompson, D. H.

    1989-01-01

    HERMIES-III is an autonomous robot comprised of a seven degree-of-freedom (DOF) manipulator designed for human scale tasks, a laser range finder, a sonar array, an omni-directional wheel-driven chassis, multiple cameras, and a dual computer system containing a 16-node hypercube expandable to 128 nodes. The current experimental program involves performance of human-scale tasks (e.g., valve manipulation, use of tools), integration of a dexterous manipulator and platform motion in geometrically complex environments, and effective use of multiple cooperating robots (HERMIES-IIB and HERMIES-III). The environment in which the robots operate has been designed to include multiple valves, pipes, meters, obstacles on the floor, valves occluded from view, and multiple paths of differing navigation complexity. The ongoing research program supports the development of autonomous capability for HERMIES-IIB and III to perform complex navigation and manipulation under time constraints, while dealing with imprecise sensory information.

  2. Relation of motion sickness susceptibility to vestibular and behavioral measures of orientation

    NASA Technical Reports Server (NTRS)

    Peterka, Robert J.

    1994-01-01

    The objective of this proposal is to determine the relationship of motion sickness susceptibility to vestibulo-ocular reflexes (VOR), motion perception, and behavioral utilization of sensory orientation cues for the control of postural equilibrium. The work is focused on reflexes and motion perception associated with pitch and roll movements that stimulate the vertical semicircular canals and otolith organs of the inner ear. This work is relevant to the space motion sickness problem since 0 g related sensory conflicts between vertical canal and otolith motion cues are a likely cause of space motion sickness. Results of experimentation are summarized and modifications to a two-axis rotation device are described. Abstracts of a number of papers generated during the reporting period are appended.

  3. Relevance of motion-related assessment metrics in laparoscopic surgery.

    PubMed

    Oropesa, Ignacio; Chmarra, Magdalena K; Sánchez-González, Patricia; Lamata, Pablo; Rodrigues, Sharon P; Enciso, Silvia; Sánchez-Margallo, Francisco M; Jansen, Frank-Willem; Dankelman, Jenny; Gómez, Enrique J

    2013-06-01

    Motion metrics have become an important source of information when addressing the assessment of surgical expertise. However, their direct relationship with the different surgical skills has not been fully explored. The purpose of this study is to investigate the relevance of motion-related metrics in the evaluation processes of basic psychomotor laparoscopic skills and their correlation with the different abilities sought to measure. A framework for task definition and metric analysis is proposed. An explorative survey was first conducted with a board of experts to identify metrics to assess basic psychomotor skills. Based on the output of that survey, 3 novel tasks for surgical assessment were designed. Face and construct validation was performed, with focus on motion-related metrics. Tasks were performed by 42 participants (16 novices, 22 residents, and 4 experts). Movements of the laparoscopic instruments were registered with the TrEndo tracking system and analyzed. Time, path length, and depth showed construct validity for all 3 tasks. Motion smoothness and idle time also showed validity for tasks involving bimanual coordination and tasks requiring a more tactical approach, respectively. Additionally, motion smoothness and average speed showed a high internal consistency, proving them to be the most task-independent of all the metrics analyzed. Motion metrics are complementary and valid for assessing basic psychomotor skills, and their relevance depends on the skill being evaluated. A larger clinical implementation, combined with quality performance information, will give more insight on the relevance of the results shown in this study.
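
    Several of the reported metrics can be computed directly from sampled instrument-tip positions; the sketch below uses definitions that are common in the motion-analysis literature (the exact definitions applied to the TrEndo data may differ):

    ```python
    # Illustrative implementations of common motion-analysis metrics computed from
    # sampled 3D instrument-tip positions; exact definitions in the study may differ.
    import numpy as np

    def path_length(p: np.ndarray) -> float:
        """Total distance travelled; p has shape (n_samples, 3), in metres."""
        return float(np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1)))

    def depth_path(p: np.ndarray, depth_axis: int = 2) -> float:
        """Distance travelled along the instrument's in/out (depth) axis only."""
        return float(np.sum(np.abs(np.diff(p[:, depth_axis]))))

    def average_speed(p: np.ndarray, dt: float) -> float:
        return path_length(p) / (dt * (len(p) - 1))

    def motion_smoothness(p: np.ndarray, dt: float) -> float:
        """Jerk-based (non-)smoothness: RMS of the third derivative of position."""
        jerk = np.diff(p, n=3, axis=0) / dt**3
        return float(np.sqrt(np.mean(np.sum(jerk**2, axis=1))))

    def idle_time(p: np.ndarray, dt: float, speed_threshold: float = 0.005) -> float:
        """Time spent with tip speed below a threshold (e.g. 5 mm/s)."""
        speed = np.linalg.norm(np.diff(p, axis=0), axis=1) / dt
        return float(np.sum(speed < speed_threshold) * dt)
    ```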

  4. Caring more and knowing more reduces age-related differences in emotion perception.

    PubMed

    Stanley, Jennifer Tehan; Isaacowitz, Derek M

    2015-06-01

    Traditional emotion perception tasks show that older adults are less accurate than are young adults at recognizing facial expressions of emotion. Recently, we proposed that socioemotional factors might explain why older adults seem impaired in lab tasks but less so in everyday life (Isaacowitz & Stanley, 2011). Thus, in the present research we empirically tested whether socioemotional factors such as motivation and familiarity can alter this pattern of age effects. In 1 task, accountability instructions eliminated age differences in the traditional emotion perception task. Using a novel emotion perception paradigm featuring spontaneous dynamic facial expressions of a familiar romantic partner versus a same-age stranger, we found that age differences in emotion perception accuracy were attenuated in the familiar partner condition, relative to the stranger condition. Taken together, the results suggest that both overall accuracy as well as specific patterns of age effects differ appreciably between traditional emotion perception tasks and emotion perception within a socioemotional context. (c) 2015 APA, all rights reserved.

  5. Connecting Athletes’ Self-Perceptions and Metaperceptions of Competence: a Structural Equation Modeling Approach

    PubMed Central

    Cecchini, Jose A.; Fernández-Rio, Javier; Méndez-Giménez, Antonio

    2015-01-01

    This study explored the relationships between athletes' competence self-perceptions and metaperceptions. Two hundred and fifty-one student-athletes (14.26 ± 1.89 years), members of twenty different teams (basketball, soccer), completed a questionnaire that included the Perception of Success Questionnaire, the Competence subscale of the Intrinsic Motivation Inventory, and modified versions of both questionnaires to assess athletes' metaperceptions. Structural equation modelling analysis revealed that athletes' task and ego metaperceptions positively predicted task and ego self-perceptions, respectively. Competence metaperceptions were strong predictors of competence self-perceptions, confirming the atypical metaperception formation in outcome-dependent contexts such as sport. Task and ego metaperceptions positively predicted athletes' competence metaperceptions. How coaches value their athletes' competence is more influential on what the athletes think of themselves than their own self-perceptions. Athletes' ego and task metaperceptions influenced their competence metaperceptions (how coaches rate their competence). Therefore, athletes build their competence metaperceptions using all information available from their coaches. Finally, only task self-perceptions positively predicted athletes' competence self-perceptions. PMID:26240662

  6. Caring More and Knowing More Reduces Age-Related Differences in Emotion Perception

    PubMed Central

    Stanley, Jennifer Tehan; Isaacowitz, Derek M.

    2015-01-01

    Traditional emotion perception tasks show that older adults are less accurate than young adults at recognizing facial expressions of emotion. Recently, we proposed that socioemotional factors might explain why older adults seem impaired in lab tasks but less so in everyday life (Isaacowitz & Stanley, 2011). Thus, in the present research we empirically tested whether socioemotional factors such as motivation and familiarity can alter this pattern of age effects. In one task, accountability instructions eliminated age differences in the traditional emotion perception task. Using a novel emotion perception paradigm featuring spontaneous dynamic facial expressions of a familiar romantic partner versus a same-age stranger, we found that age differences in emotion perception accuracy were attenuated in the familiar partner condition, relative to the stranger condition. Taken together, the results suggest that both overall accuracy as well as specific patterns of age effects differ appreciably between traditional emotion perception tasks and emotion perception within a socioemotional context. PMID:26030775

  7. Lasers' spectral and temporal profile can affect visual glare disability.

    PubMed

    Beer, Jeremy M A; Freeman, David A

    2012-12-01

    Experiments measured the effects of laser glare on visual orientation and motion perception. Laser stimuli were varied according to spectral composition and temporal presentation as subjects identified targets' tilt (Experiment 1) and movement (Experiment 2). The objective was to determine whether the glare parameters would alter visual disruption. Three spectral profiles (monochromatic Green vs. polychromatic White vs. alternating Red-Green) were used to produce a ring of laser glare surrounding a target. Two experiments were performed to measure the minimum contrast required to report target orientation or motion direction. The temporal glare profile was also varied: the ring was illuminated either continuously or discontinuously. Time-averaged luminance of the glare stimuli was matched across all conditions. In both experiments, threshold (ΔL) values were approximately 0.15 log units higher in monochromatic Green than in polychromatic White conditions. In Experiment 2 (motion identification), thresholds were approximately 0.17 log units higher in rapidly flashing (6, 10, or 14 Hz) than in continuous exposure conditions. Monochromatic extended-source laser glare disrupted orientation and motion identification more than polychromatic glare. In the motion task, pulse trains faster than 6 Hz (but below flicker fusion) elevated thresholds more than continuous glare with the same time-averaged luminance. Under these conditions, alternating the wavelength of monochromatic glare over time did not aggravate disability relative to green-only glare. Repetitively flashing monochromatic laser glare induced occasional episodes of impaired motion identification, perhaps resulting from cognitive interference. Interference speckle might play a role in aggravating monochromatic glare effects.

  8. Spatial reference frames of visual, vestibular, and multimodal heading signals in the dorsal subdivision of the medial superior temporal area.

    PubMed

    Fetsch, Christopher R; Wang, Sentao; Gu, Yong; Deangelis, Gregory C; Angelaki, Dora E

    2007-01-17

    Heading perception is a complex task that generally requires the integration of visual and vestibular cues. This sensory integration is complicated by the fact that these two modalities encode motion in distinct spatial reference frames (visual, eye-centered; vestibular, head-centered). Visual and vestibular heading signals converge in the primate dorsal subdivision of the medial superior temporal area (MSTd), a region thought to contribute to heading perception, but the reference frames of these signals remain unknown. We measured the heading tuning of MSTd neurons by presenting optic flow (visual condition), inertial motion (vestibular condition), or a congruent combination of both cues (combined condition). Static eye position was varied from trial to trial to determine the reference frame of tuning (eye-centered, head-centered, or intermediate). We found that tuning for optic flow was predominantly eye-centered, whereas tuning for inertial motion was intermediate but closer to head-centered. Reference frames in the two unimodal conditions were rarely matched in single neurons and uncorrelated across the population. Notably, reference frames in the combined condition varied as a function of the relative strength and spatial congruency of visual and vestibular tuning. This represents the first investigation of spatial reference frames in a naturalistic, multimodal condition in which cues may be integrated to improve perceptual performance. Our results compare favorably with the predictions of a recent neural network model that uses a recurrent architecture to perform optimal cue integration, suggesting that the brain could use a similar computational strategy to integrate sensory signals expressed in distinct frames of reference.

  9. Perception of Elasticity in the Kinetic Illusory Object with Phase Differences in Inducer Motion

    PubMed Central

    Masuda, Tomohiro; Sato, Kazuki; Murakoshi, Takuma; Utsumi, Ken; Kimura, Atsushi; Shirai, Nobu; Kanazawa, So; Yamaguchi, Masami K.; Wada, Yuji

    2013-01-01

    Background It is known that subjective contours are perceived even when a figure involves motion. However, whether this includes the perception of rigidity or deformation of an illusory surface remains unknown. In particular, since most visual stimuli used in previous studies were generated in order to induce illusory rigid objects, the potential perception of material properties such as rigidity or elasticity in these illusory surfaces has not been examined. Here, we elucidate whether the magnitude of phase difference in oscillation influences the visual impressions of an object's elasticity (Experiment 1) and identify whether such elasticity perceptions are accompanied by the shape of the subjective contours, which can be assumed to be strongly correlated with the perception of rigidity (Experiment 2). Methodology/Principal Findings In Experiment 1, the phase differences in the oscillating motion of inducers were controlled to investigate whether they influenced the visual impression of an illusory object's elasticity. The results demonstrated that the impression of the elasticity of an illusory surface with subjective contours was systematically flipped with the degree of phase difference. In Experiment 2, we examined whether the subjective contours of a perceived object appeared linear or curved using multi-dimensional scaling analysis. The results indicated that the contours of a moving illusory object were perceived as more curved than linear in all phase-difference conditions. Conclusions/Significance These findings suggest that the phase difference in an object's motion is a significant factor in the material perception of motion-related elasticity. PMID:24205281

  10. Priming with real motion biases visual cortical response to bistable apparent motion

    PubMed Central

    Zhang, Qing-fang; Wen, Yunqing; Zhang, Deng; She, Liang; Wu, Jian-young; Dan, Yang; Poo, Mu-ming

    2012-01-01

    Apparent motion quartet is an ambiguous stimulus that elicits bistable perception, with the perceived motion alternating between two orthogonal paths. In human psychophysical experiments, the probability of perceiving motion in each path is greatly enhanced by a brief exposure to real motion along that path. To examine the neural mechanism underlying this priming effect, we used voltage-sensitive dye (VSD) imaging to measure the spatiotemporal activity in the primary visual cortex (V1) of awake mice. We found that a brief real motion stimulus transiently biased the cortical response to subsequent apparent motion toward the spatiotemporal pattern representing the real motion. Furthermore, intracellular recording from V1 neurons in anesthetized mice showed a similar increase in subthreshold depolarization in the neurons representing the path of real motion. Such short-term plasticity in early visual circuits may contribute to the priming effect in bistable visual perception. PMID:23188797

  11. Stimulation of PPC Affects the Mapping between Motion and Force Signals for Stiffness Perception But Not Motion Control

    PubMed Central

    Mawase, Firas; Karniel, Amir; Donchin, Opher; Rothwell, John; Nisky, Ilana; Davare, Marco

    2016-01-01

    How motion and sensory inputs are combined to assess an object's stiffness is still unknown. Here, we provide evidence for the existence of a stiffness estimator in the human posterior parietal cortex (PPC). We showed previously that delaying force feedback with respect to motion when interacting with an object caused participants to underestimate its stiffness. We found that applying theta-burst transcranial magnetic stimulation (TMS) over the PPC, but not the dorsal premotor cortex, enhances this effect without affecting movement control. We explain this enhancement as an additional lag in force signals. This is the first causal evidence that the PPC is not only involved in motion control, but also has an important role in perception that is disassociated from action. We provide a computational model suggesting that the PPC integrates position and force signals for perception of stiffness and that TMS alters the synchronization between the two signals causing lasting consequences on perceptual behavior. SIGNIFICANCE STATEMENT When selecting an object such as a ripe fruit or sofa, we need to assess the object's stiffness. Because we lack dedicated stiffness sensors, we rely on an as yet unknown mechanism that generates stiffness percepts by combining position and force signals. Here, we found that the posterior parietal cortex (PPC) contributes to combining position and force signals for stiffness estimation. This finding challenges the classical view about the role of the PPC in regulating position signals only for motion control because we highlight a key role of the PPC in perception that is disassociated from action. Altogether this sheds light on brain mechanisms underlying the interaction between action and perception and may help in the development of better teleoperation systems and rehabilitation of patients with sensory impairments. PMID:27733607

  12. Stimulation of PPC Affects the Mapping between Motion and Force Signals for Stiffness Perception But Not Motion Control.

    PubMed

    Leib, Raz; Mawase, Firas; Karniel, Amir; Donchin, Opher; Rothwell, John; Nisky, Ilana; Davare, Marco

    2016-10-12

    How motion and sensory inputs are combined to assess an object's stiffness is still unknown. Here, we provide evidence for the existence of a stiffness estimator in the human posterior parietal cortex (PPC). We showed previously that delaying force feedback with respect to motion when interacting with an object caused participants to underestimate its stiffness. We found that applying theta-burst transcranial magnetic stimulation (TMS) over the PPC, but not the dorsal premotor cortex, enhances this effect without affecting movement control. We explain this enhancement as an additional lag in force signals. This is the first causal evidence that the PPC is not only involved in motion control, but also has an important role in perception that is disassociated from action. We provide a computational model suggesting that the PPC integrates position and force signals for perception of stiffness and that TMS alters the synchronization between the two signals causing lasting consequences on perceptual behavior. When selecting an object such as a ripe fruit or sofa, we need to assess the object's stiffness. Because we lack dedicated stiffness sensors, we rely on an as yet unknown mechanism that generates stiffness percepts by combining position and force signals. Here, we found that the posterior parietal cortex (PPC) contributes to combining position and force signals for stiffness estimation. This finding challenges the classical view about the role of the PPC in regulating position signals only for motion control because we highlight a key role of the PPC in perception that is disassociated from action. Altogether this sheds light on brain mechanisms underlying the interaction between action and perception and may help in the development of better teleoperation systems and rehabilitation of patients with sensory impairments. Copyright © 2016 Leib et al.

  13. Social coordination in toddler's word learning: interacting systems of perception and action

    NASA Astrophysics Data System (ADS)

    Pereira, Alfredo; Smith, Linda; Yu, Chen

    2008-06-01

    We measured turn-taking in terms of hand and head movements and asked if the global rhythm of the participants' body activity relates to word learning. Six dyads composed of parents and toddlers (M=18 months) interacted in a tabletop task wearing motion-tracking sensors on their hands and head. Parents were instructed to teach the labels of 10 novel objects and the child was later tested on a name-comprehension task. Using dynamic time warping, we compared the motion data of all body-part pairs, within and between partners. For every dyad, we also computed an overall measure of the quality of the interaction that takes into consideration the state of interaction when the parent uttered an object label and the overall smoothness of the turn-taking. The overall interaction quality measure was correlated with the total number of words learned. In particular, head movements were inversely related to the other partner's hand movements, and the degree of bodily coupling of parent and toddler predicted the words that children learned during the interaction. The implications of joint body dynamics for understanding joint coordination of activity in a social interaction, its scaffolding effect on the child's learning and its use in the development of artificial systems are discussed.
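
    Dynamic time warping, used here to compare movement time series across body-part pairs, is a standard dynamic-programming alignment; a textbook sketch (not the authors' implementation) is:

    ```python
    # Textbook dynamic-programming form of dynamic time warping (DTW) between two
    # 1D movement time series; not the authors' implementation.
    import numpy as np

    def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])              # local distance
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return float(cost[n, m])

    # Example: two head-speed traces sampled at different tempos
    print(dtw_distance(np.array([0, 1, 2, 1, 0]), np.array([0, 0, 1, 2, 2, 1, 0])))
    ```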

  14. Patterns of acute whiplash-associated disorder in the Lithuanian population after road traffic accidents.

    PubMed

    Pajediene, Evelina; Janusauskaite, Jolita; Samusyte, Gintaute; Stasaitis, Kestutis; Petrikonis, Kestutis; Bileviciute-Ljungar, Indre

    2015-01-01

    To investigate acute whiplash-associated disorder in the Lithuanian population who are unaware of the phenomenon. Controlled cohort study. Seventy-one patients were enrolled from the emergency departments of the Kaunas region of Lithuania following road traffic accidents, examined within 3-14 days after the accident, and compared with 53 matched controls. Clinical neurological examination, including range of motion and motion-evoked pain or stiffness in the neck; spontaneous pain and pain pressure threshold. Questionnaires: Quebec Task Force questionnaire (QTFQ); Disability Rating Index (DRI); Cognitive Failures Questionnaire (CFQ); Hospital Anxiety and Depression Scale (HADS) and health perception. Sixty-six of 71 (93%) patients developed acute symptoms. The most frequent symptoms found after road traffic accidents were neck or shoulder pain; reduced or painful neck movements, including decreased range of motion; multiple subjective symptoms according to QTFQ and significantly reduced pain threshold. Perceived health status was decreased and DRI was increased, while HADS showed a significantly higher risk of developing anxiety. Higher grade whiplash-associated disorder was linked with a greater reduction in range of motion and more prominent neck pain. Road traffic accidents induce whiplash-associated disorder in patients who seek help, but who are unaware of whiplash-associated disorder as a condition. Whiplash-associated disorder should be considered and treated as an entity per se.

  15. The influence of visual motion on interceptive actions and perception.

    PubMed

    Marinovic, Welber; Plooy, Annaliese M; Arnold, Derek H

    2012-05-01

    Visual information is an essential guide when interacting with moving objects, yet it can also be deceiving. For instance, motion can induce illusory position shifts, such that a moving ball can seem to have bounced past its true point of contact with the ground. Some evidence suggests illusory motion-induced position shifts bias pointing tasks to a greater extent than they do perceptual judgments. This, however, appears at odds with other findings and with our success when intercepting moving objects. Here we examined the accuracy of interceptive movements and of perceptual judgments in relation to simulated bounces. Participants were asked to intercept a moving disc at its bounce location by positioning a virtual paddle, and then to report where the disc had landed. Results showed that interceptive actions were accurate whereas perceptual judgments were inaccurate, biased in the direction of motion. Successful interceptions necessitated accurate information concerning both the location and timing of the bounce, so motor planning evidently had privileged access to an accurate forward model of bounce timing and location. This would explain why people can be accurate when intercepting a moving object, but lack insight into the accurate information that had guided their actions when asked to make a perceptual judgment. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Perception of Motion in Statistically-Defined Displays.

    DTIC Science & Technology

    1988-02-15

    [Fragmentary OCR record; the recoverable text cites bilocal motion-encoding models (Reichardt, 1961; Barlow & Levick, 1963; van Doorn & Koenderink, 1982a, b; van de Grind, Koenderink & van Doorn, 1983). The remainder consists of report-documentation-form fields.]

  17. Mental Rotation Meets the Motion Aftereffect: The Role of hV5/MT+ in Visual Mental Imagery

    ERIC Educational Resources Information Center

    Seurinck, Ruth; de Lange, Floris P.; Achten, Erik; Vingerhoets, Guy

    2011-01-01

    A growing number of studies show that visual mental imagery recruits the same brain areas as visual perception. Although the necessity of hV5/MT+ for motion perception has been revealed by means of TMS, its relevance for motion imagery remains unclear. We induced a direction-selective adaptation in hV5/MT+ by means of an MAE while subjects…

  18. Perception of Motion in Statistically-Defined Displays

    DTIC Science & Technology

    1989-04-15

    psychophysical study before. He was paid $7.50/hour for his participation. Also, to insure high motivation, he received an additional one cent for every...correct response. This was the same motivational device used in the earlier work on motion discrimination (Ball and Sekuler, 1982). The observer...scientists, physiologists, and people interested in computer vision. Finally, one of the main motives for studying motion perception is a desire to

  19. Walking in the uncanny valley: importance of the attractiveness on the acceptance of a robot as a working partner.

    PubMed

    Destephe, Matthieu; Brandao, Martim; Kishi, Tatsuhiro; Zecca, Massimiliano; Hashimoto, Kenji; Takanishi, Atsuo

    2015-01-01

    The Uncanny valley hypothesis, which tells us that almost-human characteristics in a robot or a device could cause uneasiness in human observers, is an important research theme in the Human Robot Interaction (HRI) field. Yet, that phenomenon is still not well-understood. Many have investigated the external design of humanoid robot faces and bodies but only a few studies have focused on the influence of robot movements on our perception and feelings of the Uncanny valley. Moreover, no research has investigated the possible relation between our uneasiness feeling and whether or not we would accept robots having a job in an office, a hospital or elsewhere. To better understand the Uncanny valley, we explore several factors which might have an influence on our perception of robots, be it related to the subjects, such as culture or attitude toward robots, or related to the robot such as emotions and emotional intensity displayed in its motion. We asked 69 subjects (N = 69) to rate the motions of a humanoid robot (Perceived Humanity, Eeriness, and Attractiveness) and state where they would rather see the robot performing a task. Our results suggest that, among the factors we chose to test, the attitude toward robots is the main influence on the perception of the robot related to the Uncanny valley. Robot occupation acceptability was affected only by Attractiveness, mitigating any Uncanny valley effect. We discuss the implications of these findings for the Uncanny valley and the acceptability of a robotic worker in our society.

  20. Walking in the uncanny valley: importance of the attractiveness on the acceptance of a robot as a working partner

    PubMed Central

    Destephe, Matthieu; Brandao, Martim; Kishi, Tatsuhiro; Zecca, Massimiliano; Hashimoto, Kenji; Takanishi, Atsuo

    2015-01-01

    The Uncanny valley hypothesis, which tells us that almost-human characteristics in a robot or a device could cause uneasiness in human observers, is an important research theme in the Human Robot Interaction (HRI) field. Yet, that phenomenon is still not well-understood. Many have investigated the external design of humanoid robot faces and bodies but only a few studies have focused on the influence of robot movements on our perception and feelings of the Uncanny valley. Moreover, no research has investigated the possible relation between our uneasiness feeling and whether or not we would accept robots having a job in an office, a hospital or elsewhere. To better understand the Uncanny valley, we explore several factors which might have an influence on our perception of robots, be it related to the subjects, such as culture or attitude toward robots, or related to the robot such as emotions and emotional intensity displayed in its motion. We asked 69 subjects (N = 69) to rate the motions of a humanoid robot (Perceived Humanity, Eeriness, and Attractiveness) and state where they would rather see the robot performing a task. Our results suggest that, among the factors we chose to test, the attitude toward robots is the main influence on the perception of the robot related to the Uncanny valley. Robot occupation acceptability was affected only by Attractiveness, mitigating any Uncanny valley effect. We discuss the implications of these findings for the Uncanny valley and the acceptability of a robotic worker in our society. PMID:25762967

  1. Development of Visual Motion Perception for Prospective Control: Brain and Behavioral Studies in Infants

    PubMed Central

    Agyei, Seth B.; van der Weel, F. R. (Ruud); van der Meer, Audrey L. H.

    2016-01-01

    During infancy, smart perceptual mechanisms develop allowing infants to judge time-space motion dynamics more efficiently with age and locomotor experience. This emerging capacity may be vital to enable preparedness for upcoming events and to be able to navigate in a changing environment. Little is known about brain changes that support the development of prospective control and about processes, such as preterm birth, that may compromise it. As a function of perception of visual motion, this paper will describe behavioral and brain studies with young infants investigating the development of visual perception for prospective control. By means of the three visual motion paradigms of occlusion, looming, and optic flow, our research shows the importance of including behavioral data when studying the neural correlates of prospective control. PMID:26903908

  2. Effects of Motion Cues on the Training of Multi-Axis Manual Control Skills

    NASA Technical Reports Server (NTRS)

    Zaal, Peter M. T.; Mobertz, Xander R. I.

    2017-01-01

    The study described in this paper investigated the effects of two different hexapod motion configurations on the training and transfer of training of a simultaneous roll and pitch control task. Pilots were divided between two groups that trained either under a baseline hexapod motion condition, with motion typically provided by current training simulators, or an optimized hexapod motion condition, with increased fidelity of the motion cues most relevant for the task. All pilots transferred to the same full-motion condition, representing motion experienced in flight. A cybernetic approach was used that gave insights into the development of pilots' use of visual and motion cues over the course of training and after transfer. Based on the current results, neither of the hexapod motion conditions can unambiguously be chosen as providing the best motion for training and transfer of training of the multi-axis control task used here. However, the optimized hexapod motion condition did allow pilots to generate less visual lead, control with higher gains, and have better disturbance-rejection performance at the end of the training session compared to the baseline hexapod motion condition. Significant adaptations in control behavior still occurred in the transfer phase under the full-motion condition for both groups. Pilots behaved less linearly compared to previous single-axis control-task experiments; however, this did not result in smaller motion or learning effects. Motion and learning effects were more pronounced in pitch compared to roll. Finally, valuable lessons were learned that allow us to improve the adopted approach for future transfer-of-training studies.

  3. Asymmetric vestibular stimulation reveals persistent disruption of motion perception in unilateral vestibular lesions.

    PubMed

    Panichi, R; Faralli, M; Bruni, R; Kiriakarely, A; Occhigrossi, C; Ferraresi, A; Bronstein, A M; Pettorossi, V E

    2017-11-01

    Self-motion perception was studied in patients with unilateral vestibular lesions (UVL) due to acute vestibular neuritis at 1 wk and 4, 8, and 12 mo after the acute episode. We assessed vestibularly mediated self-motion perception by measuring the error in reproducing the position of a remembered visual target at the end of four cycles of asymmetric whole-body rotation. The oscillatory stimulus consisted of a slow (0.09 Hz) and a fast (0.38 Hz) half cycle. A large error was present in UVL patients when the slow half cycle was delivered toward the lesion side, but was minimal toward the healthy side. This asymmetry diminished over time, but it remained abnormally large at 12 mo. In contrast, vestibulo-ocular reflex responses showed a large direction-dependent error only initially, then they normalized. Normalization also occurred for conventional reflex vestibular measures (caloric tests, subjective visual vertical, and head shaking nystagmus) and for perceptual function during symmetric rotation. Vestibular-related handicap, measured with the Dizziness Handicap Inventory (DHI) at 12 mo, correlated with self-motion perception asymmetry but not with abnormalities in vestibulo-ocular function. We conclude that 1) a persistent self-motion perceptual bias is revealed by asymmetric rotation in UVLs despite vestibulo-ocular function becoming symmetric over time, 2) this dissociation is caused by differential perceptual-reflex adaptation to high- and low-frequency rotations when these are combined as with our asymmetric stimulus, 3) the findings imply differential central compensation for vestibuloperceptual and vestibulo-ocular reflex functions, and 4) self-motion perception disruption may mediate long-term vestibular-related handicap in UVL patients. NEW & NOTEWORTHY A novel vestibular stimulus, combining asymmetric slow and fast sinusoidal half cycles, revealed persistent vestibuloperceptual dysfunction in unilateral vestibular lesion (UVL) patients. The compensation of motion perception after UVL was slower than that of the vestibulo-ocular reflex. Perceptual but not vestibulo-ocular reflex deficits correlated with dizziness-related handicap. Copyright © 2017 the American Physiological Society.
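
    The asymmetric stimulus can be pictured as a slow half cycle in one direction followed by a fast half cycle back. The sketch below generates such a velocity profile under the assumption of half-sinusoid half cycles with peak velocities scaled to give equal and opposite displacement; the published stimulus parameters may differ.

    ```python
    # Illustrative sketch of an asymmetric rotation stimulus: a slow half cycle
    # (0.09 Hz) in one direction followed by a fast half cycle (0.38 Hz) back.
    # Half-sinusoid velocity profiles are assumed, with peak velocities scaled so
    # the two half cycles cover equal and opposite displacement; the published
    # stimulus parameters may differ.
    import numpy as np

    fs = 1000.0                       # samples per second
    f_slow, f_fast = 0.09, 0.38       # dominant frequencies of the two half cycles
    T_slow, T_fast = 1 / (2 * f_slow), 1 / (2 * f_fast)
    v_peak_slow = 10.0                # deg/s, arbitrary example
    v_peak_fast = v_peak_slow * T_slow / T_fast   # equal-and-opposite displacement

    t_slow = np.arange(0, T_slow, 1 / fs)
    t_fast = np.arange(0, T_fast, 1 / fs)
    velocity = np.concatenate([
        v_peak_slow * np.sin(np.pi * t_slow / T_slow),    # slow half cycle out
        -v_peak_fast * np.sin(np.pi * t_fast / T_fast),   # fast half cycle back
    ])
    position = np.cumsum(velocity) / fs   # ends near the starting orientation
    ```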

  4. Practice-related improvement in working memory is modulated by changes in processing external interference.

    PubMed

    Berry, Anne S; Zanto, Theodore P; Rutman, Aaron M; Clapp, Wesley C; Gazzaley, Adam

    2009-09-01

    Working memory (WM) performance is impaired by the presence of external interference. Accordingly, more efficient processing of intervening stimuli with practice may lead to enhanced WM performance. To explore the role of practice on the impact that interference has on WM performance, we studied young adults with electroencephalographic (EEG) recordings as they performed three motion-direction, delayed-recognition tasks. One task was presented without interference, whereas two tasks introduced different types of interference during the interval of memory maintenance: distractors and interruptors. Distractors were to be ignored, whereas interruptors demanded attention based on task instructions for a perceptual discrimination. We show that WM performance was disrupted by both types of interference, but interference-induced disruption abated across a single experimental session through rapid learning. WM accuracy and response time improved in a manner that was correlated with changes in early neural measures of interference processing in visual cortex (i.e., P1 suppression and N1 enhancement). These results suggest practice-related changes in processing interference exert a positive influence on WM performance, highlighting the importance of filtering irrelevant information and the dynamic interactions that exist between neural processes of perception, attention, and WM during learning.

  5. The phantom robot - Predictive displays for teleoperation with time delay

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.; Kim, Won S.; Venema, Steven C.

    1990-01-01

    An enhanced teleoperation technique for time-delayed bilateral teleoperator control is discussed. The control technique selected for time delay is based on the use of a high-fidelity graphics phantom robot that is controlled in real time (without time delay) against the static task image. Thus, the motion of the phantom robot image on the monitor predicts the motion of the real robot. The real robot's motion follows the phantom robot's motion on the monitor with the communication time delay inherent in the task. A real-time, high-fidelity graphics simulation of a PUMA arm is generated and overlaid on the actual camera view of the arm. A simple camera calibration technique is used for the calibrated graphics overlay. A preliminary experiment was performed with the predictive display using a very simple tapping task. The results with this simple task indicate that the predictive display significantly enhances the human operator's telemanipulation performance during free motion when there is a long time delay. It appears, however, that either two-view or stereoscopic predictive displays are necessary for general three-dimensional tasks.
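    The core idea, that the operator sees an immediately responding phantom while the real robot receives the same commands only after the communication delay, can be sketched with a simple delay line; the single 1-D joint, fixed time step, and 2-second delay below are illustrative assumptions, not parameters from the PUMA experiment.

      from collections import deque

      DT = 0.1            # control period in seconds (assumed)
      DELAY_STEPS = 20    # 2 s communication delay expressed in control steps (assumed)

      phantom_pos = 0.0   # what the operator sees move immediately
      remote_pos = 0.0    # what the real (delayed) robot has actually done
      in_transit = deque([0.0] * DELAY_STEPS, maxlen=DELAY_STEPS)  # commands on the link

      def step(command_vel):
          """Advance one control cycle: the phantom moves now, the remote moves later."""
          global phantom_pos, remote_pos
          phantom_pos += command_vel * DT     # predictive display updates without delay
          delayed_cmd = in_transit[0]         # command issued DELAY_STEPS cycles ago
          in_transit.append(command_vel)      # maxlen discards the command just applied
          remote_pos += delayed_cmd * DT
          return phantom_pos, remote_pos

      for k in range(25):
          cmd = 1.0 if k < 10 else 0.0        # a brief forward command, then stop
          phantom, remote = step(cmd)
      print(f"phantom position: {phantom:.2f}, remote position (still catching up): {remote:.2f}")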

  6. Developmental study of visual perception of handwriting movement: influence of motor competencies?

    PubMed

    Bidet-Ildei, Christel; Orliaguet, Jean-Pierre

    2008-07-25

    This paper investigates the influence of motor competencies on the visual perception of human movements in 6- to 10-year-old children. To this end, we compared the kinematics of actually performed and perceptually preferred handwriting movements. The two children's tasks were (1) to write the letter e on a digitizer (handwriting task) and (2) to adjust the velocity of an e displayed on a screen so that it corresponded to "their preferred velocity" (perceptive task). In both tasks, the size of the letter (from 3.4 to 54.02 cm) differed on each trial. Results showed that, irrespective of age and task, total movement time conformed to the isochrony principle, i.e., the tendency to keep movement duration constant across changes in amplitude. However, concerning movement speed, there was no developmental correspondence between the results obtained in the motor and the perceptive tasks. In the handwriting task, movement time decreased with age, but no effect of age was observed in the perceptive task. Therefore, perceptual preference for handwriting movement in children could not be strictly interpreted in terms of motor-perceptual coupling.

  7. Global motion perception deficits in autism are reflected as early as primary visual cortex

    PubMed Central

    Thomas, Cibu; Kravitz, Dwight J.; Wallace, Gregory L.; Baron-Cohen, Simon; Martin, Alex; Baker, Chris I.

    2014-01-01

    Individuals with autism are often characterized as ‘seeing the trees, but not the forest’—attuned to individual details in the visual world at the expense of the global percept they compose. Here, we tested the extent to which global processing deficits in autism reflect impairments in (i) primary visual processing; or (ii) decision-formation, using an archetypal example of global perception, coherent motion perception. In an event-related functional MRI experiment, 43 intelligence quotient and age-matched male participants (21 with autism, age range 15–27 years) performed a series of coherent motion perception judgements in which the amount of local motion signals available to be integrated into a global percept was varied by controlling stimulus viewing duration (0.2 or 0.6 s) and the proportion of dots moving in the correct direction (coherence: 4%, 15%, 30%, 50%, or 75%). Both typical participants and those with autism evidenced the same basic pattern of accuracy in judging the direction of motion, with performance decreasing with reduced coherence and shorter viewing durations. Critically, these effects were exaggerated in autism: despite equal performance at the long duration, performance was more strongly reduced by shortening viewing duration in autism (P < 0.015) and decreasing stimulus coherence (P < 0.008). To assess the neural correlates of these effects we focused on the responses of primary visual cortex and the middle temporal area, critical in the early visual processing of motion signals, as well as a region in the intraparietal sulcus thought to be involved in perceptual decision-making. The behavioural results were mirrored in both primary visual cortex and the middle temporal area, with a greater reduction in response at short, compared with long, viewing durations in autism compared with controls (both P < 0.018). In contrast, there was no difference between the groups in the intraparietal sulcus (P > 0.574). These findings suggest that reduced global motion perception in autism is driven by an atypical response early in visual processing and may reflect a fundamental perturbation in neural circuitry. PMID:25060095

  8. Children's Understanding of Large-Scale Mapping Tasks: An Analysis of Talk, Drawings, and Gesture

    ERIC Educational Resources Information Center

    Kotsopoulos, Donna; Cordy, Michelle; Langemeyer, Melanie

    2015-01-01

    This research examined how children represent motion in large-scale mapping tasks that we referred to as "motion maps". The underlying mathematical content was transformational geometry. In total, 19 children aged 8 to 10 years created motion maps and digitally captured their motion maps with accompanying verbal descriptions. Analysis of…

  9. An Analytical Comparison of the Fidelity of "Large Motion" Versus "Small Motion" Flight Simulators in a Rotorcraft Side-Step Task

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1999-01-01

    This paper presents an analytical and experimental methodology for studying flight simulator fidelity. In earlier work, the task was a rotorcraft bob-up/down maneuver in which vertical acceleration constituted the motion cue. The task considered here is a side-step maneuver that differs from the bob-up in one important way: both roll and lateral acceleration cues are available to the pilot. It has been communicated to the author that in some Vertical Motion Simulator (VMS) studies, the lateral acceleration cue has been found to be the most important. It is of some interest to hypothesize how this motion cue, associated with "outer-loop" lateral translation, fits into the modeling procedure in which only "inner-loop" motion cues were considered. This Note is an attempt at formulating such a hypothesis and analytically comparing a large-motion simulator, e.g., the VMS, with a small-motion simulator, e.g., a hexapod.

  10. Auditory Processing in Specific Language Impairment (SLI): Relations With the Perception of Lexical and Phrasal Stress.

    PubMed

    Richards, Susan; Goswami, Usha

    2015-08-01

    We investigated whether impaired acoustic processing is a factor in developmental language disorders. The amplitude envelope of the speech signal is known to be important in language processing. We examined whether impaired perception of amplitude envelope rise time is related to impaired perception of lexical and phrasal stress in children with specific language impairment (SLI). Twenty-two children aged between 8 and 12 years participated in this study. Twelve had SLI; 10 were typically developing controls. All children completed psychoacoustic tasks measuring rise time, intensity, frequency, and duration discrimination. They also completed 2 linguistic stress tasks measuring lexical and phrasal stress perception. The SLI group scored significantly below the typically developing controls on both stress perception tasks. Performance on stress tasks correlated with individual differences in auditory sensitivity. Rise time and frequency thresholds accounted for the most unique variance. Digit Span also contributed to task success for the SLI group. The SLI group had difficulties with both acoustic and stress perception tasks. Our data suggest that poor sensitivity to amplitude rise time and sound frequency significantly contributes to the stress perception skills of children with SLI. Other cognitive factors such as phonological memory are also implicated.

  11. Evaluation of several secondary tasks in the determination of permissible time delays in simulator visual and motion cues

    NASA Technical Reports Server (NTRS)

    Miller, G. K., Jr.; Riley, D. R.

    1978-01-01

    The effect of secondary tasks in determining permissible time delays in visual-motion simulation of a pursuit tracking task was examined. A single subject, a single set of aircraft handling qualities, and a single motion condition were used in tracking a target aircraft that oscillated sinusoidally in altitude. The results indicate that the permissible time delay, in addition to the basic simulator delays, is about 250 msec for either a tapping task, an adding task, or an audio task, which is approximately 125 msec less than when no secondary task is involved. The magnitudes of the primary-task performance measures, however, differed only for the tapping task. A power spectral-density analysis broadly confirmed the results obtained with the root-mean-square performance measures. For all three secondary tasks, the total pilot workload was quite high.
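    A power spectral-density comparison of this kind can be illustrated with the short sketch below, which computes Welch PSDs of tracking error with and without a secondary task and recovers the RMS error from each spectrum; the synthetic error signals and the sampling rate are assumptions for illustration, not data from the experiment.

      import numpy as np
      from scipy.signal import welch

      FS = 20.0                                   # samples per second (assumed)
      rng = np.random.default_rng(0)
      t = np.arange(0.0, 120.0, 1.0 / FS)         # a two-minute tracking run

      def tracking_error(remnant_sd):
          """Synthetic target-following error: sinusoidal component plus pilot remnant."""
          return 0.5 * np.sin(2 * np.pi * 0.25 * t) + remnant_sd * rng.standard_normal(t.size)

      runs = {"no secondary task": tracking_error(0.2),
              "with secondary task": tracking_error(0.4)}

      for label, err in runs.items():
          freq, psd = welch(err, fs=FS, nperseg=512)        # Welch power spectral density
          rms = np.sqrt(np.sum(psd) * (freq[1] - freq[0]))  # RMS error recovered from the PSD
          print(f"{label:>20s}: RMS tracking error = {rms:.3f}")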

  12. Neural dynamics of motion perception: direction fields, apertures, and resonant grouping.

    PubMed

    Grossberg, S; Mingolla, E

    1993-03-01

    A neural network model of global motion segmentation by visual cortex is described. Called the motion boundary contour system (BCS), the model clarifies how ambiguous local movements on a complex moving shape are actively reorganized into a coherent global motion signal. Unlike many previous researchers, we analyze how a coherent motion signal is imparted to all regions of a moving figure, not only to regions at which unambiguous motion signals exist. The model hereby suggests a solution to the global aperture problem. The motion BCS describes how preprocessing of motion signals by a motion oriented contrast (MOC) filter is joined to long-range cooperative grouping mechanisms in a motion cooperative-competitive (MOCC) loop to control phenomena such as motion capture. The motion BCS is computed in parallel with the static BCS of Grossberg and Mingolla (1985a, 1985b, 1987). Homologous properties of the motion BCS and the static BCS, specialized to process motion directions and static orientations, respectively, support a unified explanation of many data about static form perception and motion form perception that have heretofore been unexplained or treated separately. Predictions about microscopic computational differences of the parallel cortical streams V1-->MT and V1-->V2-->MT are made--notably, the magnocellular thick stripe and parvocellular interstripe streams. It is shown how the motion BCS can compute motion directions that may be synthesized from multiple orientations with opposite directions of contrast. Interactions of model simple cells, complex cells, hyper-complex cells, and bipole cells are described, with special emphasis given to new functional roles in direction disambiguation for endstopping at multiple processing stages and to the dynamic interplay of spatially short-range and long-range interactions.

  13. Electromotile hearing: Acoustic tones mask psychophysical response to high-frequency electrical stimulation of intact guinea pig cochleae

    PubMed Central

    Le Prell, Colleen G.; Kawamoto, Kohei; Raphael, Yehoash; Dolan, David F.

    2011-01-01

    When sinusoidal electric stimulation is applied to the intact cochlea, a frequency-specific acoustic emission can be recorded in the ear canal. Acoustic emissions are produced by basilar membrane motion, and have been used to suggest a corresponding acoustic sensation termed “electromotile hearing.” Electromotile hearing has been specifically attributed to electric stimulation of outer hair cells in the intact organ of Corti. To determine the nature of the auditory perception produced by electric stimulation of a cochlea with intact outer hair cells, we tested guinea pigs in a psychophysical task. First, subjects were trained to report detection of sinusoidal acoustic stimuli and dynamic range was assessed using response latency. Subjects were then implanted with a ball electrode placed into scala tympani. Following the surgical implant procedure, subjects were transferred to a task in which acoustic signals were replaced by sinusoidal electric stimulation, and dynamic range was assessed again. Finally, the ability of acoustic pure-tone stimuli to mask the detection of the electric signals was assessed. Based on the masking effects, we conclude that sinusoidal electric stimulation of the intact cochlea results in perception of a tonal (rather than a broad-band or noisy) sound at a frequency of 8 kHz or above. PMID:17225416

  14. Is improved contrast sensitivity a natural consequence of visual training?

    PubMed Central

    Levi, Aaron; Shaked, Danielle; Tadin, Duje; Huxlin, Krystel R.

    2015-01-01

    Many studies have shown that training and testing conditions modulate specificity of visual learning to trained stimuli and tasks. In visually impaired populations, generalizability of visual learning to untrained stimuli/tasks is almost always reported, with contrast sensitivity (CS) featuring prominently among these collaterally-improved functions. To understand factors underlying this difference, we measured CS for direction and orientation discrimination in the visual periphery of three groups of visually-intact subjects. Group 1 trained on an orientation discrimination task with static Gabors whose luminance contrast was decreased as performance improved. Group 2 trained on a global direction discrimination task using high-contrast random dot stimuli previously used to recover motion perception in cortically blind patients. Group 3 underwent no training. Both forms of training improved CS with some degree of specificity for basic attributes of the trained stimulus/task. Group 1's largest enhancement was in CS around the trained spatial/temporal frequencies; similarly, Group 2's largest improvements occurred in CS for discriminating moving and flickering stimuli. Group 3 saw no significant CS changes. These results indicate that CS improvements may be a natural consequence of multiple forms of visual training in visually intact humans, albeit with some specificity to the trained visual domain(s). PMID:26305736

  15. The end-state comfort effect in bimanual grip selection.

    PubMed

    Fischman, Mark G; Stodden, David F; Lehman, Davana M

    2003-03-01

    During a unimanual grip selection task in which people pick up a lightweight dowel and place one end against targets at variable heights, the choice of hand grip (overhand vs. underhand) typically depends on the perception of how comfortable the arm will be at the end of the movement: an end-state comfort effect. The two experiments reported here extend this work to bimanual tasks. In each experiment, 26 right-handed participants used their left and right hands to simultaneously pick up two wooden dowels and place either the right or left end against a series of 14 targets ranging from 14 to 210 cm above the floor. These tasks were performed in systematic ascending and descending orders in Experiment 1 and in random order in Experiment 2. Results were generally consistent with predictions of end-state comfort in that, for the extreme highest and lowest targets, participants tended to select opposite grips with each hand. Taken together, our findings are consistent with the concept of constraint hierarchies within a posture-based motion-planning model.

  16. "Visual" Cortex of Congenitally Blind Adults Responds to Syntactic Movement.

    PubMed

    Lane, Connor; Kanjlia, Shipra; Omaki, Akira; Bedny, Marina

    2015-09-16

    Human cortex comprises specialized networks that support functions such as visual motion perception and language processing. How do genes and experience contribute to this specialization? Studies of plasticity offer unique insights into this question. In congenitally blind individuals, "visual" cortex responds to auditory and tactile stimuli. Remarkably, recent evidence suggests that occipital areas participate in language processing. We asked whether in blindness, occipital cortices: (1) develop domain-specific responses to language and (2) respond to a highly specialized aspect of language, syntactic movement. Nineteen congenitally blind and 18 sighted participants took part in two fMRI experiments. We report that in congenitally blind individuals, but not in sighted controls, "visual" cortex is more active during sentence comprehension than during a sequence memory task with nonwords, or a symbolic math task. This suggests that areas of occipital cortex become selective for language, relative to other similar higher-cognitive tasks. Crucially, we find that these occipital areas respond more to sentences with syntactic movement but do not respond to the difficulty of math equations. We conclude that regions within the visual cortex of blind adults are involved in syntactic processing. Our findings suggest that the cognitive function of human cortical areas is largely determined by input during development. Human cortex is made up of specialized regions that perform different functions, such as visual motion perception and language processing. How do genes and experience contribute to this specialization? Studies of plasticity show that cortical areas can change function from one sensory modality to another. Here we demonstrate that input during development can alter cortical function even more dramatically. In blindness a subset of "visual" areas becomes specialized for language processing. Crucially, we find that the same "visual" areas respond to a highly specialized and uniquely human aspect of language, syntactic movement. These data suggest that human cortex has broad functional capacity during development, and input plays a major role in determining functional specialization. Copyright © 2015 the authors.

  17. Dual-Arm Generalized Compliant Motion With Shared Control

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.

    1994-01-01

    Dual-Arm Generalized Compliant Motion (DAGCM) primitive computer program implementing improved unified control scheme for two manipulator arms cooperating in task in which both grasp same object. Provides capabilities for autonomous, teleoperation, and shared control of two robot arms. Unifies cooperative dual-arm control with multi-sensor-based task control and makes complete task-control capability available to higher-level task-planning computer system via large set of input parameters used to describe desired force and position trajectories followed by manipulator arms. Some concepts discussed in "A Generalized-Compliant-Motion Primitive" (NPO-18134).

  18. Left occipitotemporal cortex contributes to the discrimination of tool-associated hand actions: fMRI and TMS evidence

    PubMed Central

    Perini, Francesca; Caramazza, Alfonso; Peelen, Marius V.

    2014-01-01

    Functional neuroimaging studies have implicated the left lateral occipitotemporal cortex (LOTC) in both tool and hand perception but the functional role of this region is not fully known. Here, by using a task manipulation, we tested whether tool-/hand-selective LOTC contributes to the discrimination of tool-associated hand actions. Participants viewed briefly presented pictures of kitchen and garage tools while they performed one of two tasks: in the action task, they judged whether the tool is associated with a hand rotation action (e.g., screwdriver) or a hand squeeze action (e.g., garlic press), while in the location task they judged whether the tool is typically found in the kitchen (e.g., garlic press) or in the garage (e.g., screwdriver). Both tasks were performed on the same stimulus set and were matched for difficulty. Contrasting fMRI responses between these tasks showed stronger activity during the action task than the location task in both tool- and hand-selective LOTC regions, which closely overlapped. No differences were found in nearby object- and motion-selective control regions. Importantly, these findings were confirmed by a TMS study, which showed that effective TMS over the tool-/hand-selective LOTC region significantly slowed responses for tool action discriminations relative to tool location discriminations, with no such difference during sham TMS. We conclude that left LOTC contributes to the discrimination of tool-associated hand actions. PMID:25140142

  19. Alternatives to lifting concrete masonry blocks onto rebar: biomechanical and perceptual evaluations.

    PubMed

    Hess, J A; Mizner, R L; Kincl, L; Anton, D

    2012-01-01

    This study examined the use of and barriers to H-block and high lift grouting, two alternatives to lifting concrete masonry blocks onto vertical rebar. Peak and cumulative shoulder motions were evaluated, as well as adoption barriers: H-block cost and stakeholder perceptions. Results indicated that using the alternatives significantly decreased peak shoulder flexion (p < 0.001). A case study indicated that building cost was higher with H-block, but the difference was less than 2% of the total cost. Contractors and specifiers reported important differences in perceptions, work norms, and material use and practices. For example, 48% of specifiers reported that use of high lift grouting was the contractor's choice, while 28% of contractors thought it must be specified. Use of H-block or high-lift grouting should be considered as methods to reduce awkward upper extremity postures. Cost and stakeholders' other perceptions present barriers that are important considerations when developing diffusion strategies for these alternatives. This study provides information from several perspectives about ergonomic controls for a high risk bricklaying task, which will benefit occupational safety experts, health professionals and ergonomists. It adds to the understanding of shoulder stresses, material cost and stakeholder perceptions that will contribute to developing effective diffusion strategies.

  20. A tactile display for international space station (ISS) extravehicular activity (EVA).

    PubMed

    Rochlis, J L; Newman, D J

    2000-06-01

    A tactile display to increase an astronaut's situational awareness during an extravehicular activity (EVA) has been developed and ground tested. The Tactor Locator System (TLS) is a non-intrusive, intuitive display capable of conveying position and velocity information via a vibrotactile stimulus applied to the subject's neck and torso. In the Earth's 1 G environment, perception of position and velocity is determined by the body's individual sensory systems. Under normal sensory conditions, redundant information from these sensory systems provides humans with an accurate sense of their position and motion. However, altered environments, including exposure to weightlessness, can lead to conflicting visual and vestibular cues, resulting in decreased situational awareness. The TLS was designed to provide somatosensory cues to complement the visual system during EVA operations. An EVA task was simulated on a computer graphics workstation with a display of the International Space Station (ISS) and a target astronaut at an unknown location. Subjects were required to move about the ISS and acquire the target astronaut using either an auditory cue at the outset, or the TLS. Subjects used a 6 degree of freedom input device to command translational and rotational motion. The TLS was configured to act as a position aid, providing target direction information to the subject through a localized stimulus. Results show that the TLS decreases reaction time (p = 0.001) and movement time (p = 0.001) for simulated subject (astronaut) motion around the ISS. The TLS is a useful aid in increasing an astronaut's situational awareness, and warrants further testing to explore other uses, tasks and configurations.

  1. Visually guided control of movement in the context of multimodal stimulation

    NASA Technical Reports Server (NTRS)

    Riccio, Gary E.

    1991-01-01

    Flight simulation has been almost exclusively concerned with simulating the motions of the aircraft. Physically distinct subsystems are often combined to simulate the varieties of aircraft motion. Visual display systems simulate the motion of the aircraft relative to remote objects and surfaces (e.g., other aircraft and the terrain). 'Motion platform' simulators recreate aircraft motion relative to the gravitoinertial vector (i.e., correlated rotation and tilt as opposed to the 'coordinated turn' in flight). 'Control loaders' attempt to simulate the resistance of the aerodynamic medium to aircraft motion. However, there are few operational systems that attempt to simulate the motion of the pilot relative to the aircraft and the gravitoinertial vector. The design and use of all simulators is limited by poor understanding of postural control in the aircraft and its effect on the perception and control of flight. Analysis of the perception and control of flight (real or simulated) must consider that: (1) the pilot is not rigidly attached to the aircraft; and (2) the pilot actively monitors and adjusts body orientation and configuration in the aircraft. It is argued that this more complete approach to flight simulation requires that multimodal perception be considered as the rule rather than the exception. Moreover, the necessity of multimodal perception is revealed by emphasizing the complementarity rather than the redundancy among perceptual systems. Finally, an outline is presented for an experiment to be conducted at NASA ARC. The experiment explicitly considers possible consequences of coordination between postural and vehicular control.

  2. Effects of translational and rotational motions and display polarity on visual performance.

    PubMed

    Feng, Wen-Yang; Tseng, Feng-Yi; Chao, Chin-Jung; Lin, Chiuhsiang Joe

    2008-10-01

    This study investigated effects of both translational and rotational motion and display polarity on a visual identification task. Three different motion types--heave, roll, and pitch--were compared with the static (no motion) condition. The visual task was presented on two display polarities, black-on-white and white-on-black. The experiment was a 4 (motion conditions) x 2 (display polarities) within-subjects design with eight subjects (six men and two women; M age = 25.6 yr., SD = 3.2). The dependent variables used to assess the performance on the visual task were accuracy and reaction time. Motion environments, especially the roll condition, had statistically significant effects on the decrement of accuracy and reaction time. The display polarity was significant only in the static condition.

  3. Comparison of flying qualities derived from in-flight and ground-based simulators for a jet-transport airplane for the approach and landing pilot tasks

    NASA Technical Reports Server (NTRS)

    Grantham, William D.

    1989-01-01

    The primary objective was to provide information to the flight controls/flying qualities engineer that will assist him in determining the incremental flying qualities and/or pilot-performance differences that may be expected between results obtained via ground-based simulation (and, in particular, the six-degree-of-freedom Langley Visual/Motion Simulator (VMS)) and flight tests. Pilot opinion and performance parameters derived from a ground-based simulator and an in-flight simulator are compared for a jet-transport airplane having 32 different longitudinal dynamic response characteristics. The primary pilot tasks were the approach and landing tasks, with emphasis on the landing-flare task. The results indicate that, in general, flying qualities results obtained from the ground-based simulator may be considered conservative, especially when the pilot task requires tight pilot control as during the landing flare. The one exception to this, according to the present study, was that the pilots were more tolerant of large time delays in the airplane response on the ground-based simulator. The results also indicated that the ground-based simulator (particularly the Langley VMS) is not adequate for assessing pilot/vehicle performance capabilities (i.e., the sink-rate performance for the landing-flare task when the pilot has little depth/height perception from the outside scene presentation).

  4. Normal form from biological motion despite impaired ventral stream function.

    PubMed

    Gilaie-Dotan, S; Bentin, S; Harel, M; Rees, G; Saygin, A P

    2011-04-01

    We explored the extent to which biological motion perception depends on ventral stream integration by studying LG, an unusual case of developmental visual agnosia. LG has significant ventral stream processing deficits but no discernable structural cortical abnormality. LG's intermediate visual areas and object-sensitive regions exhibit abnormal activation during visual object perception, in contrast to area V5/MT+ which responds normally to visual motion (Gilaie-Dotan, Perry, Bonneh, Malach, & Bentin, 2009). Here, in three studies we used point light displays, which require visual integration, in adaptive threshold experiments to examine LG's ability to detect form from biological and non-biological motion cues. LG's ability to detect and discriminate form from biological motion was similar to healthy controls. In contrast, he was significantly deficient in processing form from non-biological motion. Thus, LG can rely on biological motion cues to perceive human forms, but is considerably impaired in extracting form from non-biological motion. Finally, we found that while LG viewed biological motion, activity in a network of brain regions associated with processing biological motion was functionally correlated with his V5/MT+ activity, indicating that normal inputs from V5/MT+ might suffice to activate his action perception system. These results indicate that processing of biologically moving form can dissociate from other form processing in the ventral pathway. Furthermore, the present results indicate that integrative ventral stream processing is necessary for uncompromised processing of non-biological form from motion. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. Premotor cortex is sensitive to auditory-visual congruence for biological motion.

    PubMed

    Wuerger, Sophie M; Parkes, Laura; Lewis, Penelope A; Crocker-Buque, Alex; Rutschmann, Roland; Meyer, Georg F

    2012-03-01

    The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.

  6. Effect of obesity on posture and hip joint moments during a standing task, and trunk forward flexion motion.

    PubMed

    Gilleard, W; Smith, T

    2007-02-01

    Effects of obesity on trunk forward flexion motion in sitting and standing, and postural adaptations and hip joint moment for a standing work task. Cross-sectional comparison of obese and normal weight groups. Ten obese subjects (waist girth 121.2+/-16.8 cm, body mass index (BMI) 38.9+/-6.6 kg m(-2)) and 10 age- and height-matched normal weight subjects (waist girth 79.6+/-6.4 cm, BMI 21.7+/-1.5 kg m(-2)). Trunk motion during seated and standing forward flexion, and trunk posture, hip joint moment and hip-to-bench distance during a simulated standing work task were recorded. Forward flexion motion of the thoracic segment and thoracolumbar spine was decreased for the obese group with no change in pelvic segment and hip joint motion. Obese subjects showed a more flexed trunk posture and increased hip joint moment and hip-to-bench distance for a simulated standing work task. Decreased range of forward flexion motion, differing effects within the trunk, altered posture during a standing work task and concomitant increases in hip joint moment give insight into the aetiology of functional decrements and musculoskeletal pain seen in obesity.

  7. Relationship between Speech Production and Perception in People Who Stutter.

    PubMed

    Lu, Chunming; Long, Yuhang; Zheng, Lifen; Shi, Guang; Liu, Li; Ding, Guosheng; Howell, Peter

    2016-01-01

    Speech production difficulties are apparent in people who stutter (PWS). PWS also have difficulties in speech perception compared to controls. It is unclear whether the speech perception difficulties in PWS are independent of, or related to, their speech production difficulties. To investigate this issue, functional MRI data were collected on 13 PWS and 13 controls whilst the participants performed a speech production task and a speech perception task. PWS performed poorer than controls in the perception task and the poorer performance was associated with a functional activity difference in the left anterior insula (part of the speech motor area) compared to controls. PWS also showed a functional activity difference in this and the surrounding area [left inferior frontal cortex (IFC)/anterior insula] in the production task compared to controls. Conjunction analysis showed that the functional activity differences between PWS and controls in the left IFC/anterior insula coincided across the perception and production tasks. Furthermore, Granger Causality Analysis on the resting-state fMRI data of the participants showed that the causal connection from the left IFC/anterior insula to an area in the left primary auditory cortex (Heschl's gyrus) differed significantly between PWS and controls. The strength of this connection correlated significantly with performance in the perception task. These results suggest that speech perception difficulties in PWS are associated with anomalous functional activity in the speech motor area, and the altered functional connectivity from this area to the auditory area plays a role in the speech perception difficulties of PWS.

  8. Social perception and aging: The relationship between aging and the perception of subtle changes in facial happiness and identity.

    PubMed

    Yang, Tao; Penton, Tegan; Köybaşı, Şerife Leman; Banissy, Michael J

    2017-09-01

    Previous findings suggest that older adults show impairments in the social perception of faces, including the perception of emotion and facial identity. The majority of this work has tended to examine performance on tasks involving young adult faces and prototypical emotions. While useful, this can influence performance differences between groups due to perceptual biases and limitations on task performance. Here we sought to examine how typical aging is associated with the perception of subtle changes in facial happiness and facial identity in older adult faces. We developed novel tasks that permitted the ability to assess facial happiness, facial identity, and non-social perception (object perception) across similar task parameters. We observe that aging is linked with declines in the ability to make fine-grained judgements in the perception of facial happiness and facial identity (from older adult faces), but not for non-social (object) perception. This pattern of results is discussed in relation to mechanisms that may contribute to declines in facial perceptual processing in older adulthood. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  9. Man-in-the-loop study of filtering in airborne head tracking tasks

    NASA Technical Reports Server (NTRS)

    Lifshitz, S.; Merhav, S. J.

    1992-01-01

    A human-factors study is conducted of problems due to vibrations during the use of a helmet-mounted display (HMD) in tracking tasks whose major factors are target motion and head vibration. A method is proposed for improving aiming accuracy in such tracking tasks on the basis of (1) head-motion measurement and (2) the shifting of the reticle in the HMD in ways that inhibit much of the involuntary apparent motion of the reticle, relative to the target, and the nonvoluntary motion of the teleoperated device. The HMD inherently furnishes the visual feedback required by this scheme.

  10. Modeling of video compression effects on target acquisition performance

    NASA Astrophysics Data System (ADS)

    Cha, Jae H.; Preece, Bradley; Espinola, Richard L.

    2009-05-01

    The effect of video compression on image quality was investigated from the perspective of target acquisition performance modeling. Human perception tests were conducted recently at the U.S. Army RDECOM CERDEC NVESD, measuring identification (ID) performance on simulated military vehicle targets at various ranges. These videos were compressed with different quality and/or quantization levels utilizing motion JPEG, motion JPEG2000, and MPEG-4 encoding. To model the degradation in task performance, the loss in image quality is fit to an equivalent Gaussian MTF scaled by the Structural Similarity Image Metric (SSIM). Residual compression artifacts are treated as 3-D spatio-temporal noise. This 3-D noise is found by taking the difference of the uncompressed frame, with the estimated equivalent blur applied, and the corresponding compressed frame. Results show good agreement between the experimental data and the model prediction. This method has led to a predictive performance model for video compression by correlating various compression levels to particular blur and noise input parameters for the NVESD target acquisition performance model suite.
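    The degradation model described above can be sketched roughly as follows: the SSIM between an uncompressed and a compressed frame scales an equivalent Gaussian blur, and whatever the blur does not explain is treated as additive spatio-temporal noise. The synthetic frames and the SSIM-to-sigma mapping below are illustrative assumptions, not the calibrated NVESD model.

      import numpy as np
      from scipy.ndimage import gaussian_filter
      from skimage.metrics import structural_similarity

      rng = np.random.default_rng(1)
      clean = rng.random((128, 128))                        # stand-in uncompressed frame
      compressed = gaussian_filter(clean, sigma=1.2) \
          + 0.02 * rng.standard_normal(clean.shape)         # stand-in compressed frame

      # 1) Summarize the image-quality loss with SSIM.
      ssim = structural_similarity(clean, compressed, data_range=1.0)

      # 2) Pick an equivalent Gaussian blur whose width grows as SSIM drops
      #    (this linear mapping is an assumption, not the calibrated fit).
      sigma_eq = 2.0 * (1.0 - ssim)
      blurred_clean = gaussian_filter(clean, sigma=sigma_eq)

      # 3) Treat what the blur does not explain as noise: the difference between the
      #    blur-matched uncompressed frame and the compressed frame. Stacking these
      #    residuals over frames gives the 3-D spatio-temporal noise term.
      residual_noise = blurred_clean - compressed
      print(f"SSIM = {ssim:.3f}, equivalent blur sigma = {sigma_eq:.2f}, "
            f"residual noise RMS = {residual_noise.std():.3f}")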

  11. Multi-arm multilateral haptics-based immersive tele-robotic system (HITS) for improvised explosive device disposal

    NASA Astrophysics Data System (ADS)

    Erickson, David; Lacheray, Hervé; Lai, Gilbert; Haddadi, Amir

    2014-06-01

    This paper presents the latest advancements of the Haptics-based Immersive Tele-robotic System (HITS) project, a next generation Improvised Explosive Device (IED) disposal (IEDD) robotic interface containing an immersive telepresence environment for a remotely-controlled three-articulated-robotic-arm system. While the haptic feedback enhances the operator's perception of the remote environment, a third teleoperated dexterous arm, equipped with multiple vision sensors and cameras, provides stereo vision with proper visual cues, and a 3D photo-realistic model of the potential IED. This decentralized system combines various capabilities including stable and scaled motion, singularity avoidance, cross-coupled hybrid control, active collision detection and avoidance, compliance control and constrained motion to provide a safe and intuitive control environment for the operators. Experimental results and validation of the current system are presented through various essential IEDD tasks. This project demonstrates that a two-armed anthropomorphic Explosive Ordnance Disposal (EOD) robot interface can achieve complex neutralization techniques against realistic IEDs without the operator approaching at any time.

  12. Modification of Motion Perception and Manual Control Following Short-Duration Spaceflight

    NASA Technical Reports Server (NTRS)

    Wood, S. J.; Vanya, R. D.; Esteves, J. T.; Rupert, A. H.; Clement, G.

    2011-01-01

    Adaptive changes during space flight in how the brain integrates vestibular cues with other sensory information can lead to impaired movement coordination and spatial disorientation following G-transitions. This ESA-NASA study was designed to examine both the physiological basis and operational implications for disorientation and tilt-translation disturbances following short-duration spaceflights. The goals of this study were to (1) examine the effects of stimulus frequency on adaptive changes in motion perception during passive tilt and translation motion, (2) quantify decrements in manual control of tilt motion, and (3) evaluate vibrotactile feedback as a sensorimotor countermeasure.

  13. Stereoscopic advantages for vection induced by radial, circular, and spiral optic flows.

    PubMed

    Palmisano, Stephen; Summersby, Stephanie; Davies, Rodney G; Kim, Juno

    2016-11-01

    Although observer motions project different patterns of optic flow to our left and right eyes, there has been surprisingly little research into potential stereoscopic contributions to self-motion perception. This study investigated whether visually induced illusory self-motion (i.e., vection) is influenced by the addition of consistent stereoscopic information to radial, circular, and spiral (i.e., combined radial + circular) patterns of optic flow. Stereoscopic vection advantages were found for radial and spiral (but not circular) flows when monocular motion signals were strong. Under these conditions, stereoscopic benefits were greater for spiral flow than for radial flow. These effects can be explained by differences in the motion aftereffects generated by these displays, which suggest that the circular motion component in spiral flow selectively reduced adaptation to stereoscopic motion-in-depth. Stereoscopic vection advantages were not observed for circular flow when monocular motion signals were strong, but emerged when monocular motion signals were weakened. These findings show that stereoscopic information can contribute to visual self-motion perception in multiple ways.

  14. Sensitivity to Spatiotemporal Percepts Predicts the Perception of Emotion

    PubMed Central

    Castro, Vanessa L.; Boone, R. Thomas

    2015-01-01

    The present studies examined how sensitivity to spatiotemporal percepts such as rhythm, angularity, configuration, and force predicts accuracy in perceiving emotion. In Study 1, participants (N = 99) completed a nonverbal test battery consisting of three nonverbal emotion perception tests and two perceptual sensitivity tasks assessing rhythm sensitivity and angularity sensitivity. Study 2 (N = 101) extended the findings of Study 1 with the addition of a fourth nonverbal test, a third configural sensitivity task, and a fourth force sensitivity task. Regression analyses across both studies revealed partial support for the association between perceptual sensitivity to spatiotemporal percepts and greater emotion perception accuracy. Results indicate that accuracy in perceiving emotions may be predicted by sensitivity to specific percepts embedded within channel- and emotion-specific displays. The significance of such research lies in the understanding of how individuals acquire emotion perception skill and the processes by which distinct features of percepts are related to the perception of emotion. PMID:26339111

  15. Motion generation of robotic surgical tasks: learning from expert demonstrations.

    PubMed

    Reiley, Carol E; Plaku, Erion; Hager, Gregory D

    2010-01-01

    Robotic surgical assistants offer the possibility of automating portions of a task that are time consuming and tedious in order to reduce the cognitive workload of a surgeon. This paper proposes using programming by demonstration to build generative models and generate smooth trajectories that capture the underlying structure of the motion data recorded from expert demonstrations. Specifically, motion data from Intuitive Surgical's da Vinci Surgical System were recorded while a panel of expert surgeons performed three surgical tasks. The trials are decomposed into subtasks, or surgemes, which are then temporally aligned through dynamic time warping. Next, a Gaussian Mixture Model (GMM) encodes the experts' underlying motion structure. Gaussian Mixture Regression (GMR) is then used to extract a smooth reference trajectory for reproducing the task. The approach is evaluated through an automated skill assessment measurement. The results suggest that this approach provides a means to (i) extract important features of the task, (ii) create a metric to evaluate robot imitative performance, and (iii) generate smoother trajectories for reproduction of three common medical tasks.
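    The GMM/GMR step can be sketched as below: a Gaussian Mixture Model is fit to time-stamped tool positions pooled from several demonstrations, and Gaussian Mixture Regression then yields a smooth reference trajectory x(t). The toy 1-D demonstrations and component count are illustrative assumptions; the paper works with full da Vinci kinematics.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(2)
      t = np.linspace(0.0, 1.0, 100)
      demos = [np.sin(np.pi * t) + 0.05 * rng.standard_normal(t.size) for _ in range(5)]
      # Training data: one row per sample, columns are [time, tool position].
      data = np.column_stack([np.tile(t, len(demos)), np.concatenate(demos)])

      gmm = GaussianMixture(n_components=6, covariance_type="full", random_state=0).fit(data)

      def gmr(query_times):
          """Gaussian Mixture Regression: conditional mean E[position | time]."""
          out = np.zeros_like(query_times)
          for i, tq in enumerate(query_times):
              # Responsibility of each component for this time value.
              h = np.array([w * np.exp(-0.5 * (tq - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
                            for w, m, c in zip(gmm.weights_, gmm.means_, gmm.covariances_)])
              h /= h.sum()
              # Blend the component-wise conditional means.
              cond = [m[1] + c[1, 0] / c[0, 0] * (tq - m[0])
                      for m, c in zip(gmm.means_, gmm.covariances_)]
              out[i] = np.dot(h, cond)
          return out

      reference = gmr(t)    # smooth reference trajectory for the robot to reproduce
      print(f"reference trajectory spans {reference.min():.2f} to {reference.max():.2f}")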

  16. Applications of computer-graphics animation for motion-perception research

    NASA Technical Reports Server (NTRS)

    Proffitt, D. R.; Kaiser, M. K.

    1986-01-01

    The advantages and limitations of using computer-animated stimuli in studying motion perception are presented and discussed. Most current programs of motion perception research could not be pursued without the use of computer graphics animation. Computer-generated displays afford latitudes of freedom and control that are almost impossible to attain through conventional methods. There are, however, limitations to this presentational medium. At present, computer-generated displays present simplified approximations of the dynamics in natural events. Very little is known about how the differences between natural events and computer simulations influence perceptual processing. In practice, the differences are assumed to be irrelevant to the questions under study, and findings with computer-generated stimuli are assumed to generalize to natural events.

  17. Impaired Activation of Visual Attention Network for Motion Salience Is Accompanied by Reduced Functional Connectivity between Frontal Eye Fields and Visual Cortex in Strabismic Amblyopia

    PubMed Central

    Wang, Hao; Crewther, Sheila G.; Liang, Minglong; Laycock, Robin; Yu, Tao; Alexander, Bonnie; Crewther, David P.; Wang, Jian; Yin, Zhengqin

    2017-01-01

    Strabismic amblyopia is now acknowledged to be more than a simple loss of acuity and to involve alterations in visually driven attention, though whether this applies to both stimulus-driven and goal-directed attention has not been explored. Hence we investigated monocular threshold performance during a motion salience-driven attention task involving detection of a coherent dot motion target in one of four quadrants in adult controls and those with strabismic amblyopia. Psychophysical motion thresholds were impaired for the strabismic amblyopic eye, requiring longer inspection time and consequently slower target speed for detection compared to the fellow eye or control eyes. We compared fMRI activation and functional connectivity between four ROIs of the occipital-parieto-frontal visual attention network [primary visual cortex (V1), motion sensitive area V5, intraparietal sulcus (IPS) and frontal eye fields (FEF)], during a suprathreshold version of the motion-driven attention task, and also a simple goal-directed task, requiring voluntary saccades to targets randomly appearing along a horizontal line. Activation was compared when viewed monocularly by controls and the amblyopic and its fellow eye in strabismics. BOLD activation was weaker in IPS, FEF and V5 for both tasks when viewing through the amblyopic eye compared to viewing through the fellow eye or control participants' non-dominant eye. No difference in V1 activation was seen between the amblyopic and fellow eye, nor between the two eyes of control participants during the motion salience task, though V1 activation was significantly less through the amblyopic eye than through the fellow eye and control group non-dominant eye viewing during the voluntary saccade task. Functional correlations of ROIs within the attention network were impaired through the amblyopic eye during the motion salience task, whereas this was not the case during the voluntary saccade task. Specifically, FEF showed reduced functional connectivity with visual cortical nodes during the motion salience task through the amblyopic eye, despite suprathreshold detection performance. This suggests that the reduced ability of the amblyopic eye to activate the frontal components of the attention networks may help explain the aberrant control of visual attention and eye movements in amblyopes. PMID:28484381

  18. Global form and motion processing in healthy ageing.

    PubMed

    Agnew, Hannah C; Phillips, Louise H; Pilz, Karin S

    2016-05-01

    The ability to perceive biological motion has been shown to deteriorate with age, and it is assumed that older adults rely more on global form than local motion information when processing point-light walkers. Further, it has been suggested that biological motion processing in ageing is related to a form-based global processing bias. Here, we investigated the relationship between older adults' preference for form information when processing point-light actions and an age-related form-based global processing bias. In a first task, we asked older (>60 years) and younger adults (19-23 years) to sequentially match three different point-light actions: normal actions that contained local motion and global form information, scrambled actions that contained primarily local motion information, and random-position actions that contained primarily global form information. Both age groups overall performed above chance in all three conditions and were more accurate for actions that contained global form information. For random-position actions, older adults were less accurate than younger adults, but there was no age difference for normal or scrambled actions. These results indicate that both age groups rely more on global form than local motion to match point-light actions, but can use local motion on its own to match point-light actions. In a second task, we investigated form-based global processing biases using the Navon task. In general, participants were better at discriminating the local letters but faster at discriminating global letters. Correlations showed that there was no significant linear relationship between performance in the Navon task and biological motion processing, which suggests that processing biases in form- and motion-based tasks are unrelated. Copyright © 2016. Published by Elsevier B.V.

  19. Neurophysiological and Behavioural Correlates of Coherent Motion Perception in Dyslexia

    ERIC Educational Resources Information Center

    Taroyan, Naira A.; Nicolson, Roderick I.; Buckley, David

    2011-01-01

    Coherent motion perception was tested in nine adolescents with dyslexia and 10 control participants matched for age and IQ using low contrast stimuli with three levels of coherence (10%, 25% and 40%). Event-related potentials (ERPs) and behavioural performance data were obtained. No significant between-group differences were found in performance…

  20. Division of Labor in Two-Earner Homes: Task Accomplishment versus Household Management as Critical Variables in Perceptions about Family Work.

    ERIC Educational Resources Information Center

    Mederer, Helen J.

    1993-01-01

    Data from 359 married, full-time employed women tested extent to which allocation of tasks and allocation of household management predict perceptions of fairness and conflict. Task and management allocation contributed independently and differently to perceptions of fairness and conflict about housework allocation. Unfairness was predicted by both…

  1. Learners' Perceptions of the Benefits of Voice Tool-Based Tasks on Their Spoken Performance

    ERIC Educational Resources Information Center

    Wilches, Astrid

    2014-01-01

    The purpose of this study is to investigate learners' perceptions of the benefits of tasks using voice tools to reinforce their oral skills. Additionally, this study seeks to determine what aspects of task design affected the students' perceptions. Beginner learners aged 18 to 36 with little or no experience in the use of technological tools for…

  2. Nonlinear analysis of saccade speed fluctuations during combined action and perception tasks

    PubMed Central

    Stan, C.; Astefanoaei, C.; Pretegiani, E.; Optican, L.; Creanga, D.; Rufa, A.; Cristescu, C.P.

    2014-01-01

    Background: Saccades are rapid eye movements used to gather information about a scene which requires both action and perception. These are usually studied separately, so that how perception influences action is not well understood. In a dual task, where the subject looks at a target and reports a decision, subtle changes in the saccades might be caused by action-perception interactions. Studying saccades might provide insight into how brain pathways for action and for perception interact. New method: We applied two complementary methods, multifractal detrended fluctuation analysis and Lempel-Ziv complexity index to eye peak speed recorded in two experiments, a pure action task and a combined action-perception task. Results: Multifractality strength is significantly different in the two experiments, showing smaller values for dual decision task saccades compared to simple-task saccades. The normalized Lempel-Ziv complexity index behaves similarly i.e. is significantly smaller in the decision saccade task than in the simple task. Comparison with existing methods: Compared to the usual statistical and linear approaches, these analyses emphasize the character of the dynamics involved in the fluctuations and offer a sensitive tool for quantitative evaluation of the multifractal features and of the complexity measure in the saccades peak speeds when different brain circuits are involved. Conclusion: Our results prove that the peak speed fluctuations have multifractal characteristics with lower magnitude for the multifractality strength and for the complexity index when two neural pathways are simultaneously activated, demonstrating the nonlinear interaction in the brain pathways for action and perception. PMID:24854830
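    A normalized Lempel-Ziv complexity index of the kind applied to the peak-speed series can be sketched as below: the series is binarized around its median, LZ76 phrases are counted, and the count is normalized by the random-sequence bound. The synthetic peak-speed series is an illustrative assumption, not the recorded saccade data.

      import numpy as np

      def lz76_complexity(bits):
          """Count the number of new phrases in a binary string (LZ76 parsing)."""
          s = "".join(map(str, bits))
          i, phrases = 0, 0
          while i < len(s):
              length = 1
              # Grow the candidate phrase until it has not appeared earlier.
              while i + length <= len(s) and s[i:i + length] in s[:i + length - 1]:
                  length += 1
              phrases += 1
              i += length
          return phrases

      def normalized_lz(series):
          """Binarize around the median and normalize by the random-sequence bound."""
          bits = (np.asarray(series) > np.median(series)).astype(int)
          n = len(bits)
          return lz76_complexity(bits) * np.log2(n) / n

      rng = np.random.default_rng(3)
      peak_speeds = 400 + 50 * rng.standard_normal(500)   # deg/s, synthetic series
      print(f"normalized LZ complexity: {normalized_lz(peak_speeds):.3f}")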

  3. Global Ground Motion Prediction Equations Program

    Science.gov Websites

    Website for PEER's Global Ground Motion Prediction Equations (GMPEs) project. Listed project tasks include compiling and critically reviewing GMPEs, selecting or deriving a global set of GMPEs, designing the specifications for a global database of soil classification, and building that database; an update on the project from a recent workshop in Turkey was posted on June 11, 2012.

  4. Effects of changes in size, speed and distance on the perception of curved 3D trajectories

    PubMed Central

    Zhang, Junjun; Braunstein, Myron L.; Andersen, George J.

    2012-01-01

    Previous research on the perception of 3D object motion has considered time to collision, time to passage, collision detection and judgments of speed and direction of motion, but has not directly studied the perception of the overall shape of the motion path. We examined the perception of the magnitude of curvature and sign of curvature of the motion path for objects moving at eye level in a horizontal plane parallel to the line of sight. We considered two sources of information for the perception of motion trajectories: changes in angular size and changes in angular speed. Three experiments examined judgments of relative curvature for objects moving at different distances. At the closest distance studied, accuracy was high with size information alone but near chance with speed information alone. At the greatest distance, accuracy with size information alone decreased sharply but accuracy for displays with both size and speed information remained high. We found similar results in two experiments with judgments of sign of curvature. Accuracy was higher for displays with both size and speed information than with size information alone, even when the speed information was based on parallel projections and was not informative about sign of curvature. For both magnitude of curvature and sign of curvature judgments, information indicating that the trajectory was curved increased accuracy, even when this information was not directly relevant to the required judgment. PMID:23007204
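
    The two information sources examined in this abstract (changes in angular size and changes in angular speed) can be made concrete with a little projective geometry. The sketch below, a simplified reconstruction rather than the authors' stimulus code, computes the angular size and angular speed of an object of physical radius r moving along a circular arc in a horizontal plane at eye level, seen from a viewer at the origin; all parameter values are illustrative.

        import numpy as np

        def angular_profile(radius_m, path_curvature, speed_mps, start_xy, heading, t):
            """Angular size (rad) and angular speed (rad/s) of an object on a
            circular-arc trajectory in a horizontal plane, seen from the origin."""
            k = path_curvature                      # k = 0 -> straight path
            s = speed_mps * t                       # arc length travelled
            if abs(k) < 1e-9:
                dx, dy = s * np.cos(heading), s * np.sin(heading)
            else:
                dx = (np.sin(heading + k * s) - np.sin(heading)) / k
                dy = (np.cos(heading) - np.cos(heading + k * s)) / k
            x, y = start_xy[0] + dx, start_xy[1] + dy
            dist = np.hypot(x, y)
            ang_size = 2 * np.arctan(radius_m / dist)   # visual angle subtended
            direction = np.arctan2(y, x)                # bearing from the viewer
            ang_speed = np.gradient(direction, t)       # rate of change of bearing
            return ang_size, ang_speed

        t = np.linspace(0.0, 2.0, 200)
        size, speed = angular_profile(0.05, 0.25, 1.0, (0.0, 4.0), 0.0, t)
        print(size[:3], speed[:3])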

  5. Motion Perception and Driving: Predicting Performance Through Testing and Shortening Braking Reaction Times Through Training

    DTIC Science & Technology

    2013-12-01

    …brake reaction time on the EB test from pre- to post-test, while there was no significant change for the control group: t(38) = 2.24, p = 0.03. Tests of 3D motion… 0.61). In experiment 2, the motion perception training group had a significant decrease in brake reaction time on the EB test from pre- to… The experiment was divided into 8 phases: a pretest, six training blocks (once per week), and a posttest. Participants were allocated…

  6. Signed language and human action processing: evidence for functional constraints on the human mirror-neuron system.

    PubMed

    Corina, David P; Knapp, Heather Patterson

    2008-12-01

    In the quest to further understand the neural underpinning of human communication, researchers have turned to studies of naturally occurring signed languages used in Deaf communities. The comparison of the commonalities and differences between spoken and signed languages provides an opportunity to determine core neural systems responsible for linguistic communication independent of the modality in which a language is expressed. The present article examines such studies, and in addition asks what we can learn about human languages by contrasting formal visual-gestural linguistic systems (signed languages) with more general human action perception. To understand visual language perception, it is important to distinguish the demands of general human motion processing from the highly task-dependent demands associated with extracting linguistic meaning from arbitrary, conventionalized gestures. This endeavor is particularly important because theorists have suggested close homologies between perception and production of actions and functions of human language and social communication. We review recent behavioral, functional imaging, and neuropsychological studies that explore dissociations between the processing of human actions and signed languages. These data suggest incomplete overlap between the mirror-neuron systems proposed to mediate human action and language.

  7. Effect of Speed of Processing Training on Older Driver Screening Measures.

    PubMed

    Eramudugolla, Ranmalee; Kiely, Kim M; Chopra, Sidhant; Anstey, Kaarin J

    2017-01-01

    Objective: Computerized training for cognitive enhancement is of great public interest; however, there is inconsistent evidence for the transfer of training gains to everyday activity. Several large trials have focused on speed of processing (SOP) training, with some promising findings for long-term effects on daily activity but no immediate transfer to other cognitive tests. Here, we examine the transfer of SOP training gains to cognitive measures that are known predictors of driving safety in older adults. Methods: Fifty-three adults aged 65-87 years who were current drivers participated in a two-group non-randomized design with repeated measures and a no-contact matched control group. The intervention group completed an average of 7.9 (SD = 3.0) hours of self-administered online SOP training at home. The control group was matched on age, gender and test-retest interval. Measures included the Useful Field of View (UFOV) test, a Hazard Perception test, choice reaction time (Cars RT), Trail Making Test B, a Maze test, visual motion threshold, as well as road craft and road knowledge tests. Results: Speed of processing training resulted in significant improvement in processing speed on the UFOV test relative to controls, with an average change of -45.8 ms (SE = 14.5) and an effect size of ω2 = 0.21. Performance on the Maze test also improved, but significant slowing on the Hazard Perception test was observed after SOP training. Training effects on the UFOV task were associated with similar effects on the Cars RT, but not the Hazard Perception and Maze tests, suggesting transfer to some but not all driving-related measures. There were no effects of training on any of the other measures examined. Conclusion: Speed of processing training effects on the UFOV task can be achieved with self-administered, online training at home, with some transfer to other cognitive tests. However, differential effects of training may be observed for tasks requiring goal-directed search strategies rather than diffuse attention.

  8. Investigating speech perception in children with dyslexia: is there evidence of a consistent deficit in individuals?

    PubMed Central

    Messaoud-Galusi, Souhila; Hazan, Valerie; Rosen, Stuart

    2012-01-01

    Purpose The claim that speech perception abilities are impaired in dyslexia was investigated in a group of 62 dyslexic children and 51 average readers matched in age. Method To test whether there was robust evidence of speech perception deficits in children with dyslexia, speech perception in noise and quiet was measured using eight different tasks involving the identification and discrimination of a complex and highly natural synthetic ‘pea’-‘bee’ contrast (copy synthesised from natural models) and the perception of naturally-produced words. Results Children with dyslexia, on average, performed more poorly than average readers in the synthetic syllables identification task in quiet and in across-category discrimination (but not when tested using an adaptive procedure). They did not differ from average readers on two tasks of word recognition in noise or identification of synthetic syllables in noise. For all tasks, a majority of individual children with dyslexia performed within norms. Finally, speech perception generally did not correlate with pseudo-word reading or phonological processing, the core skills related to dyslexia. Conclusions On the tasks and speech stimuli we used, most children with dyslexia do not appear to show a consistent deficit in speech perception. PMID:21930615

  9. Global motion perception deficits in autism are reflected as early as primary visual cortex.

    PubMed

    Robertson, Caroline E; Thomas, Cibu; Kravitz, Dwight J; Wallace, Gregory L; Baron-Cohen, Simon; Martin, Alex; Baker, Chris I

    2014-09-01

    Individuals with autism are often characterized as 'seeing the trees, but not the forest'-attuned to individual details in the visual world at the expense of the global percept they compose. Here, we tested the extent to which global processing deficits in autism reflect impairments in (i) primary visual processing; or (ii) decision-formation, using an archetypal example of global perception, coherent motion perception. In an event-related functional MRI experiment, 43 intelligence quotient and age-matched male participants (21 with autism, age range 15-27 years) performed a series of coherent motion perception judgements in which the amount of local motion signals available to be integrated into a global percept was varied by controlling stimulus viewing duration (0.2 or 0.6 s) and the proportion of dots moving in the correct direction (coherence: 4%, 15%, 30%, 50%, or 75%). Both typical participants and those with autism evidenced the same basic pattern of accuracy in judging the direction of motion, with performance decreasing with reduced coherence and shorter viewing durations. Critically, these effects were exaggerated in autism: despite equal performance at the long duration, performance was more strongly reduced by shortening viewing duration in autism (P < 0.015) and decreasing stimulus coherence (P < 0.008). To assess the neural correlates of these effects we focused on the responses of primary visual cortex and the middle temporal area, critical in the early visual processing of motion signals, as well as a region in the intraparietal sulcus thought to be involved in perceptual decision-making. The behavioural results were mirrored in both primary visual cortex and the middle temporal area, with a greater reduction in response at short, compared with long, viewing durations in autism compared with controls (both P < 0.018). In contrast, there was no difference between the groups in the intraparietal sulcus (P > 0.574). These findings suggest that reduced global motion perception in autism is driven by an atypical response early in visual processing and may reflect a fundamental perturbation in neural circuitry. © The Author (2014). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
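
    Coherence manipulations like the 4-75% levels described in this abstract are typically implemented with a random-dot kinematogram in which a 'signal' fraction of dots steps in a common direction each frame while the remainder step in random directions. The following is a minimal sketch of such a stimulus update, with illustrative parameter values; it is not the authors' stimulus code.

        import numpy as np

        def rdk_update(xy, coherence, direction_rad, step=0.02, rng=None):
            """Advance random-dot positions by one frame.

            A random `coherence` fraction of dots moves in `direction_rad`;
            the rest move in independent random directions. Dots live in the
            unit square and wrap around at the edges."""
            rng = rng or np.random.default_rng()
            n = len(xy)
            signal = rng.random(n) < coherence
            angles = np.where(signal,
                              direction_rad,
                              rng.uniform(0.0, 2.0 * np.pi, n))
            xy = xy + step * np.column_stack((np.cos(angles), np.sin(angles)))
            return np.mod(xy, 1.0)                 # wrap to keep dots on screen

        rng = np.random.default_rng(1)
        dots = rng.random((200, 2))                # 200 dots in a unit square
        for _ in range(30):                        # 30 frames at 15% coherence
            dots = rdk_update(dots, coherence=0.15, direction_rad=0.0, rng=rng)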

  10. Relationship between Speech Production and Perception in People Who Stutter

    PubMed Central

    Lu, Chunming; Long, Yuhang; Zheng, Lifen; Shi, Guang; Liu, Li; Ding, Guosheng; Howell, Peter

    2016-01-01

    Speech production difficulties are apparent in people who stutter (PWS). PWS also have difficulties in speech perception compared to controls. It is unclear whether the speech perception difficulties in PWS are independent of, or related to, their speech production difficulties. To investigate this issue, functional MRI data were collected on 13 PWS and 13 controls whilst the participants performed a speech production task and a speech perception task. PWS performed poorer than controls in the perception task and the poorer performance was associated with a functional activity difference in the left anterior insula (part of the speech motor area) compared to controls. PWS also showed a functional activity difference in this and the surrounding area [left inferior frontal cortex (IFC)/anterior insula] in the production task compared to controls. Conjunction analysis showed that the functional activity differences between PWS and controls in the left IFC/anterior insula coincided across the perception and production tasks. Furthermore, Granger Causality Analysis on the resting-state fMRI data of the participants showed that the causal connection from the left IFC/anterior insula to an area in the left primary auditory cortex (Heschl’s gyrus) differed significantly between PWS and controls. The strength of this connection correlated significantly with performance in the perception task. These results suggest that speech perception difficulties in PWS are associated with anomalous functional activity in the speech motor area, and the altered functional connectivity from this area to the auditory area plays a role in the speech perception difficulties of PWS. PMID:27242487

  11. On the Relationship between Memory and Perception: Sequential Dependencies in Recognition Memory Testing

    ERIC Educational Resources Information Center

    Malmberg, Kenneth J.; Annis, Jeffrey

    2012-01-01

    Many models of recognition are derived from models originally applied to perception tasks, which assume that decisions from trial to trial are independent. While the independence assumption is violated for many perception tasks, we present the results of several experiments intended to relate memory and perception by exploring sequential…

  12. Singularity-robustness and task-prioritization in configuration control of redundant robots

    NASA Technical Reports Server (NTRS)

    Seraji, H.; Colbaugh, R.

    1990-01-01

    The authors present a singularity-robust task-prioritized reformulation of the configuration control for redundant robot manipulators. This reformation suppresses large joint velocities to induce minimal errors in the task performance by modifying the task trajectories. Furthermore, the same framework provides a means for assignment of priorities between the basic task of end-effector motion and the user-defined additional task for utilizing redundancy. This allows automatic relaxation of the additional task constraints in favor of the desired end-effector motion when both cannot be achieved exactly.

  13. Categorization of compensatory motions in transradial myoelectric prosthesis users.

    PubMed

    Hussaini, Ali; Zinck, Arthur; Kyberd, Peter

    2017-06-01

    Prosthesis users perform various compensatory motions to accommodate the loss of the hand and wrist as well as the reduced functionality of a prosthetic hand. The aim of this study was to investigate different compensation strategies performed by prosthesis users, using a comparative analysis. A total of 20 able-bodied subjects and 4 prosthesis users performed a set of bimanual activities. Movements of the trunk and head were recorded using a motion capture system and a digital video recorder. Clinical motion angles were calculated to assess the compensatory motions made by the prosthesis users. The video recording also assisted in visually identifying the compensations. Compensatory motions by the prosthesis users were evident in the tasks performed (slicing and stirring activities) as compared to the benchmark of able-bodied subjects. Compensations took the form of a measured increase in range of motion, an observed adoption of a new posture during task execution, and prepositioning of items in the workspace prior to initiating a given task. Compensatory motions were performed by prosthesis users during the selected tasks and can be categorized into three different types of compensations. Clinical relevance: Proper identification and classification of compensatory motions performed by prosthesis users into three distinct forms allows clinicians and researchers to accurately identify and quantify movement. It will assist in evaluating new prosthetic interventions by providing distinct terminology that is easily understood and can be shared between research institutions.

  14. Exposure to a Rotating Virtual Environment During Treadmill Locomotion Causes Adaptation in Heading Direction

    NASA Technical Reports Server (NTRS)

    Ruttley, T; Marshburn, A.; Bloomberg, J. J.; Mulavara, A. P.; Richards, J. T.; Nomura, Y.

    2005-01-01

    The goal of the present study was to investigate the adaptive effects of variation in the direction of optic flow, experienced during linear treadmill walking, on modifying locomotor trajectory. Subjects (n = 30) walked on a motorized linear treadmill at 4.0 kilometers per hour for 24 minutes while viewing the interior of a 3D virtual scene projected onto a screen 1.5 m in front of them. The virtual scene depicted constant self-motion equivalent to either 1) walking around the perimeter of a room to one's left (Rotating Room group) or 2) walking down the center of a hallway (Infinite Hallway group). The scene was static for the first 4 minutes, and then constant-rate self-motion was simulated for the remaining 20 minutes. Before and after the treadmill locomotion adaptation period, subjects performed five stepping trials in which they marched in place to the beat of a metronome at 90 steps/min while blindfolded in a quiet room. The subject's final heading direction (deg), final X (fore-aft, cm) and final Y (medio-lateral, cm) positions were measured for each trial. During the treadmill locomotion adaptation period, the subject's 3D torso position was measured. We found that subjects in the Rotating Room group, as compared to the Infinite Hallway group: 1) showed significantly greater deviation during post-exposure testing in the heading direction and Y position, opposite to the direction of optic flow experienced during treadmill walking, and 2) showed a significant, monotonically increasing torso yaw angular rotation bias in the direction of optic flow during the treadmill adaptation exposure period. Subjects in both groups showed greater forward translation (in the +X direction) during the post-treadmill stepping task that differed significantly from their pre-exposure performance. Subjects in both groups reported no perceptual deviation in position during the stepping tasks. We infer that viewing simulated rotary self-motion during treadmill locomotion causes adaptive modification of sensory-motor integration in the control of position and trajectory during locomotion, which functionally reflects adaptive changes in the integration of visual, vestibular, and proprioceptive cues. Such an adaptation in the control of position and heading direction during locomotion, due to the congruence of sensory information, demonstrates the potential for adaptive transfer between sensorimotor systems and suggests a common neural site for the processing of self-motion perception and concurrent adaptation in motor output. This would account for the subjects' lack of perceived deviation in position and trajectory during the post-treadmill stepping test performed while blindfolded.

  16. Podokinetic Stimulation Causes Shifts in Perception of Straight Ahead

    PubMed Central

    Scott, John T.; Lohnes, Corey A.; Horak, Fay B.; Earhart, Gammon M.

    2011-01-01

    Podokinetic after-rotation (PKAR) is a phenomenon in which subjects inadvertently rotate when instructed to step in place after a period of walking on a rotating treadmill. PKAR has been shown to transfer between different forms of locomotion, but has not been tested in a non-locomotor task. We conducted two experiments to assess effects of PKAR on perception of subjective straight ahead and on quiet standing posture. Twenty-one healthy young right-handed subjects pointed to what they perceived as their subjective straight ahead with a laser pointer while they were recorded by a motion capture system, both before and after a training period on the rotating treadmill. Subjects performed the pointing task while standing, while sitting on a chair without a back, and while sitting on a chair with a back. After the training period, subjects demonstrated a significant shift in subjective straight ahead, pointing an average of 29.1 ± 10.6 degrees off center. The effect was direction-specific, depending on whether subjects had trained in the clockwise or counter-clockwise direction. Postures that limited subjects' ability to rotate the body in space resulted in reduction, but not elimination, of the effect. The effect was present in quiet standing and even in sitting postures where locomotion was not possible. The robust transfer of PKAR to non-locomotor tasks, and across locomotor forms as demonstrated previously, is in contrast to split-belt adaptations that show limited transfer. We propose that, unlike split-belt adaptations, podokinetic adaptations are mediated at supraspinal spatial-orientation areas that influence spinal-level circuits for locomotion. PMID:21076818

  17. Study of modeling and evaluation of remote manipulation tasks with force feedback

    NASA Technical Reports Server (NTRS)

    Hill, J. W.

    1979-01-01

    The use of time-and-motion study methods to evaluate force feedback in remote manipulation tasks is described. Several systems of time measurement derived for industrial workers were studied and adapted for manipulator use. A task board incorporating a set of basic motions was designed and built. Results obtained from two subjects, each tested in three manipulation situations, are reported: a force-reflective manipulator, a unilateral manipulator, and the unaided human hand. The results indicate that (1) time-and-motion study techniques are applicable to manipulation, and (2) force feedback facilitates some motions (notably fitting) but not others (such as positioning).

  18. Moving from spatially segregated to transparent motion: a modelling approach

    PubMed Central

    Durant, Szonya; Donoso-Barrera, Alejandra; Tan, Sovira; Johnston, Alan

    2005-01-01

    Motion transparency, in which patterns of moving elements group together to give the impression of lacy overlapping surfaces, provides an important challenge to models of motion perception. It has been suggested that we perceive transparent motion when the shape of the velocity histogram of the stimulus is bimodal. To investigate this further, random-dot kinematogram motion sequences were created to simulate segregated (perceptually spatially separated) and transparent (perceptually overlapping) motion. The motion sequences were analysed using the multi-channel gradient model (McGM) to obtain the speed and direction at every pixel of each frame of the motion sequences. The velocity histograms obtained were found to be quantitatively similar and all were bimodal. However, the spatial and temporal properties of the velocity field differed between segregated and transparent stimuli. Transparent stimuli produced patches of rightward and leftward motion that varied in location over time. This demonstrates that we can successfully differentiate between these two types of motion on the basis of the time varying local velocity field. However, the percept of motion transparency cannot be based simply on the presence of a bimodal velocity histogram. PMID:17148338
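
    The point made in this abstract, that a bimodal velocity histogram alone cannot distinguish segregated from transparent displays, is easy to demonstrate numerically. The toy sketch below assigns leftward and rightward velocities either to two spatial halves (segregated) or to interleaved dots (transparent) and shows that the pooled velocity histograms are essentially identical; it is a conceptual illustration, not the multi-channel gradient model (McGM) used in the paper.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 2000
        x = rng.random(n)                               # horizontal dot positions

        # Segregated: left half of the display moves left, right half moves right
        v_segregated = np.where(x < 0.5, -1.0, 1.0)

        # Transparent: every dot is randomly assigned to one of two surfaces
        v_transparent = rng.choice([-1.0, 1.0], size=n)

        # Add a little velocity noise and pool into histograms
        noise = 0.1 * rng.standard_normal(n)
        h_seg, edges = np.histogram(v_segregated + noise, bins=40, range=(-2, 2))
        h_trn, _     = np.histogram(v_transparent + noise, bins=40, range=(-2, 2))

        # Both histograms are bimodal and nearly indistinguishable, even though
        # only the transparent stimulus mixes the two motions locally.
        print(np.corrcoef(h_seg, h_trn)[0, 1])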

  19. The unity assumption facilitates cross-modal binding of musical, non-speech stimuli: The role of spectral and amplitude envelope cues.

    PubMed

    Chuen, Lorraine; Schutz, Michael

    2016-07-01

    An observer's inference that multimodal signals originate from a common underlying source facilitates cross-modal binding. This 'unity assumption' causes asynchronous auditory and visual speech streams to seem simultaneous (Vatakis & Spence, Perception & Psychophysics, 69(5), 744-756, 2007). Subsequent tests of non-speech stimuli such as musical and impact events found no evidence for the unity assumption, suggesting the effect is speech-specific (Vatakis & Spence, Acta Psychologica, 127(1), 12-23, 2008). However, the role of amplitude envelope (the changes in energy of a sound over time) was not previously appreciated within this paradigm. Here, we explore whether previous findings suggesting speech-specificity of the unity assumption were confounded by similarities in the amplitude envelopes of the contrasted auditory stimuli. Experiment 1 used natural events with clearly differentiated envelopes: single notes played on either a cello (bowing motion) or marimba (striking motion). Participants performed an un-speeded temporal order judgments task; viewing audio-visually matched (e.g., marimba auditory with marimba video) and mismatched (e.g., cello auditory with marimba video) versions of stimuli at various stimulus onset asynchronies, and were required to indicate which modality was presented first. As predicted, participants were less sensitive to temporal order in matched conditions, demonstrating that the unity assumption can facilitate the perception of synchrony outside of speech stimuli. Results from Experiments 2 and 3 revealed that when spectral information was removed from the original auditory stimuli, amplitude envelope alone could not facilitate the influence of audiovisual unity. We propose that both amplitude envelope and spectral acoustic cues affect the percept of audiovisual unity, working in concert to help an observer determine when to integrate across modalities.

  20. The predictive value of general movement tasks in assessing occupational task performance.

    PubMed

    Frost, David M; Beach, Tyson A C; McGill, Stuart M; Callaghan, Jack P

    2015-01-01

    Within the context of evaluating individuals' movement behavior it is generally assumed that the tasks chosen will predict their competency to perform activities relevant to their occupation. This study sought to examine whether a battery of general tasks could be used to predict the movement patterns employed by firefighters to perform select job-specific skills. Fifty-two firefighters performed a battery of general and occupation-specific tasks that simulated the demands of firefighting. Participants' peak lumbar spine and frontal plane knee motion were compared across tasks. During 85% of all comparisons, the magnitude of spine and knee motion was greater during the general movement tasks than observed during the firefighting skills. Certain features of a worker's movement behavior may be exhibited across a range of tasks. Therefore, provided that a movement screen's tasks expose the motions of relevance for the population being tested, general evaluations could offer valuable insight into workers' movement competency or facilitate an opportunity to establish an evidence-informed intervention.

  1. Increase in MST activity correlates with visual motion learning: A functional MRI study of perceptual learning

    PubMed Central

    Larcombe, Stephanie J.; Kennard, Chris

    2017-01-01

    Abstract Repeated practice of a specific task can improve visual performance, but the neural mechanisms underlying this improvement in performance are not yet well understood. Here we trained healthy participants on a visual motion task daily for 5 days in one visual hemifield. Before and after training, we used functional magnetic resonance imaging (fMRI) to measure the change in neural activity. We also imaged a control group of participants on two occasions who did not receive any task training. While in the MRI scanner, all participants completed the motion task in the trained and untrained visual hemifields separately. Following training, participants improved their ability to discriminate motion direction in the trained hemifield and, to a lesser extent, in the untrained hemifield. The amount of task learning correlated positively with the change in activity in the medial superior temporal (MST) area. MST is the anterior portion of the human motion complex (hMT+). MST changes were localized to the hemisphere contralateral to the region of the visual field, where perceptual training was delivered. Visual areas V2 and V3a showed an increase in activity between the first and second scan in the training group, but this was not correlated with performance. The contralateral anterior hippocampus and bilateral dorsolateral prefrontal cortex (DLPFC) and frontal pole showed changes in neural activity that also correlated with the amount of task learning. These findings emphasize the importance of MST in perceptual learning of a visual motion task. Hum Brain Mapp 39:145–156, 2018. © 2017 Wiley Periodicals, Inc. PMID:28963815

  2. Multidigit movement synergies of the human hand in an unconstrained haptic exploration task.

    PubMed

    Thakur, Pramodsingh H; Bastian, Amy J; Hsiao, Steven S

    2008-02-06

    Although the human hand has a complex structure with many individual degrees of freedom, joint movements are correlated. Studies involving simple tasks (grasping) or skilled tasks (typing or finger spelling) have shown that a small number of combined joint motions (i.e., synergies) can account for most of the variance in observed hand postures. However, those paradigms evoked a limited set of hand postures and as such the reported correlation patterns of joint motions may be task-specific. Here, we used an unconstrained haptic exploration task to evoke a set of hand postures that is representative of most naturalistic postures during object manipulation. Principal component analysis on this set revealed that the first seven principal components capture >90% of the observed variance in hand postures. Further, we identified nine eigenvectors (or synergies) that are remarkably similar across multiple subjects and across manipulations of different sets of objects within a subject. We then determined that these synergies are used broadly by showing that they account for the changes in hand postures during other tasks. These include hand motions such as reach and grasp of objects that vary in width, curvature and angle, and skilled motions such as precision pinch. Our results demonstrate that the synergies reported here generalize across tasks, and suggest that they represent basic building blocks underlying natural human hand motions.
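
    The synergy analysis described in this abstract reduces to a principal component analysis of the joint-angle data. A minimal sketch of that computation is shown below (centered data, SVD, cumulative explained variance, and the count of components needed to reach 90%); the array shapes and the 90% criterion follow the abstract, everything else (surrogate data, names) is illustrative.

        import numpy as np

        def posture_synergies(angles, variance_target=0.90):
            """PCA of hand postures.

            angles: (n_samples, n_joints) matrix of joint angles in degrees.
            Returns the principal directions ('synergies'), the explained-variance
            ratios, and how many components reach the variance target."""
            centered = angles - angles.mean(axis=0)
            _, s, vt = np.linalg.svd(centered, full_matrices=False)
            var_ratio = s**2 / np.sum(s**2)
            n_needed = int(np.searchsorted(np.cumsum(var_ratio), variance_target) + 1)
            return vt, var_ratio, n_needed

        # Surrogate data: 5000 postures of a 20-joint hand driven by 7 latent factors
        rng = np.random.default_rng(3)
        latent = rng.standard_normal((5000, 7))
        mixing = rng.standard_normal((7, 20))
        postures = latent @ mixing + 0.1 * rng.standard_normal((5000, 20))

        synergies, ratios, k = posture_synergies(postures)
        print(f"{k} components capture 90% of posture variance")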

  3. Cross-Cultural Study of Special Education Teachers' Perception of Iconicity of Graphic Symbols for Emotions

    ERIC Educational Resources Information Center

    Chae, Soo Jung

    2011-01-01

    This study investigated whether Korean and American teachers differ in their perception of graphic symbols representing six emotions. For an accurate comparison, two transparency tasks (Task 1-1 and Task 2) and one translucency task (Task 3) were used to investigate differences between Korean and American special…

  4. Recovery of biological motion perception and network plasticity after cerebellar tumor removal.

    PubMed

    Sokolov, Arseny A; Erb, Michael; Grodd, Wolfgang; Tatagiba, Marcos S; Frackowiak, Richard S J; Pavlova, Marina A

    2014-10-01

    Visual perception of body motion is vital for everyday activities such as social interaction, motor learning or car driving. Tumors to the left lateral cerebellum impair visual perception of body motion. However, compensatory potential after cerebellar damage and underlying neural mechanisms remain unknown. In the present study, visual sensitivity to point-light body motion was psychophysically assessed in patient SL with dysplastic gangliocytoma (Lhermitte-Duclos disease) to the left cerebellum before and after neurosurgery, and in a group of healthy matched controls. Brain activity during processing of body motion was assessed by functional magnetic resonance imaging (MRI). Alterations in underlying cerebro-cerebellar circuitry were studied by psychophysiological interaction (PPI) analysis. Visual sensitivity to body motion in patient SL before neurosurgery was substantially lower than in controls, with significant improvement after neurosurgery. Functional MRI in patient SL revealed a similar pattern of cerebellar activation during biological motion processing as in healthy participants, but located more medially, in the left cerebellar lobules III and IX. As in normalcy, PPI analysis showed cerebellar communication with a region in the superior temporal sulcus, but located more anteriorly. The findings demonstrate a potential for recovery of visual body motion processing after cerebellar damage, likely mediated by topographic shifts within the corresponding cerebro-cerebellar circuitry induced by cerebellar reorganization. The outcome is of importance for further understanding of cerebellar plasticity and neural circuits underpinning visual social cognition.

  5. The Relative Importance of Spatial Versus Temporal Structure in the Perception of Biological Motion: An Event-Related Potential Study

    ERIC Educational Resources Information Center

    Hirai, Masahiro; Hiraki, Kazuo

    2006-01-01

    We investigated how the spatiotemporal structure of animations of biological motion (BM) affects brain activity. We measured event-related potentials (ERPs) during the perception of BM under four conditions: normal spatial and temporal structure; scrambled spatial and normal temporal structure; normal spatial and scrambled temporal structure; and…

  6. The effects of motion and g-seat cues on pilot simulator performance of three piloting tasks

    NASA Technical Reports Server (NTRS)

    Showalter, T. W.; Parris, B. L.

    1980-01-01

    Data are presented that show the effects of motion system cues, g-seat cues, and pilot experience on pilot performance during takeoffs with engine failures, during in-flight precision turns, and during landings with wind shear. Eight groups of USAF pilots flew a simulated KC-135 using four different cueing systems. The basic cueing system was a fixed-base type (no-motion cueing) with visual cueing. The other three systems were produced by the presence of either a motion system or a g-seat, or both. Extensive statistical analysis of the data was performed and representative performance means were examined. These data show that the addition of motion system cueing results in significant improvement in pilot performance for all three tasks; however, the use of g-seat cueing, either alone or in conjunction with the motion system, provides little if any performance improvement for these tasks and for this aircraft type.

  7. Vertical Axis Rotational Motion Cues in Hovering Flight Simulation

    NASA Technical Reports Server (NTRS)

    Schroeder, Jeffrey A.; Johnson, Walter W.; Showman, Robert D. (Technical Monitor)

    1994-01-01

    A previous study that examined how yaw motion affected a pilot's ability to perform realistic hovering flight tasks indicated that any amount of pure yaw motion had little-to-no effect on pilot performance or opinion. In that experiment, pilots were located at the vehicle's center of rotation; thus lateral or longitudinal accelerations were absent. The purpose of the new study described here was to investigate further these unanticipated results for additional flight tasks, but with the introduction of linear accelerations associated with yaw rotations when the pilot is not at the center of rotation. The question of whether a yaw motion degree-of-freedom is necessary or not is important to government regulators who specify what simulator motions are necessary according to prescribed levels of simulator sophistication. Current regulations specify two levels of motion sophistication for flight simulators: full 6-degree-of-freedom and 3-degree-of-freedom. For the less sophisticated simulator, the assumed three degrees of freedom are pitch, roll, and heave. If other degrees of freedom are selected, which are different from these three, they must be qualified on a case-by-case basis. Picking the assumed three axes is reasonable and based upon experience, but little empirical data are available to support the selection of critical axes. Thus, the research described here is aimed at answering this question. The yaw and lateral degrees of freedom were selected to be examined first, and maneuvers were defined to uncouple these motions from changes in the gravity vector with respect to the pilot. This approach simplifies the problem to be examined. For this experiment, the NASA Ames Vertical Motion Simulator was used in a comprehensive investigation. The math model was an AH-64 Apache in hover, which was identified from flight test data and had previously been validated by several AH-64 pilots. The pilot's head was located 4.5 ft in front of the vehicle center of gravity, which is representative of the AH-64 pilot location. Six test pilots flew three tasks that were specifically designed to represent a broad class of situations in which both lateral and yaw motion cues may be useful. For the first task, the pilot controlled only the yaw axis and was required to rapidly acquire a North heading from 15 deg yaw offsets to either the East or West. This task allowed for full, or 1:1, motion to be used in all axes (yaw, lateral, and longitudinal). The second task was a 10 sec., 180 deg. pedal turn over a runway, but with the pilot only controlling the yaw degree-of-freedom. The position of the vehicle's center-of-mass remained fixed. This maneuver was taken from a current U.S. Army rotary wing design standard and is representative of a maneuver performed for acceptance of military helicopters; however, it does not allow for full 1:1 motion, since the simulator cab cannot rotate 180 deg. The third task required the pilot to perform a rapid 9 ft climb at a constant heading. This task was challenging, because rapid collective lever movement in the unaugmented AH-64 results in a substantial yawing moment (due to engine torque) that must be countered by the pilot. This task also had full motion in all axes, but, in this case, the pilot had two axes to control simultaneously, rather than one as in the previous tasks.
    Four motion configurations were examined for each task: full motion (except for the 180 deg turn, for which the motion system was configured to provide as much motion as possible), full linear with no yaw motion, full yaw with no linear motion, and no motion. Each configuration was flown four times in a randomized test matrix, and the pilots were not informed of the configuration given. Vehicle state data were recorded for objective performance comparisons, and pilots provided subjective comments and ratings. As part of the pilots' evaluation, they were asked to rate the compensation required, the overall fidelity of the motion as compared to real flight, and whether motion was detected or not in each of the six degrees of freedom. In addition, the pilots provided a numerical level-of-confidence rating, between 1 and 7, corresponding to how sure they were whether or not motion was present in each degree-of-freedom. The latter rating allows classical signal detection analysis to be performed.

  8. Can walking motions improve visually induced rotational self-motion illusions in virtual reality?

    PubMed

    Riecke, Bernhard E; Freiberg, Jacob B; Grechkin, Timofey Y

    2015-02-04

    Illusions of self-motion (vection) can provide compelling sensations of moving through virtual environments without the need for complex motion simulators or large tracked physical walking spaces. Here we explore the interaction between biomechanical cues (stepping along a rotating circular treadmill) and visual cues (viewing simulated self-rotation) for providing stationary users a compelling sensation of rotational self-motion (circular vection). When tested individually, biomechanical and visual cues were similarly effective in eliciting self-motion illusions. However, in combination they yielded significantly more intense self-motion illusions. These findings provide the first compelling evidence that walking motions can be used to significantly enhance visually induced rotational self-motion perception in virtual environments (and vice versa) without having to provide for physical self-motion or motion platforms. This is noteworthy, as linear treadmills have been found to actually impair visually induced translational self-motion perception (Ash, Palmisano, Apthorp, & Allison, 2013). Given the predominant focus on linear walking interfaces for virtual-reality locomotion, our findings suggest that investigating circular and curvilinear walking interfaces offers a promising direction for future research and development and can help to enhance self-motion illusions, presence and immersion in virtual-reality systems. © 2015 ARVO.

  9. Integration of local motion is normal in amblyopia

    NASA Astrophysics Data System (ADS)

    Hess, Robert F.; Mansouri, Behzad; Dakin, Steven C.; Allen, Harriet A.

    2006-05-01

    We investigate the global integration of local motion direction signals in amblyopia, in a task where performance is equated between normal and amblyopic eyes at the single element level. We use an equivalent noise model to derive the parameters of internal noise and number of samples, both of which we show are normal in amblyopia for this task. This result is in apparent conflict with a previous study in amblyopes showing that global motion processing is defective in global coherence tasks [Vision Res. 43, 729 (2003)]. A similar discrepancy between the normalcy of signal integration [Vision Res. 44, 2955 (2004)] and anomalous global coherence form processing has also been reported [Vision Res. 45, 449 (2005)]. We suggest that these discrepancies for form and motion processing in amblyopia point to a selective problem in separating signal from noise in the typical global coherence task.
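
    The equivalent noise model referred to in this abstract expresses the observed direction-integration threshold as a function of the external (stimulus) direction variability, with two free parameters: internal noise and the effective number of samples. A common formulation is sigma_obs^2 = (sigma_int^2 + sigma_ext^2) / N. The sketch below fits that two-parameter curve to thresholds measured at several external noise levels; the data values are made up for illustration, and the exact parameterization in the paper may differ.

        import numpy as np
        from scipy.optimize import curve_fit

        def equivalent_noise(sigma_ext, sigma_int, n_samples):
            """Predicted direction threshold under the equivalent noise model."""
            return np.sqrt((sigma_int**2 + sigma_ext**2) / n_samples)

        # External direction noise (deg) and illustrative measured thresholds (deg)
        sigma_ext = np.array([0.0, 2.0, 4.0, 8.0, 16.0, 32.0])
        thresholds = np.array([1.1, 1.3, 1.8, 3.0, 5.6, 11.2])

        (sigma_int, n_samples), _ = curve_fit(
            equivalent_noise, sigma_ext, thresholds, p0=[2.0, 5.0])
        print(f"internal noise ~ {sigma_int:.2f} deg, "
              f"effective samples ~ {n_samples:.1f}")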

  10. Action and emotion recognition from point light displays: an investigation of gender differences.

    PubMed

    Alaerts, Kaat; Nackaerts, Evelien; Meyns, Pieter; Swinnen, Stephan P; Wenderoth, Nicole

    2011-01-01

    Folk psychology advocates the existence of gender differences in socio-cognitive functions such as 'reading' the mental states of others or discerning subtle differences in body-language. A female advantage has been demonstrated for emotion recognition from facial expressions, but virtually nothing is known about gender differences in recognizing bodily stimuli or body language. The aim of the present study was to investigate potential gender differences in a series of tasks, involving the recognition of distinct features from point light displays (PLDs) depicting bodily movements of a male and female actor. Although recognition scores were considerably high at the overall group level, female participants were more accurate than males in recognizing the depicted actions from PLDs. Response times were significantly higher for males compared to females on PLD recognition tasks involving (i) the general recognition of 'biological motion' versus 'non-biological' (or 'scrambled' motion); or (ii) the recognition of the 'emotional state' of the PLD-figures. No gender differences were revealed for a control test (involving the identification of a color change in one of the dots) and for recognizing the gender of the PLD-figure. In addition, previous findings of a female advantage on a facial emotion recognition test (the 'Reading the Mind in the Eyes Test' (Baron-Cohen, 2001)) were replicated in this study. Interestingly, a strong correlation was revealed between emotion recognition from bodily PLDs versus facial cues. This relationship indicates that inter-individual or gender-dependent differences in recognizing emotions are relatively generalized across facial and bodily emotion perception. Moreover, the tight correlation between a subject's ability to discern subtle emotional cues from PLDs and the subject's ability to basically discriminate biological from non-biological motion provides indications that differences in emotion recognition may - at least to some degree - be related to more basic differences in processing biological motion per se.

  11. Motion interactive video games in home training for children with cerebral palsy: parents' perceptions.

    PubMed

    Sandlund, Marlene; Dock, Katarina; Häger, Charlotte K; Waterworth, Eva Lindh

    2012-01-01

    To explore parents' perceptions of using low-cost motion interactive video games as home training for their children with mild/moderate cerebral palsy. Semi-structured interviews were carried out with parents from 15 families after participation in an intervention where motion interactive games were used daily in home training for their child. A qualitative content analysis approach was applied. The parents' perception of the training was very positive. They expressed the view that motion interactive video games may promote positive experiences of physical training in rehabilitation, where the social aspects of gaming were especially valued. Further, the parents experienced less need to take on coaching while gaming stimulated independent training. However, there was a desire for more controlled and individualized games to better challenge the specific rehabilitative need of each child. Low-cost motion interactive games may provide increased motivation and social interaction to home training and promote independent training with reduced coaching efforts for the parents. In future designs of interactive games for rehabilitation purposes, it is important to preserve the motivational and social features of games while optimizing the individualized physical exercise.

  12. Self Motion Perception and Motion Sickness

    NASA Technical Reports Server (NTRS)

    Fox, Robert A. (Principal Investigator)

    1991-01-01

    The studies conducted in this research project examined several aspects of motion sickness in animal models. A principal objective of these studies was to investigate the neuroanatomy that is important in motion sickness, with the aims of examining the utility of putative models and defining the neural mechanisms that are important in motion sickness.

  13. Motion makes sense: an adaptive motor-sensory strategy underlies the perception of object location in rats.

    PubMed

    Saraf-Sinik, Inbar; Assa, Eldad; Ahissar, Ehud

    2015-06-10

    Tactile perception is obtained by coordinated motor-sensory processes. We studied the processes underlying the perception of object location in freely moving rats. We trained rats to identify the relative location of two vertical poles placed in front of them and measured at high resolution the motor and sensory variables (19 and 2 variables, respectively) associated with this whiskers-based perceptual process. We found that the rats developed stereotypic head and whisker movements to solve this task, in a manner that can be described by several distinct behavioral phases. During two of these phases, the rats' whiskers coded object position by first temporal and then angular coding schemes. We then introduced wind (in two opposite directions) and remeasured their perceptual performance and motor-sensory variables. Our rats continued to perceive object location in a consistent manner under wind perturbations while maintaining all behavioral phases and relatively constant sensory coding. Constant sensory coding was achieved by keeping one group of motor variables (the "controlled variables") constant, despite the perturbing wind, at the cost of strongly modulating another group of motor variables (the "modulated variables"). The controlled variables included coding-relevant variables, such as head azimuth and whisker velocity. These results indicate that consistent perception of location in the rat is obtained actively, via a selective control of perception-relevant motor variables. Copyright © 2015 the authors 0270-6474/15/358777-13$15.00/0.

  14. On the Visual Input Driving Human Smooth-Pursuit Eye Movements

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Beutter, Brent R.; Lorenceau, Jean

    1996-01-01

    Current computational models of smooth-pursuit eye movements assume that the primary visual input is local retinal-image motion (often referred to as retinal slip). However, we show that humans can pursue object motion with considerable accuracy, even in the presence of conflicting local image motion. This finding indicates that the visual cortical area(s) controlling pursuit must be able to perform a spatio-temporal integration of local image motion into a signal related to object motion. We also provide evidence that the object-motion signal that drives pursuit is related to the signal that supports perception. We conclude that current models of pursuit should be modified to include a visual input that encodes perceived object motion and not merely retinal image motion. Finally, our findings suggest that the measurement of eye movements can be used to monitor visual perception, with particular value in applied settings as this non-intrusive approach would not require interrupting ongoing work or training.

  15. Perception of difficulty and glucose control: Effects on academic performance in youth with type I diabetes.

    PubMed

    Potts, Tiffany M; Nguyen, Jacqueline L; Ghai, Kanika; Li, Kathy; Perlmuter, Lawrence

    2015-04-15

    To investigate whether perceptions of task difficulty on neuropsychological tests predicted academic achievement after controlling for glucose levels and depression. Participants were type 1 diabetic adolescents, with a mean age = 12.5 years (23 females and 16 males), seen at a northwest suburban Chicago hospital. The sample population was free of co-morbid clinical health conditions. Subjects completed a three-part neuropsychological battery including the Digit Symbol Task, Trail Making Test, and Controlled Oral Word Association test. Following each task, individuals rated task difficulty and then completed a depression inventory. Performance on these three tests is reflective of neuropsychological status in relation to glucose control. Blood glucose levels were measured immediately prior to and after completing the neuropsychological battery using a glucose meter. HbA1c levels were obtained from medical records. Academic performance was based on self-reported grades in Math, Science, and English. Data was analyzed using multiple regression models to evaluate the associations between academic performance, perception of task difficulty, and glucose control. Perceptions of difficulty on a neuropsychological battery significantly predicted academic performance after accounting for glucose control and depression. Perceptions of difficulty on the neuropsychological tests were inversely correlated with academic performance (r = -0.48), while acute (blood glucose) and long-term glucose levels increased along with perceptions of task difficulty (r = 0.47). Additionally, higher depression scores were associated with poorer academic performance (r = -0.43). With the first regression analysis, perception of difficulty on the neuropsychological tasks contributed to 8% of the variance in academic performance after controlling for peripheral blood glucose and depression. In the second regression analysis, perception of difficulty accounted for 11% of the variance after accounting for academic performance and depression. The final regression analysis indicated that perception of difficulty increased with peripheral blood glucose, contributing to 22% of the variance. Most importantly, after controlling for perceptions of task difficulty, academic performance no longer predicted glucose levels. Finally, subjects who found the cognitive battery difficult were likely to have poor academic grades. Perceptions of difficulty on neurological tests exhibited a significant association with academic achievement, indicating that deficits in this skill may lead to academic disadvantage in diabetic patients.
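
    The incremental-variance logic described in this abstract (how much perception of task difficulty adds after glucose and depression are already in the model) corresponds to a hierarchical multiple regression in which R-squared is compared between nested models. A minimal sketch with statsmodels is given below; the column names and data are placeholders, not the study's dataset.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        n = 39  # sample size reported in the abstract (23 females, 16 males)
        df = pd.DataFrame({
            "glucose": rng.normal(150, 40, n),        # mg/dL, illustrative
            "depression": rng.normal(10, 4, n),
            "difficulty": rng.normal(3, 1, n),        # rated task difficulty
        })
        df["grades"] = (3.5 - 0.004 * df["glucose"] - 0.03 * df["depression"]
                        - 0.2 * df["difficulty"] + rng.normal(0, 0.3, n))

        # Step 1: control variables only
        base = sm.OLS(df["grades"],
                      sm.add_constant(df[["glucose", "depression"]])).fit()
        # Step 2: add perception of difficulty
        full = sm.OLS(df["grades"],
                      sm.add_constant(df[["glucose", "depression", "difficulty"]])).fit()

        print(f"R2 change due to perceived difficulty: "
              f"{full.rsquared - base.rsquared:.3f}")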

  16. Visual Depth from Motion Parallax and Eye Pursuit

    PubMed Central

    Stroyan, Keith; Nawrot, Mark

    2012-01-01

    A translating observer viewing a rigid environment experiences “motion parallax,” the relative movement upon the observer’s retina of variously positioned objects in the scene. This retinal movement of images provides a cue to the relative depth of objects in the environment, however retinal motion alone cannot mathematically determine relative depth of the objects. Visual perception of depth from lateral observer translation uses both retinal image motion and eye movement. In (Nawrot & Stroyan, 2009, Vision Res. 49, p.1969) we showed mathematically that the ratio of the rate of retinal motion over the rate of smooth eye pursuit mathematically determines depth relative to the fixation point in central vision. We also reported on psychophysical experiments indicating that this ratio is the important quantity for perception. Here we analyze the motion/pursuit cue for the more general, and more complicated, case when objects are distributed across the horizontal viewing plane beyond central vision. We show how the mathematical motion/pursuit cue varies with different points across the plane and with time as an observer translates. If the time varying retinal motion and smooth eye pursuit are the only signals used for this visual process, it is important to know what is mathematically possible to derive about depth and structure. Our analysis shows that the motion/pursuit ratio determines an excellent description of depth and structure in these broader stimulus conditions, provides a detailed quantitative hypothesis of these visual processes for the perception of depth and structure from motion parallax, and provides a computational foundation to analyze the dynamic geometry of future experiments. PMID:21695531
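
    Under the lateral-translation geometry discussed in this abstract, the motion/pursuit ratio can be written down directly for the simplest case: if the eye pursues the fixation point at rate dα/dt and a point directly beyond fixation produces relative retinal motion at rate dθ/dt, the ratio (dθ/dt)/(dα/dt) equals d/(f + d), so relative depth follows as d/f = ratio/(1 - ratio). The sketch below checks that relation numerically; it is a reconstruction of the central idea, not the paper's full derivation for points across the whole viewing plane.

        import numpy as np

        def motion_pursuit_depth(v, f, d):
            """Recover relative depth d/f from retinal-motion and pursuit rates.

            v: lateral observer translation speed (m/s)
            f: fixation distance (m); d: depth of the point beyond fixation (m).
            """
            pursuit_rate = v / f                      # eye rotation to hold fixation
            retinal_rate = v / f - v / (f + d)        # relative retinal motion of the point
            ratio = retinal_rate / pursuit_rate       # the motion/pursuit ratio
            return ratio / (1.0 - ratio)              # equals d / f

        v, f = 0.15, 1.0                              # 15 cm/s translation, 1 m fixation
        for d in (0.05, 0.2, 0.5):
            est = motion_pursuit_depth(v, f, d)
            print(f"true d/f = {d/f:.3f}, recovered d/f = {est:.3f}")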

  17. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction

    PubMed Central

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one basis of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research. PMID:29599739
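
    The training objective behind networks like the one described in this abstract is next-frame prediction: the model sees a short clip and is penalized for the pixel error of its predicted next frame. The sketch below shows that objective with a deliberately tiny convolutional predictor in PyTorch; it is a toy stand-in for illustration only, not PredNet's recurrent predictive-coding architecture, and the surrogate "drifting bar" video is an assumption.

        import torch
        import torch.nn as nn

        # Toy predictor: maps the two most recent grayscale frames to the next frame.
        predictor = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )
        optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        def make_clip(batch=8, length=3, size=64):
            """Surrogate video: a bright 4-pixel bar drifting rightward."""
            clip = torch.zeros(batch, length, size, size)
            starts = torch.randint(0, size - length - 4, (batch,))
            for b in range(batch):
                for t in range(length):
                    s = int(starts[b]) + t
                    clip[b, t, :, s:s + 4] = 1.0
            return clip

        for step in range(200):
            clip = make_clip()
            inputs, target = clip[:, :2], clip[:, 2:3]   # two frames in, one frame out
            loss = loss_fn(predictor(inputs), target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()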

  18. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction.

    PubMed

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one basis of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research.

  19. Finger Interdependence: Linking the Kinetic and Kinematic Variables

    PubMed Central

    Kim, Sun Wook; Shim, Jae Kun; Zatsiorsky, Vladimir M.; Latash, Mark L.

    2008-01-01

    We studied the dependence between voluntary motion of a finger and pressing forces produced by the tips of other fingers of the hand. Subjects moved one of the fingers (task finger) of the right hand trying to follow a cyclic, ramp-like flexion-extension template at different frequencies. The other fingers (slave fingers) were restricted from moving; their flexion forces were recorded and analyzed. Index finger motion caused the smallest force production by the slave fingers. Larger forces were produced by the neighbors of the task finger; these forces showed strong modulation over the range of motion of the task finger. The enslaved forces were higher during the flexion phase of the movement cycle as compared to the extension phase. The index of enslaving expressed in N/rad was higher when the task finger moved through the more flexed postures. The dependence of enslaving on both range and direction of task finger motion poses problems for methods of analysis of finger coordination based on an assumption of universal matrices of finger inter-dependence. PMID:18255182

  20. Investigating the Interactions among Genre, Task Complexity, and Proficiency in L2 Writing: A Comprehensive Text Analysis and Study of Learner Perceptions

    ERIC Educational Resources Information Center

    Yoon, Hyung-Jo

    2017-01-01

    In this study, I explored the interactions among genre, task complexity, and L2 proficiency in learners' writing task performance. Specifically, after identifying the lack of valid operationalizations of genre and task dimensions in L2 writing research, I examined how genre functions as a task complexity variable, and how learners' perceptions and…

  1. Redundant arm control in a supervisory and shared control system

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.; Long, Mark K.

    1992-01-01

    The Extended Task Space Control approach to robotic operations based on manipulator behaviors derived from task requirements is described. No differentiation between redundant and non-redundant robots is made at the task level. The manipulation task behaviors are combined into a single set of motion commands. The manipulator kinematics are used subsequently in mapping motion commands into actuator commands. Extended Task Space Control is applied to a Robotics Research K-1207 seven degree-of-freedom manipulator in a supervisory telerobot system as an example.
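
    As context for the kinematic mapping step described above (motion commands mapped into actuator commands), the following is a generic resolved-rate sketch using a damped least-squares pseudoinverse. It illustrates redundancy-tolerant velocity mapping in general, not the Extended Task Space Control formulation itself, and the Jacobian values in the example are made up.

        import numpy as np

        def task_to_joint_velocities(jacobian, task_velocity, damping=0.01):
            """Map a task-space velocity command to joint velocities.

            Damped least-squares (singularity-robust pseudoinverse) keeps the
            mapping well behaved for a redundant manipulator, i.e. one with more
            joints than task dimensions.
            """
            J = np.asarray(jacobian, float)
            JJt = J @ J.T + (damping ** 2) * np.eye(J.shape[0])
            return J.T @ np.linalg.solve(JJt, np.asarray(task_velocity, float))

        # Hypothetical 6-DOF task command for a 7-joint arm (Jacobian values made up).
        J = np.random.default_rng(0).normal(size=(6, 7))
        xdot = np.array([0.05, 0.0, 0.0, 0.0, 0.0, 0.1])  # m/s and rad/s
        qdot = task_to_joint_velocities(J, xdot)
        print(qdot.shape)  # (7,)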

  2. Motion perception: behavior and neural substrate.

    PubMed

    Mather, George

    2011-05-01

    Visual motion perception is vital for survival. Single-unit recordings in primate primary visual cortex (V1) have revealed the existence of specialized motion-sensing neurons; perceptual effects such as the motion after-effect demonstrate their importance for motion perception. Human psychophysical data on motion detection can be explained by a computational model of cortical motion sensors. Both psychophysical and physiological data reveal at least two classes of motion sensor capable of sensing motion in luminance-defined and texture-defined patterns, respectively. Psychophysical experiments also reveal that motion can be seen independently of motion sensor output, based on attentive tracking of visual features. Sensor outputs are inherently ambiguous, due to the problem of univariance in neural responses. In order to compute stimulus direction and speed, the visual system must compare the responses of many different sensors sensitive to different directions and speeds. Physiological data show that this computation occurs in the visual middle temporal (MT) area. Recent psychophysical studies indicate that information about spatial form may also play a role in motion computations. Adaptation studies show that the human visual system is selectively sensitive to large-scale optic flow patterns, and physiological studies indicate that cells in the medial superior temporal (MST) area derive this sensitivity from the combined responses of many MT cells. Extraretinal signals used to control eye movements are an important source of signals to cancel out the retinal motion responses generated by eye movements, though visual information also plays a role. A number of issues remain to be resolved at all levels of the motion-processing hierarchy. WIREs Cogn Sci 2011, 2, 305-314. DOI: 10.1002/wcs.110. For further resources related to this article, please visit the WIREs website. Additional Supporting Information may be found at http://www.lifesci.sussex.ac.uk/home/George_Mather/Motion/index.html. Copyright © 2010 John Wiley & Sons, Ltd.
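
    The review refers to a computational model of cortical motion sensors. As a generic, textbook-style illustration only (a minimal correlation-type, Reichardt-like detector, not the specific model discussed in the review), the core operation compares a temporally delayed signal from one location with the undelayed signal from a neighboring location:

        import numpy as np

        def reichardt_response(signal_a, signal_b, delay=1):
            """Minimal correlation-type (Reichardt-like) motion detector.

            signal_a, signal_b: luminance time series from two neighboring
            locations. Each half-detector correlates the delayed signal from one
            location with the undelayed signal from the other; subtracting the
            two halves gives a signed output whose sign indicates direction.
            """
            a = np.asarray(signal_a, float)
            b = np.asarray(signal_b, float)
            a_del, b_del = np.roll(a, delay), np.roll(b, delay)
            a_del[:delay] = 0.0
            b_del[:delay] = 0.0
            return np.mean(a_del * b) - np.mean(b_del * a)

        # A rightward-drifting sinusoid: location B sees the same waveform slightly
        # later than location A, so the detector output is positive.
        t = np.arange(200)
        a = np.sin(2 * np.pi * 0.05 * t)
        b = np.sin(2 * np.pi * 0.05 * (t - 2))
        print(reichardt_response(a, b, delay=2) > 0)  # True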

  3. EFFECTS OF STROBOSCOPIC VISUAL TRAINING ON VISUAL ATTENTION, MOTION PERCEPTION, AND CATCHING PERFORMANCE.

    PubMed

    Wilkins, Luke; Gray, Rob

    2015-08-01

    It has been shown recently that stroboscopic visual training can improve visual-perceptual abilities, such as central field motion sensitivity and anticipatory timing. Such training should also improve a sports skill that relies on these perceptual abilities, namely ball catching. Thirty athletes (12 women, 18 men; M age=22.5 yr., SD=4.7) were assigned to one of two types of stroboscopic training groups: a variable strobe rate (VSR) group for which the off-time of the glasses was systematically increased (as in previous research) and a constant strobe rate (CSR) group for which the glasses were always set at the shortest off-time. Training involved simple, tennis ball-catching drills (9×20 min.) occurring over a 6-wk. period. Before and after training, the participants completed a one-handed ball-catching task and the Useful Field of View (UFOV) and the Motion in Depth Sensitivity (MIDS) tests. Since the CSR condition used in the present study has been shown to have no effect on catching performance, it was predicted that the VSR group would show significantly greater improvement from pre- to post-training. There were no significant differences between the CSR and VSR on any of the tests. However, changes in catching performance (total balls caught) from pre- to post-training were significantly correlated with changes in scores for the UFOV single-task and MIDS tests. That is, regardless of group, participants whose perceptual-cognitive performance improved in the post-test were significantly more likely to improve their catching performance. This suggests that the perceptual changes observed in previous stroboscopic training studies may be linked to changes in sports skill performance.

  4. Rocking or Rolling – Perception of Ambiguous Motion after Returning from Space

    PubMed Central

    Clément, Gilles; Wood, Scott J.

    2014-01-01

    The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Adaptive changes during spaceflight in how the brain integrates vestibular cues with other sensory information can lead to impaired movement coordination, vertigo, spatial disorientation, and perceptual illusions after return to Earth. The purpose of this study was to compare tilt and translation motion perception in astronauts before and after returning from spaceflight. We hypothesized that these stimuli would be the most ambiguous in the low-frequency range (i.e., at about 0.3 Hz) where the linear acceleration can be interpreted either as a translation or as a tilt relative to gravity. Verbal reports were obtained in eleven astronauts tested using a motion-based tilt-translation device and a variable radius centrifuge before and after flying for two weeks on board the Space Shuttle. Consistent with previous studies, roll tilt perception was overestimated shortly after spaceflight and then recovered within 1–2 days. During dynamic linear acceleration (0.15–0.6 Hz, ±1.7 m/s2), perception of translation was also overestimated immediately after flight. Recovery to baseline was observed after 2 days for lateral translation and 8 days for fore–aft translation. These results suggest that there was a shift in the frequency dynamic of tilt-translation motion perception after adaptation to weightlessness. These results have implications for manual control during landing of a space vehicle after exposure to microgravity, as will be the case for human asteroid and Mars missions. PMID:25354042

  5. Rocking or rolling--perception of ambiguous motion after returning from space.

    PubMed

    Clément, Gilles; Wood, Scott J

    2014-01-01

    The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Adaptive changes during spaceflight in how the brain integrates vestibular cues with other sensory information can lead to impaired movement coordination, vertigo, spatial disorientation, and perceptual illusions after return to Earth. The purpose of this study was to compare tilt and translation motion perception in astronauts before and after returning from spaceflight. We hypothesized that these stimuli would be the most ambiguous in the low-frequency range (i.e., at about 0.3 Hz) where the linear acceleration can be interpreted either as a translation or as a tilt relative to gravity. Verbal reports were obtained in eleven astronauts tested using a motion-based tilt-translation device and a variable radius centrifuge before and after flying for two weeks on board the Space Shuttle. Consistent with previous studies, roll tilt perception was overestimated shortly after spaceflight and then recovered within 1-2 days. During dynamic linear acceleration (0.15-0.6 Hz, ±1.7 m/s2), perception of translation was also overestimated immediately after flight. Recovery to baseline was observed after 2 days for lateral translation and 8 days for fore-aft translation. These results suggest that there was a shift in the frequency dynamic of tilt-translation motion perception after adaptation to weightlessness. These results have implications for manual control during landing of a space vehicle after exposure to microgravity, as will be the case for human asteroid and Mars missions.

  6. Video segmentation and camera motion characterization using compressed data

    NASA Astrophysics Data System (ADS)

    Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain

    1997-10-01

    We address the problem of automatically extracting visual indexes from videos, in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments, and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detect scene cuts by analyzing motion vectors. For the second task, a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques process and analyze MPEG-1 motion vectors directly, without the need for video decompression. Experimental results are reported for a database of news video clips.
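
    For the camera-motion characterization step, the abstract mentions a least-squares fit of pan/tilt/zoom parameters to MPEG-1 motion vectors. The sketch below assumes a deliberately simplified linear model (each block's motion vector equals pan + zoom·x, tilt + zoom·y about the image centre); it illustrates the least-squares idea only, not the paper's exact parameterization.

        import numpy as np

        def fit_pan_tilt_zoom(positions, motion_vectors):
            """Least-squares fit of pan, tilt and zoom to block motion vectors.

            positions: (N, 2) block centres relative to the image centre.
            motion_vectors: (N, 2) motion vectors (u, v) for those blocks.
            Simplified model: u = pan + zoom * x,  v = tilt + zoom * y.
            """
            xy = np.asarray(positions, float)
            uv = np.asarray(motion_vectors, float)
            n = len(xy)
            # Stack the u- and v-equations into one system A @ [pan, tilt, zoom] = b.
            A = np.zeros((2 * n, 3))
            A[:n, 0] = 1.0        # pan contributes to u
            A[:n, 2] = xy[:, 0]   # zoom * x contributes to u
            A[n:, 1] = 1.0        # tilt contributes to v
            A[n:, 2] = xy[:, 1]   # zoom * y contributes to v
            b = np.concatenate([uv[:, 0], uv[:, 1]])
            (pan, tilt, zoom), *_ = np.linalg.lstsq(A, b, rcond=None)
            return pan, tilt, zoom

        # Synthetic check: pure zoom-out plus a small pan (made-up values).
        pos = np.array([[-8., -8.], [8., -8.], [-8., 8.], [8., 8.]])
        mv = 0.05 * pos + np.array([2.0, 0.0])   # zoom = 0.05, pan = 2, tilt = 0
        print(fit_pan_tilt_zoom(pos, mv))        # ~ (2.0, 0.0, 0.05)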

  7. The Relationship Between Speech Production and Speech Perception Deficits in Parkinson's Disease.

    PubMed

    De Keyser, Kim; Santens, Patrick; Bockstael, Annelies; Botteldooren, Dick; Talsma, Durk; De Vos, Stefanie; Van Cauwenberghe, Mieke; Verheugen, Femke; Corthals, Paul; De Letter, Miet

    2016-10-01

    This study investigated the possible relationship between hypokinetic speech production and speech intensity perception in patients with Parkinson's disease (PD). Participants included 14 patients with idiopathic PD and 14 matched healthy controls (HCs) with normal hearing and cognition. First, speech production was quantified through a standardized speech intelligibility assessment, acoustic analysis, and speech intensity measurements. Second, an overall estimation task and an intensity estimation task were administered to evaluate overall speech perception and speech intensity perception, respectively. Finally, correlation analysis was performed between the speech characteristics of the overall estimation task and the corresponding acoustic analysis. The interaction between speech production and speech intensity perception was investigated by an intensity imitation task. Acoustic analysis and speech intensity measurements demonstrated significant differences in speech production between patients with PD and the HCs. A different pattern in the auditory perception of speech and speech intensity was found in the PD group. Auditory perceptual deficits may influence speech production in patients with PD. The present results suggest a disturbed auditory perception related to an automatic monitoring deficit in PD.

  8. Variations in the perceptions of peer and coach motivational climate.

    PubMed

    Vazou, Spiridoula

    2010-06-01

    This study examined (a) variations in the perceptions of peer- and coach-generated motivational climate within and between teams and (b) individual- and group-level factors that can account for these variations. Participants were 483 athletes between 12 and 16 years old. The results showed that perceptions of both peer- and coach-generated climate varied as a function of group-level variables, namely team success, coach's gender (except for peer ego-involving climate), and team type (only for coach ego-involving climate). Perceptions of peer- and coach-generated climate also varied as a function of individual-level variables, namely athletes' task and ego orientations, gender, and age (only for coach task-involving and peer ego-involving climate). Moreover, within-team variations in perceptions of peer- and coach-generated climate as a function of task and ego orientation levels were identified. Identifying and controlling the factors that influence perceptions of peer- and coach-generated climate may be important in strengthening task-involving motivational cues.

  9. What Influences Principals' Perceptions of Academic Climate? A Nationally Representative Study of the Direct Effects of Perception on Climate

    ERIC Educational Resources Information Center

    Urick, Angela; Bowers, Alex J.

    2011-01-01

    Using a nationally representative sample of public high schools (N = 439), we examined the extent to which the principal's perception of their influence over instruction, the evaluation of nonacademic related tasks as well as academic related tasks, and their relationship with the school district relates to their perception of academic climate…

  10. Self-motion perception compresses time experienced in return travel.

    PubMed

    Seno, Takeharu; Ito, Hiroyuki; Sunaga, Shoji

    2011-01-01

    It is often anecdotally reported that time experienced in return travel (back to the start point) seems shorter than time spent in outward travel (travel to a new destination). Here, we report the first experimental results showing that return travel time is experienced as shorter than the actual time. This discrepancy is induced by the existence of self-motion perception.

  11. An Assessment of the Impact of a Science Outreach Program, Science In Motion, on Student Achievement, Teacher Efficacy, and Teacher Perception

    ERIC Educational Resources Information Center

    Herring, Phillip Allen

    2009-01-01

    The purpose of the study was to analyze the science outreach program, Science In Motion (SIM), located in Mobile, Alabama. This research investigated what impact the SIM program has on student cognitive functioning and teacher efficacy and also investigated teacher perceptions and attitudes regarding the program. To investigate student…

  12. He Throws like a Girl (but Only when He's Sad): Emotion Affects Sex-Decoding of Biological Motion Displays

    ERIC Educational Resources Information Center

    Johnson, Kerri L.; McKay, Lawrie S.; Pollick, Frank E.

    2011-01-01

    Gender stereotypes have been implicated in sex-typed perceptions of facial emotion. Such interpretations were recently called into question because facial cues of emotion are confounded with sexually dimorphic facial cues. Here we examine the role of visual cues and gender stereotypes in perceptions of biological motion displays, thus overcoming…

  13. Modeling the Relationship between Perceptions of Assessment Tasks and Classroom Assessment Environment as a Function of Gender

    ERIC Educational Resources Information Center

    Alkharusi, Hussain; Aldhafri, Said; Alnabhani, Hilal; Alkalbani, Muna

    2014-01-01

    A substantial proportion of the classroom time involves exposing students to a variety of assessment tasks. As students process these tasks, they develop beliefs about the importance, utility, value, and difficulty of the tasks. This study aimed at deriving a model describing the multivariate relationship between students' perceptions of the…

  14. Increase in MST activity correlates with visual motion learning: A functional MRI study of perceptual learning.

    PubMed

    Larcombe, Stephanie J; Kennard, Chris; Bridge, Holly

    2018-01-01

    Repeated practice of a specific task can improve visual performance, but the neural mechanisms underlying this improvement in performance are not yet well understood. Here we trained healthy participants on a visual motion task daily for 5 days in one visual hemifield. Before and after training, we used functional magnetic resonance imaging (fMRI) to measure the change in neural activity. We also imaged, on two occasions, a control group of participants who did not receive any task training. While in the MRI scanner, all participants completed the motion task in the trained and untrained visual hemifields separately. Following training, participants improved their ability to discriminate motion direction in the trained hemifield and, to a lesser extent, in the untrained hemifield. The amount of task learning correlated positively with the change in activity in the medial superior temporal (MST) area. MST is the anterior portion of the human motion complex (hMT+). MST changes were localized to the hemisphere contralateral to the region of the visual field where perceptual training was delivered. Visual areas V2 and V3a showed an increase in activity between the first and second scan in the training group, but this was not correlated with performance. The contralateral anterior hippocampus and bilateral dorsolateral prefrontal cortex (DLPFC) and frontal pole showed changes in neural activity that also correlated with the amount of task learning. These findings emphasize the importance of MST in perceptual learning of a visual motion task. Hum Brain Mapp 39:145-156, 2018. © 2017 Wiley Periodicals, Inc. © 2017 The Authors. Human Brain Mapping published by Wiley Periodicals, Inc.

  15. 3D surface perception from motion involves a temporal–parietal network

    PubMed Central

    Beer, Anton L.; Watanabe, Takeo; Ni, Rui; Sasaki, Yuka; Andersen, George J.

    2010-01-01

    Previous research has suggested that three-dimensional (3D) structure-from-motion (SFM) perception in humans involves several motion-sensitive occipital and parietal brain areas. By contrast, SFM perception in nonhuman primates seems to involve the temporal lobe including areas MT, MST and FST. The present functional magnetic resonance imaging study compared several motion-sensitive regions of interest including the superior temporal sulcus (STS) while human observers viewed horizontally moving dots that defined either a 3D corrugated surface or a 3D random volume. Low-level stimulus features such as dot density and velocity vectors as well as attention were tightly controlled. Consistent with previous research we found that 3D corrugated surfaces elicited stronger responses than random motion in occipital and parietal brain areas including area V3A, the ventral and dorsal intraparietal sulcus, the lateral occipital sulcus and the fusiform gyrus. Additionally, 3D corrugated surfaces elicited stronger activity in area MT and the STS but not in area MST. Brain activity in the STS but not in area MT correlated with interindividual differences in 3D surface perception. Our findings suggest that area MT is involved in the analysis of optic flow patterns such as speed gradients and that the STS in humans plays a greater role in the analysis of 3D SFM than previously thought. PMID:19674088

  16. Stereoscopic distance perception

    NASA Technical Reports Server (NTRS)

    Foley, John M.

    1989-01-01

    Limited-cue, open-loop tasks in which a human observer indicates distances or relations among distances are discussed. Open-loop tasks are those in which the observer gets no feedback as to the accuracy of the responses. What happens when cues are added and when the loop is closed is also considered. The implications of this research for the effectiveness of visual displays are discussed. Errors in visual distance tasks do not necessarily mean that the percept is in error. The error could arise in transformations that intervene between the percept and the response. It is argued that the percept is in error. It is also argued that there exist post-perceptual transformations that may contribute to the error or be modified by feedback to correct for the error.

  17. Correlated individual differences suggest a common mechanism underlying metacognition in visual perception and visual short-term memory.

    PubMed

    Samaha, Jason; Postle, Bradley R

    2017-11-29

    Adaptive behaviour depends on the ability to introspect accurately about one's own performance. Whether this metacognitive ability is supported by the same mechanisms across different tasks is unclear. We investigated the relationship between metacognition of visual perception and metacognition of visual short-term memory (VSTM). Experiments 1 and 2 required subjects to estimate the perceived or remembered orientation of a grating stimulus and rate their confidence. We observed strong positive correlations between individual differences in metacognitive accuracy between the two tasks. This relationship was not accounted for by individual differences in task performance or average confidence, and was present across two different metrics of metacognition and in both experiments. A model-based analysis of data from a third experiment showed that a cross-domain correlation only emerged when both tasks shared the same task-relevant stimulus feature. That is, metacognition for perception and VSTM were correlated when both tasks required orientation judgements, but not when the perceptual task was switched to require contrast judgements. In contrast with previous results comparing perception and long-term memory, which have largely provided evidence for domain-specific metacognitive processes, the current findings suggest that metacognition of visual perception and VSTM is supported by a domain-general metacognitive architecture, but only when both domains share the same task-relevant stimulus feature. © 2017 The Author(s).

  18. Holistic face perception is modulated by experience-dependent perceptual grouping.

    PubMed

    Curby, Kim M; Entenman, Robert J; Fleming, Justin T

    2016-07-01

    What role do general-purpose, experience-sensitive perceptual mechanisms play in producing characteristic features of face perception? We previously demonstrated that different-colored, misaligned framing backgrounds, designed to disrupt perceptual grouping of face parts appearing upon them, disrupt holistic face perception. In the current experiments, a similar part-judgment task with composite faces was performed: face parts appeared in either misaligned, different-colored rectangles or aligned, same-colored rectangles. To investigate whether experience can shape impacts of perceptual grouping on holistic face perception, a pre-task fostered the perception of either (a) the misaligned, differently colored rectangle frames as parts of a single, multicolored polygon or (b) the aligned, same-colored rectangle frames as a single square shape. Faces appearing in the misaligned, differently colored rectangles were processed more holistically by those in the polygon-, compared with the square-, pre-task group. Holistic effects for faces appearing in aligned, same-colored rectangles showed the opposite pattern. Experiment 2, which included a pre-task condition fostering the perception of the aligned, same-colored frames as pairs of independent rectangles, provided converging evidence that experience can modulate impacts of perceptual grouping on holistic face perception. These results are surprising given the proposed impenetrability of holistic face perception and provide insights into the elusive mechanisms underlying holistic perception.

  19. Visual and motion cueing in helicopter simulation

    NASA Technical Reports Server (NTRS)

    Bray, R. S.

    1985-01-01

    Early experience in fixed-cockpit simulators, with limited field of view, demonstrated the basic difficulties of simulating helicopter flight at the level of subjective fidelity required for confident evaluation of vehicle characteristics. More recent programs, utilizing large-amplitude cockpit motion and a multiwindow visual-simulation system, have received a much higher degree of pilot acceptance. However, none of these simulations has presented critical visual-flight tasks that have been accepted by the pilots as the full equivalent of flight. In this paper, the visual cues presented in the simulator are compared with those of flight in an attempt to identify deficiencies that contribute significantly to these assessments. For the low-amplitude maneuvering tasks normally associated with the hover mode, the unique motion capabilities of the Vertical Motion Simulator (VMS) at Ames Research Center permit nearly a full representation of vehicle motion. Especially appreciated in these tasks are the vertical-acceleration responses to collective control. For larger-amplitude maneuvering, motion fidelity must suffer diminution through direct attenuation, through high-pass-filter (washout) processing of the computed cockpit accelerations, or both. Experiments were conducted in an attempt to determine the effects of these distortions on pilot performance of height-control tasks.

  20. Video quality assessment using a statistical model of human visual speed perception.

    PubMed

    Wang, Zhou; Li, Qiang

    2007-12-01

    Motion is one of the most important types of information contained in natural video, but direct use of motion information in the design of video quality assessment algorithms has not been deeply investigated. Here we propose to incorporate a recent model of human visual speed perception [Nat. Neurosci. 9, 578 (2006)] and model visual perception in an information communication framework. This allows us to estimate both the motion information content and the perceptual uncertainty in video signals. Improved video quality assessment algorithms are obtained by incorporating the model as spatiotemporal weighting factors, where the weight increases with the information content and decreases with the perceptual uncertainty. Consistent improvement over existing video quality assessment algorithms is observed in our validation with the video quality experts group Phase I test data set.
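
    The weighting rule described above (weights that grow with motion information content and shrink with perceptual uncertainty) can be sketched generically as follows; the maps, the ratio-style combination, and the pooling below are assumptions for illustration, not the paper's algorithm.

        import numpy as np

        def weighted_distortion(distortion_map, information_map, uncertainty_map, eps=1e-6):
            """Pool a per-location distortion map with perceptual weights.

            Each location's weight increases with its estimated motion information
            content and decreases with its perceptual uncertainty, following the
            weighting idea described in the abstract.
            """
            w = np.asarray(information_map, float) / (np.asarray(uncertainty_map, float) + eps)
            d = np.asarray(distortion_map, float)
            return float((w * d).sum() / (w.sum() + eps))

        # Hypothetical 4x4 maps: distortion in a high-information, low-uncertainty
        # region is weighted more heavily than the same distortion elsewhere.
        rng = np.random.default_rng(1)
        dist = rng.uniform(0.0, 1.0, (4, 4))
        info = rng.uniform(0.5, 2.0, (4, 4))
        unc = rng.uniform(0.5, 2.0, (4, 4))
        print(weighted_distortion(dist, info, unc))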

  1. Kinesthetic information disambiguates visual motion signals.

    PubMed

    Hu, Bo; Knill, David C

    2010-05-25

    Numerous studies have shown that extra-retinal signals can disambiguate motion information created by movements of the eye or head. We report a new form of cross-modal sensory integration in which the kinesthetic information generated by active hand movements essentially captures ambiguous visual motion information. Several previous studies have shown that active movement can bias observers' percepts of bi-stable stimuli; however, these effects seem to be best explained by attentional mechanisms. We show that kinesthetic information can change an otherwise stable perception of motion, providing evidence of genuine fusion between visual and kinesthetic information. The experiments take advantage of the aperture problem, in which the motion of a one-dimensional grating pattern behind an aperture, while geometrically ambiguous, appears to move stably in the grating normal direction. When actively moving the pattern, however, the observer sees the motion to be in the hand movement direction. Copyright 2010 Elsevier Ltd. All rights reserved.

  2. The Verriest Lecture: Color lessons from space, time, and motion

    PubMed Central

    Shevell, Steven K.

    2012-01-01

    The appearance of a chromatic stimulus depends on more than the wavelengths composing it. The scientific literature has countless examples showing that spatial and temporal features of light influence the colors we see. Studying chromatic stimuli that vary over space, time or direction of motion has a further benefit beyond predicting color appearance: the unveiling of otherwise concealed neural processes of color vision. Spatial or temporal stimulus variation uncovers multiple mechanisms of brightness and color perception at distinct levels of the visual pathway. Spatial variation in chromaticity and luminance can change perceived three-dimensional shape, an example of chromatic signals that affect a percept other than color. Chromatic objects in motion expose the surprisingly weak link between the chromaticity of objects and their physical direction of motion, and the role of color in inducing an illusory motion direction. Space, time and motion – color’s colleagues – reveal the richness of chromatic neural processing. PMID:22330398

  3. The effect of attention loading on the inhibition of choice reaction time to visual motion by concurrent rotary motion

    NASA Technical Reports Server (NTRS)

    Looper, M.

    1976-01-01

    This study investigates the influence of attention loading on the established intersensory effects of passive bodily rotation on choice reaction time (RT) to visual motion. Subjects sat at the center of rotation in an enclosed rotating chamber and observed an oscilloscope on which were displayed, in the center, a tracking display and, 10 deg left of center, an RT line. Three tracking tasks and a no-tracking control condition were presented to all subjects in combination with the RT task, which occurred with and without concurrent cab rotations. Choice RT to line motions was inhibited (probability less than .001) both when there was simultaneous vestibular stimulation and when there was a tracking task; response latencies lengthened progressively with increased similarity between the RT and tracking tasks. However, the attention conditions did not affect the intersensory effect; the significance of this for the nature of the sensory interaction is discussed.

  4. Dynamic Integration of Task-Relevant Visual Features in Posterior Parietal Cortex

    PubMed Central

    Freedman, David J.

    2014-01-01

    Summary The primate visual system consists of multiple hierarchically organized cortical areas, each specialized for processing distinct aspects of the visual scene. For example, color and form are encoded in ventral pathway areas such as V4 and inferior temporal cortex, while motion is preferentially processed in dorsal pathway areas such as the middle temporal area. Such representations often need to be integrated perceptually to solve tasks which depend on multiple features. We tested the hypothesis that the lateral intraparietal area (LIP) integrates disparate task-relevant visual features by recording from LIP neurons in monkeys trained to identify target stimuli composed of conjunctions of color and motion features. We show that LIP neurons exhibit integrative representations of both color and motion features when they are task relevant, and task-dependent shifts of both direction and color tuning. This suggests that LIP plays a role in flexibly integrating task-relevant sensory signals. PMID:25199703

  5. The effects of practice with MP3 players on driving performance.

    PubMed

    Chisholm, S L; Caird, J K; Lockhart, J

    2008-03-01

    This study examined the effects of repeated iPod interactions on driver performance to determine if performance decrements decreased with practice. Nineteen younger drivers (mean age=19.4, range 18-22) participated in a seven-session study in the University of Calgary Driving Simulator (UCDS). Drivers encountered a number of critical events on the roadways while interacting with an iPod, including a pedestrian entering the roadway, a vehicle pullout, and a lead vehicle braking. Measures of hazard response, vehicle control, eye movements, and secondary task performance were analyzed. Increases in perception response time (PRT) and collisions were found while drivers were performing the difficult iPod tasks, which involved finding a specific song within the song titles menu. Over the course of the six experimental sessions, driving performance improved in all conditions. Difficult iPod interactions significantly increased the amount of visual attention directed into the vehicle above that of the baseline condition. With practice, slowed responses to driving hazards while interacting with the iPod declined somewhat, but a decrement still remained relative to the baseline condition. The multivariate results suggest that access to difficult iPod tasks while vehicles are in motion should be curtailed.

  6. Motion and Actions in Language: Semantic Representations in Occipito-Temporal Cortex

    ERIC Educational Resources Information Center

    Humphreys, Gina F.; Newling, Katherine; Jennings, Caroline; Gennari, Silvia P.

    2013-01-01

    Understanding verbs typically activates posterior temporal regions and, in some circumstances, motion perception area V5. However, the nature and role of this activation remains unclear: does language alone indeed activate V5? And are posterior temporal representations modality-specific motion representations, or supra-modal motion-independent…

  7. The neural encoding of self-generated and externally applied movement: implications for the perception of self-motion and spatial memory

    PubMed Central

    Cullen, Kathleen E.

    2014-01-01

    The vestibular system is vital for maintaining an accurate representation of self-motion. As one moves (or is moved) toward a new place in the environment, signals from the vestibular sensors are relayed to higher-order centers. It is generally assumed the vestibular system provides a veridical representation of head motion to these centers for the perception of self-motion and spatial memory. In support of this idea, evidence from lesion studies suggests that vestibular inputs are required for the directional tuning of head direction cells in the limbic system as well as neurons in areas of multimodal association cortex. However, recent investigations in monkeys and mice challenge the notion that early vestibular pathways encode an absolute representation of head motion. Instead, processing at the first central stage is inherently multimodal. This minireview highlights recent progress that has been made towards understanding how the brain processes and interprets self-motion signals encoded by the vestibular otoliths and semicircular canals during everyday life. The following interrelated questions are considered. What information is available to the higher-order centers that contribute to self-motion perception? How do we distinguish between our own self-generated movements and those of the external world? And lastly, what are the implications of differences in the processing of these active vs. passive movements for spatial memory? PMID:24454282

  8. Multimodal Perception and Multicriterion Control of Nested Systems. 1; Coordination of Postural Control and Vehicular Control

    NASA Technical Reports Server (NTRS)

    Riccio, Gary E.; McDonald, P. Vernon

    1998-01-01

    The purpose of this report is to identify the essential characteristics of goal-directed whole-body motion. The report is organized into three major sections (Sections 2, 3, and 4). Section 2 reviews general themes from ecological psychology and control-systems engineering that are relevant to the perception and control of whole-body motion. These themes provide an organizational framework for analyzing the complex and interrelated phenomena that are the defining characteristics of whole-body motion. Section 3 of this report applies the organizational framework from the first section to the problem of perception and control of aircraft motion. This is a familiar problem in control-systems engineering and ecological psychology. Section 4 examines an essential but generally neglected aspect of vehicular control: coordination of postural control and vehicular control. To facilitate presentation of this new idea, postural control and its coordination with vehicular control are analyzed in terms of conceptual categories that are familiar in the analysis of vehicular control.

  9. Speech Perception Deficits in Mandarin-Speaking School-Aged Children with Poor Reading Comprehension

    PubMed Central

    Liu, Huei-Mei; Tsao, Feng-Ming

    2017-01-01

    Previous studies have shown that children learning alphabetic writing systems who have language impairment or dyslexia exhibit speech perception deficits. However, whether such deficits exist in children learning logographic writing systems who have poor reading comprehension remains uncertain. To further explore this issue, the present study examined speech perception deficits in Mandarin-speaking children with poor reading comprehension. Two self-designed tasks, a consonant categorical perception task and a lexical tone discrimination task, were used to compare speech perception performance in children (n = 31, age range = 7;4–10;2) with poor reading comprehension and an age-matched typically developing group (n = 31, age range = 7;7–9;10). Results showed that the children with poor reading comprehension were less accurate in consonant and lexical tone discrimination tasks and perceived speech contrasts less categorically than the matched group. The correlations between speech perception skills (i.e., consonant and lexical tone discrimination sensitivities and slope of consonant identification curve) and individuals’ oral language and reading comprehension were stronger than the correlations between speech perception ability and word recognition ability. In conclusion, the results revealed that Mandarin-speaking children with poor reading comprehension exhibit less-categorized speech perception, suggesting that imprecise speech perception, especially lexical tone perception, is essential to account for reading learning difficulties in Mandarin-speaking children. PMID:29312031

  10. Recognizing biological motion and emotions from point-light displays in autism spectrum disorders.

    PubMed

    Nackaerts, Evelien; Wagemans, Johan; Helsen, Werner; Swinnen, Stephan P; Wenderoth, Nicole; Alaerts, Kaat

    2012-01-01

    One of the main characteristics of Autism Spectrum Disorder (ASD) is problems with social interaction and communication. Here, we explored ASD-related alterations in 'reading' body language of other humans. Accuracy and reaction times were assessed from two observational tasks involving the recognition of 'biological motion' and 'emotions' from point-light displays (PLDs). Eye movements were recorded during the completion of the tests. Results indicated that typically developed participants were more accurate than ASD subjects in recognizing biological motion or emotions from PLDs. No accuracy differences were revealed on two control-tasks (involving the indication of color-changes in the moving point-lights). Group differences in reaction times existed on all tasks, but effect sizes were higher for the biological and emotion recognition tasks. Biological motion recognition abilities were related to a person's ability to recognize emotions from PLDs. However, ASD-related atypicalities in emotion recognition could not entirely be attributed to more basic deficits in biological motion recognition, suggesting an additional ASD-specific deficit in recognizing the emotional dimension of the point light displays. Eye movements were assessed during the completion of tasks and results indicated that ASD-participants generally produced more saccades and shorter fixation-durations compared to the control-group. However, especially for emotion recognition, these altered eye movements were associated with reductions in task-performance.

  11. Cortical activity patterns predict speech discrimination ability

    PubMed Central

    Engineer, Crystal T; Perez, Claudia A; Chen, YeTing H; Carraway, Ryan S; Reed, Amanda C; Shetake, Jai A; Jakkamsetti, Vikram; Chang, Kevin Q; Kilgard, Michael P

    2010-01-01

    Neural activity in the cerebral cortex can explain many aspects of sensory perception. Extensive psychophysical and neurophysiological studies of visual motion and vibrotactile processing show that the firing rate of cortical neurons averaged across 50–500 ms is well correlated with discrimination ability. In this study, we tested the hypothesis that primary auditory cortex (A1) neurons use temporal precision on the order of 1–10 ms to represent speech sounds shifted into the rat hearing range. Neural discrimination was highly correlated with behavioral performance on 11 consonant-discrimination tasks when spike timing was preserved and was not correlated when spike timing was eliminated. This result suggests that spike timing contributes to the auditory cortex representation of consonant sounds. PMID:18425123

  12. Role of orientation reference selection in motion sickness

    NASA Technical Reports Server (NTRS)

    Peterka, Robert J.; Black, F. Owen

    1992-01-01

    The overall objective of this proposal is to understand the relationship between human orientation control and motion sickness susceptibility. Three areas related to orientation control will be investigated. These three areas are (1) reflexes associated with the control of eye movements and posture, (2) the perception of body rotation and position with respect to gravity, and (3) the strategies used to resolve sensory conflict situations which arise when different sensory systems provide orientation cues which are not consistent with one another or with previous experience. Of particular interest is the possibility that a subject may be able to ignore an inaccurate sensory modality in favor of one or more other sensory modalities which do provide accurate orientation reference information. We refer to this process as sensory selection. This proposal will attempt to quantify subjects' sensory selection abilities and determine if this ability confers some immunity to the development of motion sickness symptoms. Measurements of reflexes, motion perception, sensory selection abilities, and motion sickness susceptibility will concentrate on pitch and roll motions since these seem most relevant to the space motion sickness problem. Vestibulo-ocular (VOR) and oculomotor reflexes will be measured using a unique two-axis rotation device developed in our laboratory over the last seven years. Posture control reflexes will be measured using a movable posture platform capable of independently altering proprioceptive and visual orientation cues. Motion perception will be quantified using closed loop feedback technique developed by Zacharias and Young (Exp Brain Res, 1981). This technique requires a subject to null out motions induced by the experimenter while being exposed to various confounding sensory orientation cues. A subject's sensory selection abilities will be measured by the magnitude and timing of his reactions to changes in sensory environments. Motion sickness susceptibility will be measured by the time required to induce characteristic changes in the pattern of electrogastrogram recordings while exposed to various sensory environments during posture and motion perception tests. The results of this work are relevant to NASA's interest in understanding the etiology of space motion sickness. If any of the reflex, perceptual, or sensory selection abilities of subjects are found to correlate with motion sickness susceptibility, this work may be an important step in suggesting a method of predicting motion sickness susceptibility. If sensory selection can provide a means to avoid sensory conflict, then further work may lead to training programs which could enhance a subject's sensory selection ability and therefore minimize motion sickness susceptibility.

  13. Shared sensory estimates for human motion perception and pursuit eye movements.

    PubMed

    Mukherjee, Trishna; Battifarano, Matthew; Simoncini, Claudio; Osborne, Leslie C

    2015-06-03

    Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways or is centrally represented sensory activity decoded independently to drive awareness and action? Questions about the brain's information flow pose a challenge because systems-level estimates of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual motion and provides the visual inputs for behaviors such as smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well studied, sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion. Here, we analyzed variability in visually driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulated the signal-to-noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise sources in the perception and action pathways arising from a common sensory estimate. We found that conditions that create poor, low-gain pursuit create a discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile much of the controversy on this topic. Copyright © 2015 the authors 0270-6474/15/358515-16$15.00/0.

  14. Shared Sensory Estimates for Human Motion Perception and Pursuit Eye Movements

    PubMed Central

    Mukherjee, Trishna; Battifarano, Matthew; Simoncini, Claudio

    2015-01-01

    Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways or is centrally represented sensory activity decoded independently to drive awareness and action? Questions about the brain's information flow pose a challenge because systems-level estimates of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual motion and provides the visual inputs for behaviors such as smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well studied, sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion. Here, we analyzed variability in visually driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulated the signal-to-noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise sources in the perception and action pathways arising from a common sensory estimate. We found that conditions that create poor, low-gain pursuit create a discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile much of the controversy on this topic. PMID:26041919

  15. Motion coherence and direction discrimination in healthy aging.

    PubMed

    Pilz, Karin S; Miller, Louisa; Agnew, Hannah C

    2017-01-01

    Perceptual functions change with age, particularly motion perception. With regard to healthy aging, previous studies mostly measured motion coherence thresholds for coarse motion direction discrimination along cardinal axes of motion. Here, we investigated age-related changes in the ability to discriminate between small angular differences in motion directions, which allows for a more specific assessment of age-related decline and its underlying mechanisms. We first assessed older (>60 years) and younger (<30 years) participants' ability to discriminate coarse horizontal (left/right) and vertical (up/down) motion at 100% coherence and a stimulus duration of 400 ms. In a second step, we determined participants' motion coherence thresholds for vertical and horizontal coarse motion direction discrimination. In a third step, we used the individually determined motion coherence thresholds and tested fine motion direction discrimination for motion clockwise away from horizontal and vertical motion. Older adults performed as well as younger adults for discriminating motion away from vertical. Surprisingly, performance for discriminating motion away from horizontal was strongly decreased. Further analyses, however, showed a relationship between motion coherence thresholds for horizontal coarse motion direction discrimination and fine motion direction discrimination performance in older adults. In a control experiment, using motion coherence above threshold for all conditions, the difference in performance for horizontal and vertical fine motion direction discrimination for older adults disappeared. These results clearly contradict the notion of an overall age-related decline in motion perception, and, most importantly, highlight the importance of taking into account individual differences when assessing age-related changes in perceptual functions.
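
    The abstract does not state how the motion coherence thresholds were determined. As a generic illustration only of one common approach (a 2-down/1-up adaptive staircase, run here against a hypothetical simulated observer), threshold estimation can be sketched as:

        import numpy as np

        def two_down_one_up(trial_fn, start=0.5, step=0.05, n_trials=60, floor=0.01, ceiling=1.0):
            """2-down/1-up adaptive staircase (converges near 70.7% correct).

            trial_fn(coherence) returns True for a correct response at that motion
            coherence level. Returns the array of coherence levels that were tested.
            """
            level, streak, track = start, 0, []
            for _ in range(n_trials):
                track.append(level)
                if trial_fn(level):
                    streak += 1
                    if streak == 2:                        # two consecutive correct -> harder
                        level, streak = max(floor, level - step), 0
                else:                                      # any error -> easier
                    level, streak = min(ceiling, level + step), 0
            return np.array(track)

        # Hypothetical simulated observer with a coherence threshold near 0.2.
        rng = np.random.default_rng(2)

        def simulated_trial(coherence, threshold=0.2, slope=0.05):
            p_correct = 0.5 + 0.5 / (1.0 + np.exp(-(coherence - threshold) / slope))
            return rng.random() < p_correct

        track = two_down_one_up(simulated_trial)
        print(f"staircase settles near {track[len(track) // 2:].mean():.2f} coherence")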

  16. Prosodic Perception Problems in Spanish Dyslexia

    ERIC Educational Resources Information Center

    Cuetos, Fernando; Martínez-García, Cristina; Suárez-Coalla, Paz

    2018-01-01

    The aim of this study was to investigate the prosody abilities on top of phonological and visual abilities in children with dyslexia in Spanish that can be considered a syllable-timed language. The performances on prosodic tasks (prosodic perception, rise-time perception), phonological tasks (phonological awareness, rapid naming, verbal working…

  17. Gravito-Inertial Force Resolution in Perception of Synchronized Tilt and Translation

    NASA Technical Reports Server (NTRS)

    Wood, Scott J.; Holly, Jan; Zhang, Guen-Lu

    2011-01-01

    Natural movements in the sagittal plane involve pitch tilt relative to gravity combined with translation motion. The Gravito-Inertial Force (GIF) resolution hypothesis states that the resultant force on the body is perceptually resolved into tilt and translation consistently with the laws of physics. The purpose of this study was to test this hypothesis for human perception during combined tilt and translation motion. EXPERIMENTAL METHODS: Twelve subjects provided verbal reports during 0.3 Hz motion in the dark with 4 types of tilt and/or translation motion: 1) pitch tilt about an interaural axis at +/-10deg or +/-20deg, 2) fore-aft translation with acceleration equivalent to +/-10deg or +/-20deg, 3) combined "in phase" tilt and translation motion resulting in acceleration equivalent to +/-20deg, and 4) "out of phase" tilt and translation motion that maintained the resultant gravito-inertial force aligned with the longitudinal body axis. The amplitude of perceived pitch tilt and translation at the head were obtained during separate trials. MODELING METHODS: Three-dimensional mathematical modeling was performed to test the GIF-resolution hypothesis using a dynamical model. The model encoded GIF-resolution using the standard vector equation, and used an internal model of motion parameters, including gravity. Differential equations conveyed time-varying predictions. The six motion profiles were tested, resulting in predicted perceived amplitude of tilt and translation for each. RESULTS: The modeling results exhibited the same pattern as the experimental results. Most importantly, both modeling and experimental results showed greater perceived tilt during the "in phase" profile than the "out of phase" profile, and greater perceived tilt during combined "in phase" motion than during pure tilt of the same amplitude. However, the model did not predict as much perceived translation as reported by subjects during pure tilt. CONCLUSION: Human perception is consistent with the GIF-resolution hypothesis even when the gravito-inertial force vector remains aligned with the body during periodic motion. Perception is also consistent with GIF-resolution in the opposite condition, when the gravito-inertial force vector angle is enhanced by synchronized tilt and translation.
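
    Under the GIF-resolution vector relation, the otolith organs sense only the resultant of gravity and linear acceleration, so a fore-aft acceleration and a pitch tilt are "equivalent" when they produce the same shear of the resultant along the body axis. The following quick arithmetic check of the tilt-equivalent accelerations (a sketch of the standard small-angle equivalence, not the authors' three-dimensional model) reproduces the ±1.7 m/s2 figure quoted in the spaceflight records above for a 10 deg tilt.

        import numpy as np

        G = 9.81  # m/s2

        def tilt_equivalent_acceleration(tilt_deg):
            """Linear acceleration producing the same GIF shear as a static tilt."""
            return G * np.sin(np.radians(tilt_deg))

        def acceleration_equivalent_tilt(accel_ms2):
            """Static tilt producing the same GIF shear as a linear acceleration."""
            return np.degrees(np.arcsin(accel_ms2 / G))

        print(tilt_equivalent_acceleration(10.0))   # ~1.70 m/s2
        print(tilt_equivalent_acceleration(20.0))   # ~3.36 m/s2
        print(acceleration_equivalent_tilt(1.7))    # ~10 deg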

  18. A selective impairment of perception of sound motion direction in peripheral space: A case study.

    PubMed

    Thaler, Lore; Paciocco, Joseph; Daley, Mark; Lesniak, Gabriella D; Purcell, David W; Fraser, J Alexander; Dutton, Gordon N; Rossit, Stephanie; Goodale, Melvyn A; Culham, Jody C

    2016-01-08

    It is still an open question if the auditory system, similar to the visual system, processes auditory motion independently from other aspects of spatial hearing, such as static location. Here, we report psychophysical data from a patient (female, 42 and 44 years old at the time of two testing sessions), who suffered a bilateral occipital infarction over 12 years earlier, and who has extensive damage in the occipital lobe bilaterally, extending into inferior posterior temporal cortex bilaterally and into right parietal cortex. We measured the patient's spatial hearing ability to discriminate static location, detect motion and perceive motion direction in both central (straight ahead), and right and left peripheral auditory space (50° to the left and right of straight ahead). Compared to control subjects, the patient was impaired in her perception of direction of auditory motion in peripheral auditory space, and the deficit was more pronounced on the right side. However, there was no impairment in her perception of the direction of auditory motion in central space. Furthermore, detection of motion and discrimination of static location were normal in both central and peripheral space. The patient also performed normally in a wide battery of non-spatial audiological tests. Our data are consistent with previous neuropsychological and neuroimaging results that link posterior temporal cortex and parietal cortex with the processing of auditory motion. Most importantly, however, our data break new ground by suggesting a division of auditory motion processing in terms of speed and direction and in terms of central and peripheral space. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Transfer of learning between unimanual and bimanual rhythmic movement coordination: transfer is a function of the task dynamic.

    PubMed

    Snapp-Childs, Winona; Wilson, Andrew D; Bingham, Geoffrey P

    2015-07-01

    Under certain conditions, learning can transfer from a trained task to an untrained version of that same task. However, it is as yet unclear what those certain conditions are or why learning transfers when it does. Coordinated rhythmic movement is a valuable model system for investigating transfer because we have a model of the underlying task dynamic that includes perceptual coupling between the limbs being coordinated. The model predicts that (1) coordinated rhythmic movements, both bimanual and unimanual, are organised with respect to relative motion information for relative phase in the coupling function, (2) unimanual coordination is less stable than bimanual coordination because the coupling is unidirectional rather than bidirectional, and (3) learning a new coordination is primarily about learning to perceive and use the relevant information, which, with equal perceptual improvement due to training, yields equal transfer of learning from bimanual to unimanual coordination and vice versa [but, given prediction (2), the resulting performance is also conditioned by the intrinsic stability of each task]. In the present study, two groups were trained to produce 90° either unimanually or bimanually, and tested with respect to learning (namely, improved performance in the trained 90° coordination task and improved visual discrimination of 90°) and transfer of learning (to the other, untrained 90° coordination task). Both groups improved in the task condition in which they were trained and in their ability to visually discriminate 90°, and this learning transferred to the untrained condition. When scaled by the relative intrinsic stability of each task, transfer levels were found to be equal. The results are discussed in the context of the perception-action approach to learning and performance.
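
    To make prediction (2) concrete, the toy simulation below (not the authors' published task-dynamic model; all parameters are assumptions) couples two phase oscillators toward a 90° relative phase. In the "bimanual" case both oscillators adjust to each other; in the "unimanual" case only one does, halving the effective restoring strength and increasing relative-phase variability.

```python
# Toy illustration of bidirectional vs unidirectional coupling at 90 deg relative phase
# (hypothetical parameters; not the model used in the study).
import numpy as np

def relative_phase_sd(bidirectional, target=np.pi / 2, k=2.0, noise=5.0,
                      dt=0.01, steps=20_000, seed=1):
    rng = np.random.default_rng(seed)
    phi1, phi2 = 0.0, target          # start at the intended 90 deg relative phase
    omega = 2 * np.pi                 # 1 Hz movement frequency
    rel = np.empty(steps)
    for t in range(steps):
        err = (phi2 - phi1) - target
        d1 = omega + k * np.sin(err) + noise * rng.standard_normal()
        d2 = omega - (k * np.sin(err) if bidirectional else 0.0) \
             + noise * rng.standard_normal()
        phi1 += d1 * dt
        phi2 += d2 * dt
        rel[t] = phi2 - phi1
    # SD of the wrapped relative-phase error around the 90 deg target, in degrees.
    return np.degrees(np.std(np.angle(np.exp(1j * (rel - target)))))

print("bimanual (bidirectional coupling) SD:", round(relative_phase_sd(True), 1), "deg")
print("unimanual (unidirectional coupling) SD:", round(relative_phase_sd(False), 1), "deg")
```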

  20. Tolerability to prolonged lifting tasks. A validation of the recommended limits.

    PubMed

    Capodaglio, P; Bazzini, G

    1997-01-01

    Prolonged physical exertion is subjectively regulated by the perception of effort. This preliminary study was conducted to validate the use of subjective perceptions of effort in assessing objectively tolerable workloads for prolonged lifting tasks. Ten healthy male subjects tested their maximal lifting capacity (MLC) on a lift dynamometer (LidoLift, Loredan Biomed., West Sacramento, CA) and underwent incremental and 30-minute endurance lifting tests. Cardiorespiratory parameters were monitored with an oxygen uptake analyzer, mechanical parameters were calculated using a computerized dynamometer. Ratings of perceived exertion were given on Borg's 10-point scale. Physiological responses to repetitive lifting were matched with subjective perceptions. A single-variable statistical regression for power functions was performed to obtain the individual "iso-perception" curves as functions of the mechanical work exerted. We found that the "iso-perception" curve corresponding to a "moderate" perception of effort may represent the individual "tolerance threshold" for prolonged lifting tasks, since physiological responses at this level of intensity did not change significantly and the respiratory exchange ratio was less than one. The individually tolerable weight for lifting tasks lasting 30 min has been expressed as a percentage of the isoinertial MLC value and compared with the currently recommended limits for prolonged lifting tasks (Italian legislation D.L. 626/94). On the basis of our preliminary results a "tolerance threshold" of 20% MLC has been proposed for prolonged lifting tasks.
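
    The "iso-perception" curves come from fitting a power function relating perceived exertion to mechanical work. A minimal sketch of such a fit follows, with made-up numbers (none of these values come from the study): the fit is done by ordinary least squares on log-transformed data and then inverted to find the workload corresponding to a given rating.

```python
# Hypothetical power-function ("iso-perception") fit: RPE = a * W**b,
# estimated by linear regression in log-log coordinates. Synthetic data only.
import numpy as np

work = np.array([200.0, 400.0, 800.0, 1600.0])   # mechanical work per trial (J), assumed
rpe  = np.array([1.5,   2.5,   4.0,   6.5])      # Borg CR-10 ratings, assumed

b, log_a = np.polyfit(np.log(work), np.log(rpe), 1)  # log RPE = log a + b * log W
a = np.exp(log_a)
print(f"RPE ~= {a:.3f} * W^{b:.2f}")

# Invert the fit to find the workload corresponding to a "moderate" rating (RPE = 3),
# i.e., one point on an iso-perception curve.
rpe_moderate = 3.0
w_moderate = (rpe_moderate / a) ** (1.0 / b)
print(f"Workload at 'moderate' effort: {w_moderate:.0f} J")
```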

  1. MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.

    PubMed

    Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik

    2016-01-01

    Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows users to incorporate the context of the data and their own perception of pose similarities when generating representative pose states. We provide local and global controls over the partition-based clustering process. To support users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences in the data, exploring data subsets in detail, and creating and modifying groups of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and that the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.
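
    The core aggregation step (formulating each sequence as pose-to-pose transitions and merging sequences into a tree) can be sketched as a prefix tree whose node counts record how many sequences share each pattern. This is an illustrative reconstruction of the idea, not MotionFlow's actual implementation, and the pose labels are hypothetical.

```python
def build_pattern_tree(sequences):
    """Aggregate pose-state sequences into a prefix tree whose node counts
    record how many sequences pass through each shared pose pattern."""
    root = {"count": 0, "children": {}}
    for seq in sequences:
        node = root
        node["count"] += 1
        for pose in seq:
            node = node["children"].setdefault(pose, {"count": 0, "children": {}})
            node["count"] += 1
    return root

def print_tree(node, label="start", depth=0):
    """Print the aggregated tree with per-node sequence counts."""
    print("  " * depth + f"{label} ({node['count']})")
    for pose, child in node["children"].items():
        print_tree(child, pose, depth + 1)

# Three hypothetical gesture recordings quantized into pose labels.
sequences = [["rest", "raise", "wave", "rest"],
             ["rest", "raise", "wave", "wave", "rest"],
             ["rest", "raise", "point", "rest"]]
print_tree(build_pattern_tree(sequences))
```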

  2. Distinguishing bias from sensitivity effects in multialternative detection tasks.

    PubMed

    Sridharan, Devarajan; Steinmetz, Nicholas A; Moore, Tirin; Knudsen, Eric I

    2014-08-21

    Studies investigating the neural bases of cognitive phenomena increasingly employ multialternative detection tasks that seek to measure the ability to detect a target stimulus or changes in some target feature (e.g., orientation or direction of motion) that could occur at one of many locations. In such tasks, it is essential to distinguish the behavioral and neural correlates of enhanced perceptual sensitivity from those of increased bias for a particular location or choice (choice bias). However, making such a distinction is not possible with established approaches. We present a new signal detection model that decouples the behavioral effects of choice bias from those of perceptual sensitivity in multialternative (change) detection tasks. By formulating the perceptual decision in a multidimensional decision space, our model quantifies the respective contributions of bias and sensitivity to multialternative behavioral choices. With a combination of analytical and numerical approaches, we demonstrate an optimal, one-to-one mapping between model parameters and choice probabilities even for tasks involving arbitrarily large numbers of alternatives. We validated the model with published data from two ternary choice experiments: a target-detection experiment and a length-discrimination experiment. The results of this validation provided novel insights into perceptual processes (sensory noise and competitive interactions) that can accurately and parsimoniously account for observers' behavior in each task. The model will find important application in identifying and interpreting the effects of behavioral manipulations (e.g., cueing attention) or neural perturbations (e.g., stimulation or inactivation) in a variety of multialternative tasks of perception, attention, and decision-making. © 2014 ARVO.
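
    The flavor of such a multidimensional signal detection model can be conveyed with a Monte Carlo sketch (an illustrative simplification, not the authors' published equations): each location carries a noisy decision variable that receives a sensitivity boost d at the change location, and the observer reports the location whose variable most exceeds its criterion c, or "no change" if none does. Lowering one location's criterion shifts choices toward it without changing sensitivity.

```python
# Simplified multialternative (change) detection simulation; all parameters are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate_choice_probs(d, c, n_trials=200_000):
    """Rows: true change location (last row = no-change trials); columns: reported
    location (last column = 'no change' reports)."""
    m = len(d)
    probs = np.zeros((m + 1, m + 1))
    for true_loc in range(m + 1):                      # index m == no-change trials
        evidence = rng.standard_normal((n_trials, m))
        if true_loc < m:
            evidence[:, true_loc] += d[true_loc]       # sensitivity at the change location
        above = evidence - np.asarray(c)               # distance above each criterion
        best = np.argmax(above, axis=1)
        detected = above[np.arange(n_trials), best] > 0
        for resp in range(m):
            probs[true_loc, resp] = np.mean(detected & (best == resp))
        probs[true_loc, m] = np.mean(~detected)
    return probs

# Two locations, equal sensitivity, but a lower (more liberal) criterion at location 0:
print(simulate_choice_probs(d=[1.5, 1.5], c=[0.5, 1.0]).round(3))
```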

  4. SU33. Load-Sensitive Impairment of Working Memory for Biological Motion in Schizophrenia

    PubMed Central

    Lee, Hannah; Kim, Jejoong

    2017-01-01

    Background: Patients with schizophrenia (SZ) exhibit various deficits that may affect social functioning. The impairments in perceptual processing (eg, motion perception) and in higher cognitive processing such as working memory (WM), as well as deficits in social cognition, need to be closely examined in order to understand dysfunctions in social environments. However, comprehensive research that considers these aspects together is limited. Biological motion (BM) is a unique motion stimulus containing rich social information. BM is therefore well suited to our aim of examining, within a single experimental design, how dysfunctions across these 3 aspects (motion perception, WM, and social cognition) are manifested in SZ. Methods: In the present study, we used BM in a delayed-response task for measuring WM. A non-BM motion stimulus (pairwise-shuffled motion, or PSM) and polygons were also used for comparison. One of the 3 types of stimuli was presented in each trial. After 12-second delays, 2 probes were shown, and the participants were asked to indicate whether one of them was identical to the memory item or whether both were novel. The number of memory items was either 1 (low load) or 2 (high load). Results: Overall, SZ performed worse than healthy controls (CO) regardless of stimulus type and memory load, which is consistent with previous WM research. Across the low and high load conditions, CO were more accurate in recognizing BM than PSM, indicating that BM may have a facilitating effect on the encoding process involved in WM. Interestingly, SZ showed accuracy patterns similar to those of CO in the low load condition. The BM facilitation effect, however, disappeared in the high load condition, yielding a significant interaction among group, stimulus type, and memory load. These results suggest that BM, as a socially relevant stimulus, can facilitate encoding and/or maintenance to benefit WM performance in CO, and that this effect is also partially present in SZ. That is, SZ seem to successfully process the meaning of the stimulus when memory load is low. However, this ability is vulnerable to increases in cognitive load in SZ, implying inefficiencies in the connections between perceptual and cognitive processes and/or limits in the capacity to process social information. Conclusion: The present study suggests that the intricate interaction among perceptual, cognitive, and social processes needs to be considered to explain cognitive deficits related to social dysfunction in schizophrenia. This study was supported by the National Research Foundation of Korea.

  5. Overlapping parietal activity in memory and perception: evidence for the attention to memory model.

    PubMed

    Cabeza, Roberto; Mazuz, Yonatan S; Stokes, Jared; Kragel, James E; Woldorff, Marty G; Ciaramelli, Elisa; Olson, Ingrid R; Moscovitch, Morris

    2011-11-01

    The specific role of different parietal regions in episodic retrieval is a topic of intense debate. According to the Attention to Memory (AtoM) model, dorsal parietal cortex (DPC) mediates top-down attention processes guided by retrieval goals, whereas ventral parietal cortex (VPC) mediates bottom-up attention processes captured by the retrieval output or the retrieval cue. This model also hypothesizes that the attentional functions of DPC and VPC are similar for memory and perception. To investigate this last hypothesis, we scanned participants with event-related fMRI while they performed memory and perception tasks, each comprising an orienting phase (top-down attention) and a detection phase (bottom-up attention). The study yielded two main findings. First, consistent with the AtoM model, orienting-related activity for memory and perception overlapped in DPC, whereas detection-related activity for memory and perception overlapped in VPC. The DPC overlap was greater in the left intraparietal sulcus, and the VPC overlap in the left TPJ. Around overlapping areas, there were differences in the spatial distribution of memory and perception activations, which were consistent with trends reported in the literature. Second, both DPC and VPC showed stronger connectivity with the medial-temporal lobe during the memory task and with visual cortex during the perception task. These findings suggest that, during memory tasks, some parietal regions mediate attentional control processes similar to those involved in perception tasks (orienting in DPC vs. detection in VPC), although operating on different types of information (mnemonic vs. sensory).

  6. Students' Perceptions of Motivational Climate and Enjoyment in Finnish Physical Education: A Latent Profile Analysis.

    PubMed

    Jaakkola, Timo; Wang, C K John; Soini, Markus; Liukkonen, Jarmo

    2015-09-01

    The purpose of this study was to identify student clusters with homogeneous profiles in perceptions of task- and ego-involving, autonomy-supporting, and social-relatedness-supporting motivational climate in school physical education. Additionally, we investigated whether the different motivational climate groups differed in their enjoyment of PE. Participants were 2,594 girls and 1,803 boys aged 14-15 years. Students responded to questionnaires assessing their perception of motivational climate and enjoyment in physical education. Latent profile analysis produced a five-cluster solution labeled 1) 'low autonomy, relatedness, task, and moderate ego climate' group, 2) 'low autonomy, relatedness, and high task and ego climate' group, 3) 'moderate autonomy, relatedness, task and ego climate' group, 4) 'high autonomy, relatedness, task, and moderate ego climate' group, and 5) 'high relatedness and task but moderate autonomy and ego climate' group. Analyses of variance showed that students in clusters 4 and 5 perceived the highest level of enjoyment, whereas students in cluster 1 experienced the lowest level of enjoyment. The results showed that students' perceptions of the various motivational climates created differential levels of enjoyment in PE classes.
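
    Latent profile analysis of this kind is closely related to fitting a finite mixture of Gaussians to the climate-perception scores and then comparing an outcome (here, enjoyment) across the recovered profiles. The sketch below is only an analogy using scikit-learn on simulated data; it is not the authors' analysis, and all variable names and values are assumptions.

```python
# Gaussian-mixture analogue of a latent profile analysis on simulated climate scores.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# Simulated scores: columns = task, ego, autonomy, relatedness climate (1-5 scale).
scores = np.clip(rng.normal(loc=3.5, scale=0.8, size=(400, 4)), 1, 5)
enjoyment = np.clip(0.5 * scores[:, 0] + 0.3 * scores[:, 2] + rng.normal(1, 0.5, 400), 1, 5)

gmm = GaussianMixture(n_components=5, covariance_type="diag", random_state=0).fit(scores)
profile = gmm.predict(scores)

for k in range(5):
    members = profile == k
    print(f"profile {k}: n={members.sum():3d}, "
          f"climate means={scores[members].mean(axis=0).round(2)}, "
          f"mean enjoyment={enjoyment[members].mean():.2f}")
```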

  7. Effects of production training and perception training on lexical tone perception--A behavioral and ERP study.

    PubMed

    Lu, Shuang; Wayland, Ratree; Kaan, Edith

    2015-10-22

    The present study recorded both behavioral data and event-related brain potentials to examine the effectiveness of a perception-only and a perception-plus-production training procedure on the intentional and unintentional perception of lexical tone by native English listeners. In the behavioral task, both the perception-only and the perception-plus-production groups improved in tone discrimination after the training session. Moreover, participants in both groups generalized the improvements gained on the trained stimuli to untrained stimuli. In the ERP task, the Mismatch Negativity was smaller in the post-training task than in the pre-training task. However, the two training groups did not differ in tone processing at the intentional or unintentional level after training. These results suggest that engaging the motor system does not specifically benefit tone perception skills. Furthermore, the present study investigated whether some tone pairs are more easily confused than others by native English listeners, and whether the order of tone presentation influences non-native tone discrimination. In the behavioral task, Tone2-Tone1 (rising-level) and Tone2-Tone4 (rising-falling) were the most difficult tone pairs, while Tone1-Tone2 and Tone4-Tone2 were the easiest, even though each pair involved the same tone contrast. In the ERP task, the native English listeners discriminated well when Tone2 and Tone4 were embedded in strings of Tone1, but poorly when Tone1 was inserted in the context of Tone2 or Tone4. These asymmetries in tone perception might be attributed to interference from the native intonation system and can be altered by training. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Open and closed cortico-subcortical loops: A neuro-computational account of access to consciousness in the distractor-induced blindness paradigm.

    PubMed

    Ebner, Christian; Schroll, Henning; Winther, Gesche; Niedeggen, Michael; Hamker, Fred H

    2015-09-01

    How the brain decides which information to process 'consciously' has been debated for decades without a simple explanation emerging. While most experiments manipulate the perceptual energy of the presented stimuli, the distractor-induced blindness task is a prototypical paradigm for investigating the gating of information into consciousness with little or no visual manipulation. In this paradigm, subjects are asked to report intervals of coherent dot motion in a rapid serial visual presentation (RSVP) stream, whenever these are preceded by a particular color stimulus in a different RSVP stream. If distractors (i.e., intervals of coherent dot motion prior to the color stimulus) are shown, subjects' ability to perceive and report intervals of target dot motion decreases, particularly with short delays between the intervals of target color and target motion. We propose a biologically plausible neuro-computational model of how the brain controls access to consciousness to explain how distractor-induced blindness originates from information processing in the cortex and basal ganglia. The model suggests that conscious perception requires reverberation of activity in cortico-subcortical loops and that basal-ganglia pathways can either allow or inhibit this reverberation. In the distractor-induced blindness paradigm, inadequate distractor-induced response tendencies are suppressed by the inhibitory 'hyperdirect' pathway of the basal ganglia. If a target follows such a distractor closely, temporal aftereffects of distractor suppression prevent target identification. The model reproduces experimental data on how delays between target color and target motion affect the probability of target detection. Copyright © 2015 Elsevier Inc. All rights reserved.
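
    A heavily simplified, hypothetical rendering of that temporal-aftereffect idea (not the authors' network model; every constant below is an assumption) treats distractor-driven suppression as a quantity that decays exponentially and gates target detection, which reproduces the qualitative delay dependence described above.

```python
# Toy gating sketch: residual distractor suppression decays with delay, and detection
# requires it to fall below a gate threshold. Illustrative constants only.
import numpy as np

def detection_probability(delay_s, n_distractors=3, suppression_per_distractor=1.0,
                          decay_tau_s=0.4, gate_threshold=0.8, slope=8.0):
    """Probability that a target presented `delay_s` after the cue passes the gate."""
    residual = n_distractors * suppression_per_distractor * np.exp(-delay_s / decay_tau_s)
    # Soft gate: sigmoidal probability that reverberation is allowed.
    return 1.0 / (1.0 + np.exp(slope * (residual - gate_threshold)))

for delay in (0.1, 0.3, 0.6, 1.2):
    print(f"cue-target delay {delay:.1f} s -> detection p = {detection_probability(delay):.2f}")
```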

  9. An Extended Normalization Model of Attention Accounts for Feature-Based Attentional Enhancement of Both Response and Coherence Gain

    PubMed Central

    Krishna, B. Suresh; Treue, Stefan

    2016-01-01

    Paying attention to a sensory feature improves its perception and impairs that of others. Recent work has shown that a Normalization Model of Attention (NMoA) can account for a wide range of physiological findings and the influence of different attentional manipulations on visual performance. A key prediction of the NMoA is that attention to a visual feature like an orientation or a motion direction will increase the response of neurons preferring the attended feature (response gain) rather than increase the sensory input strength of the attended stimulus (input gain). This effect of feature-based attention on neuronal responses should translate to similar patterns of improvement in behavioral performance, with psychometric functions showing response gain rather than input gain when attention is directed to the task-relevant feature. In contrast, we report here that when human subjects are cued to attend to one of two motion directions in a transparent motion display, attentional effects manifest as a combination of input and response gain. Further, the impact on input gain is greater when attention is directed towards a narrow range of motion directions than when it is directed towards a broad range. These results are captured by an extended NMoA, which either includes a stimulus-independent attentional contribution to normalization or utilizes direction-tuned normalization. The proposed extensions are consistent with the feature-similarity gain model of attention and the attentional modulation in extrastriate area MT, where neuronal responses are enhanced and suppressed by attention to preferred and non-preferred motion directions respectively. PMID:27977679
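
    The response-gain versus input-gain distinction can be made concrete with a standard Naka-Rushton function of stimulus strength, R(c) = Rmax·c^n / (c^n + c50^n); this functional form and the parameter values below are illustrative assumptions, not the paper's fitted model. Response gain scales Rmax (the asymptote), whereas input gain effectively scales c, shifting the curve along the stimulus axis.

```python
# Schematic Naka-Rushton curves showing response gain vs input gain (assumed values).
import numpy as np

def naka_rushton(c, rmax=1.0, c50=0.3, n=2.0, input_gain=1.0, response_gain=1.0):
    """Response (arbitrary units) as a function of stimulus strength c in [0, 1]."""
    c_eff = input_gain * c
    return response_gain * rmax * c_eff**n / (c_eff**n + c50**n)

coherence = np.linspace(0.05, 1.0, 5)
print("baseline      :", naka_rushton(coherence).round(3))
print("response gain :", naka_rushton(coherence, response_gain=1.3).round(3))  # scales the asymptote
print("input gain    :", naka_rushton(coherence, input_gain=1.3).round(3))     # shifts the curve leftward
```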

  10. Effects of sport expertise on representational momentum during timing control.

    PubMed

    Nakamoto, Hiroki; Mori, Shiro; Ikudome, Sachi; Unenaka, Satoshi; Imanaka, Kuniyasu

    2015-04-01

    Sports involving fast visual perception require players to compensate for delays in neural processing of visual information. Memory for the final position of a moving object is distorted forward along its path of motion (i.e., "representational momentum," RM). This cognitive extrapolation of visual perception might compensate for the neural delay in interacting appropriately with a moving object. The present study examined whether experienced batters cognitively extrapolate the location of a fast-moving object and whether this extrapolation is associated with coincident timing control. Nine expert and nine novice baseball players performed a prediction motion task in which a target moved from one end of a straight 400-cm track at a constant velocity. In half of the trials, vision was suddenly occluded when the target reached the 200-cm point (occlusion condition). Participants had to press a button concurrently with the target arrival at the end of the track and verbally report their subjective assessment of the first target-occluded position. Experts showed larger RM magnitude (cognitive extrapolation) than did novices in the occlusion condition. RM magnitude and timing errors were strongly correlated in the fast velocity condition in both experts and novices, whereas in the slow velocity condition, a significant correlation appeared only in experts. This suggests that experts can cognitively extrapolate the location of a moving object according to their anticipation and, as a result, potentially circumvent neural processing delays. This process might be used to control response timing when interacting with moving objects.
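
    The two measures described above can be illustrated with a small worked example (all numbers hypothetical): timing error is the difference between the button press and the target's true arrival time, and RM magnitude is how far beyond the true occlusion point (200 cm) the reported "last seen" position lies.

```python
# Worked example of timing error and representational-momentum (RM) magnitude
# for the prediction-motion task; the speed and responses are made up.
track_length_cm = 400.0
occlusion_point_cm = 200.0
velocity_cm_s = 100.0                                         # assumed target speed

true_arrival_s = track_length_cm / velocity_cm_s              # 4.0 s after motion onset
button_press_s = 3.85                                         # hypothetical response
timing_error_s = button_press_s - true_arrival_s              # negative = early press

reported_occlusion_cm = 212.0                                 # hypothetical verbal report
rm_magnitude_cm = reported_occlusion_cm - occlusion_point_cm  # forward displacement

print(f"timing error: {timing_error_s:+.2f} s, RM magnitude: {rm_magnitude_cm:+.1f} cm")
```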

  11. The role of perceptual load in inattentional blindness.

    PubMed

    Cartwright-Finch, Ula; Lavie, Nilli

    2007-03-01

    Perceptual load theory offers a resolution to the long-standing early vs. late selection debate over whether task-irrelevant stimuli are perceived, suggesting that irrelevant perception depends upon the perceptual load of task-relevant processing. However, previous evidence for this theory has relied on RTs and neuroimaging. Here we tested the effects of load on conscious perception using the "inattentional blindness" paradigm. As predicted by load theory, awareness of a task-irrelevant stimulus was significantly reduced by higher perceptual load (with increased numbers of search items, or a harder discrimination vs. detection task). These results demonstrate that conscious perception of task-irrelevant stimuli critically depends upon the level of task-relevant perceptual load rather than intentions or expectations, thus enhancing the resolution to the early vs. late selection debate offered by the perceptual load theory.

  12. The Effects of Phonological Short-Term Memory and Speech Perception on Spoken Sentence Comprehension in Children: Simulating Deficits in an Experimental Design.

    PubMed

    Higgins, Meaghan C; Penney, Sarah B; Robertson, Erin K

    2017-10-01

    The roles of phonological short-term memory (pSTM) and speech perception in spoken sentence comprehension were examined in an experimental design. Deficits in pSTM and speech perception were simulated through task demands while typically-developing children (N = 71) completed a sentence-picture matching task. Children performed the control, simulated pSTM deficit, simulated speech perception deficit, or simulated double deficit condition. On long sentences, the double deficit group had lower scores than the control and speech perception deficit groups, and the pSTM deficit group had lower scores than the control group and marginally lower scores than the speech perception deficit group. The pSTM and speech perception groups performed similarly to groups with real deficits in these areas, who completed the control condition. Overall, scores were lowest on noncanonical long sentences. Results show pSTM has a greater effect than speech perception on sentence comprehension, at least in the tasks employed here.

  13. The effect of mild motion sickness and sopite syndrome on multitasking cognitive performance.

    PubMed

    Matsangas, Panagiotis; McCauley, Michael E; Becker, William

    2014-09-01

    In this study, we investigated the effects of mild motion sickness and sopite syndrome on multitasking cognitive performance. Despite existing knowledge of general motion sickness, little is known about the effect of motion sickness and sopite syndrome on multitasking cognitive performance. Specifically, there is a gap in existing knowledge in the gray area of mild motion sickness. Fifty-one healthy individuals performed a multitasking battery. Three independent groups of participants were exposed to two experimental sessions. Two groups received motion only in the first or the second session, whereas the control group did not receive motion. Measurements of motion sickness, sopite syndrome, alertness, and performance were collected during the experiment. Only in the second session did motion sickness and sopite syndrome have a significant negative association with cognitive performance. Significant performance differences between symptomatic and asymptomatic participants in the second session were identified in composite (9.43%), memory (31.7%), and arithmetic (14.7%) task scores. The results suggest that performance retention between sessions was not affected by mild motion sickness. Multitasking cognitive performance declined even when motion sickness and soporific symptoms were mild. The results also show an order effect. We postulate that the differential effect of session on the association between symptomatology and multitasking performance may be related to the attentional resources allocated to performing the multiple tasks. Results suggest an inverse relationship between motion sickness effects on performance and the cognitive effort focused on performing a task. Even mild motion sickness has potential implications for multitasking operational performance.

  14. Parent and Adolescent Perceptions of Adolescent Career Development Tasks and Vocational Identity

    ERIC Educational Resources Information Center

    Rogers, Mary E.; Creed, Peter A.; Praskova, Anna

    2018-01-01

    We surveyed Australian adolescents and parents to test differences and congruence in perceptions of adolescent career development tasks (career planning, exploration, certainty, and world-of-work knowledge) and vocational identity. We found that, for adolescents (N = 415), career development tasks (not career exploration) explained 48% of the…

  15. How Does Lesson Structure Shape Teacher Perceptions of Teaching with Challenging Tasks?

    ERIC Educational Resources Information Center

    Russo, James; Hopkins, Sarah

    2017-01-01

    Despite reforms in mathematics education, many teachers remain reluctant to incorporate challenging (i.e., more cognitively demanding) tasks into their mathematics instruction. The current study examines how lesson structure shapes teacher perceptions of teaching with challenging tasks. Participants included three Year 1/2 classroom teachers who…

  16. Limited transfer of long-term motion perceptual learning with double training.

    PubMed

    Liang, Ju; Zhou, Yifeng; Fahle, Manfred; Liu, Zili

    2015-01-01

    A significant recent development in visual perceptual learning research is the double training technique. With this technique, Xiao, Zhang, Wang, Klein, Levi, and Yu (2008) found complete transfer in tasks that had previously been shown to be stimulus specific. The significance of this finding is that the technique has since been successful in all tasks tested, including motion direction discrimination. Here, we investigated whether this technique could generalize to longer-term learning, using the method of constant stimuli. Our task was learning to discriminate motion directions of random dots. The second leg of training was contrast discrimination along a new average direction of the same moving dots. We found that, although exposure to moving dots along a new direction facilitated motion direction discrimination, this partial transfer was far from complete. We conclude that, although perceptual learning is transferable under certain conditions, stimulus specificity also remains an inherent characteristic of motion perceptual learning.
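
    As a reminder of what the method of constant stimuli involves here, the sketch below fits a cumulative-Gaussian psychometric function to the proportion of "clockwise" responses at a fixed set of direction offsets and reads off a discrimination threshold. The response counts and parameter names are made up for illustration; this is not the authors' analysis code.

```python
# Method of constant stimuli for motion-direction discrimination, with a
# cumulative-Gaussian psychometric fit. Hypothetical data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

offsets_deg = np.array([-8, -4, -2, 0, 2, 4, 8], dtype=float)   # fixed stimulus set
n_trials = 40                                                   # presentations per offset
n_clockwise = np.array([2, 8, 14, 21, 27, 33, 38])              # hypothetical response counts

def psychometric(x, mu, sigma):
    """Cumulative Gaussian: probability of responding 'clockwise' at offset x."""
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, offsets_deg, n_clockwise / n_trials, p0=[0.0, 3.0])
threshold_84 = mu + sigma      # offset giving ~84% 'clockwise': a one-sigma threshold
print(f"bias mu = {mu:.2f} deg, sigma = {sigma:.2f} deg, 84% point = {threshold_84:.2f} deg")
```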

  17. Effects of prolonged weightlessness on self-motion perception and eye movements evoked by roll and pitch

    NASA Technical Reports Server (NTRS)

    Reschke, Millard F.; Parker, Donald E.

    1987-01-01

    Seven astronauts reported translational self-motion during roll simulation 1-3 h after landing following 5-7 d of orbital flight. Two reported strong translational self-motion perception when they performed pitch head motions during entry and while the orbiter was stationary on the runway. One of two astronauts from whom adequate data were collected exhibited a 132-deg shift in the phase angle between roll stimulation and horizontal eye position 2 h after landing. Neither of two from whom adequate data were collected exhibited increased horizontal eye movement amplitude or disturbance of voluntary pitch or roll body motion immediately postflight. These results are generally consistent with an otolith tilt-translation reinterpretation model and are being applied to the development of apparatus and procedures intended to preadapt astronauts to the sensory rearrangement of weightlessness.

  18. Auditorily-induced illusory self-motion: a review.

    PubMed

    Väljamäe, Aleksander

    2009-10-01

    The aim of this paper is to provide a first review of studies related to auditorily-induced self-motion (vection). These studies have been scarce and scattered over the years and across several research communities, including clinical audiology, multisensory perception of self-motion and its neural correlates, ergonomics, and virtual reality. The reviewed studies provide evidence that auditorily-induced vection has behavioral, physiological, and neural correlates. Although the contribution of sound to self-motion perception appears to be weaker than that of the visual modality, specific acoustic cues appear to be instrumental in a number of domains, including posture prosthesis, navigation in unusual gravitoinertial environments (in the air, in space, or underwater), non-visual navigation, and multisensory integration during self-motion. A number of open research questions are highlighted, opening avenues for more active and systematic studies in this area.

  19. Measuring pilot workload in a motion base simulator. III - Synchronous secondary task

    NASA Technical Reports Server (NTRS)

    Kantowitz, Barry H.; Bortolussi, Michael R.; Hart, Sandra G.

    1987-01-01

    This experiment continues earlier research of Kantowitz et al. (1983) conducted in a GAT-1 motion-base trainer to evaluate choice-reaction secondary tasks as measures of pilot work load. The earlier work used an asynchronous secondary task presented every 22 sec regardless of flying performance. The present experiment uses a synchronous task presented only when a critical event occurred on the flying task. Both two- and four-choice visual secondary tasks were investigated. Analysis of primary flying-task results showed no decrement in error for altitude, indicating that the key assumption necessary for using a choice secondary task was satisfied. Reaction times showed significant differences between 'easy' and 'hard' flight scenarios as well as the ability to discriminate among flight tasks.

  20. Perception of biological motion from size-invariant body representations.

    PubMed

    Lappe, Markus; Wittinghofer, Karin; de Lussanet, Marc H E

    2015-01-01

    The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion.
