Impairments in Tactile Search Following Superior Parietal Damage
ERIC Educational Resources Information Center
Skakoon-Sparling, Shayna P.; Vasquez, Brandon P.; Hano, Kate; Danckert, James
2011-01-01
The superior parietal cortex is critical for the control of visually guided actions. Research suggests that visual stimuli relevant to actions are preferentially processed when they are in peripersonal space. One recent study demonstrated that visually guided movements towards the body were more impaired in a patient with damage to superior…
Memory-guided saccade processing in visual form agnosia (patient DF).
Rossit, Stéphanie; Szymanek, Larissa; Butler, Stephen H; Harvey, Monika
2010-01-01
According to Milner and Goodale's model (The visual brain in action, Oxford University Press, Oxford, 2006) areas in the ventral visual stream mediate visual perception and off-line actions, whilst regions in the dorsal visual stream mediate the on-line visual control of action. Strong evidence for this model comes from a patient (DF), who suffers from visual form agnosia after bilateral damage to the ventro-lateral occipital region, sparing V1. It has been reported that she is normal in immediate reaching and grasping, yet severely impaired when asked to perform delayed actions. Here we investigated whether this dissociation would extend to saccade execution. Neurophysiological studies and TMS work in humans have shown that the posterior parietal cortex (PPC), on the right in particular (supposedly spared in DF), is involved in the control of memory-guided saccades. Surprisingly though, we found that, just as reported for reaching and grasping, DF's saccadic accuracy was much reduced in the memory compared to the stimulus-guided condition. These data support the idea of a tight coupling of eye and hand movements and further suggest that dorsal stream structures may not be sufficient to drive memory-guided saccadic performance.
ERIC Educational Resources Information Center
Jax, Steven A.; Rosenbaum, David A.
2007-01-01
According to a prominent theory of human perception and performance (M. A. Goodale & A. D. Milner, 1992), the dorsal, action-related stream only controls visually guided actions in real time. Such a system would be predicted to show little or no action priming from previous experience. The 3 experiments reported here were designed to determine…
Visual-motor recalibration in geographical slant perception
NASA Technical Reports Server (NTRS)
Bhalla, M.; Proffitt, D. R.; Kaiser, M. K. (Principal Investigator)
1999-01-01
In 4 experiments, it was shown that hills appear steeper to people who are encumbered by wearing a heavy backpack (Experiment 1), are fatigued (Experiment 2), are of low physical fitness (Experiment 3), or are elderly and/or in declining health (Experiment 4). Visually guided actions are unaffected by these manipulations of physiological potential. Although dissociable, the awareness and action systems were also shown to be interconnected. Recalibration of the transformation relating awareness and actions was found to occur over long-term changes in physiological potential (fitness level, age, and health) but not with transitory changes (fatigue and load). Findings are discussed in terms of a time-dependent coordination between the separate systems that control explicit visual awareness and visually guided action.
A new neural framework for visuospatial processing.
Kravitz, Dwight J; Saleem, Kadharbatcha S; Baker, Chris I; Mishkin, Mortimer
2011-04-01
The division of cortical visual processing into distinct dorsal and ventral streams is a key framework that has guided visual neuroscience. The characterization of the ventral stream as a 'What' pathway is relatively uncontroversial, but the nature of dorsal stream processing is less clear. Originally proposed as mediating spatial perception ('Where'), more recent accounts suggest it primarily serves non-conscious visually guided action ('How'). Here, we identify three pathways emerging from the dorsal stream that consist of projections to the prefrontal and premotor cortices, and a major projection to the medial temporal lobe that courses both directly and indirectly through the posterior cingulate and retrosplenial cortices. These three pathways support both conscious and non-conscious visuospatial processing, including spatial working memory, visually guided action and navigation, respectively.
ERIC Educational Resources Information Center
Wiediger, Matthew D.; Fournier, Lisa R.
2008-01-01
Withholding an action plan in memory for later execution can delay execution of another action, if the actions share a similar (compatible) action feature (i.e., response hand). This phenomenon, termed compatibility interference (CI), was found for identity-based actions that do not require visual guidance. The authors examined whether CI can…
Skating down a steeper slope: Fear influences the perception of geographical slant
Stefanucci, Jeanine K.; Proffitt, Dennis R.; Clore, Gerald L.; Parekh, Nazish
2008-01-01
Conscious awareness of hill slant is overestimated, but visually guided actions directed at hills are relatively accurate. Also, steep hills are consciously estimated to be steeper from the top as opposed to the bottom, possibly because they are dangerous to walk down. In the present study, participants stood at the top of a hill on either a skateboard or a wooden box of the same height. They gave three estimates of the slant of the hill: a verbal report, a visually matched estimate, and a visually guided action. Fear of descending the hill was also assessed. Those participants that were scared (by standing on the skateboard) consciously judged the hill to be steeper relative to participants who were unafraid. However, the visually guided action measure was accurate across conditions. These results suggest that our explicit awareness of slant is influenced by the fear associated with a potentially dangerous action. “[The phobic] reported that as he drove towards bridges, they appeared to be sloping at a dangerous angle.” (Rachman and Cuk 1992 p. 583). PMID:18414594
Rise and fall of the two visual systems theory.
Rossetti, Yves; Pisella, Laure; McIntosh, Robert D
2017-06-01
Among the many dissociations describing the visual system, the dual theory of two visual systems, dedicated respectively to perception and action, has attracted considerable support. There are psychophysical, anatomical and neuropsychological arguments in favor of this theory. Several behavioral studies that used sensory and motor psychophysical parameters observed differences between perceptual and motor responses. The anatomical network of the visual system in the non-human primate was readily organized according to two major pathways, dorsal and ventral. Neuropsychological studies, exploring optic ataxia and visual agnosia as characteristic deficits of these two pathways, led to the proposal of a functional double dissociation between visuomotor and visual perceptual functions. After a major wave of popularity that promoted great advances, particularly in knowledge of visuomotor functions, the guiding theory is now being reconsidered. Firstly, the idea of a double dissociation between optic ataxia and visual form agnosia, as cleanly separating visuomotor from visual perceptual functions, is no longer tenable; optic ataxia does not support a dissociation between perception and action and might be more accurately viewed as a negative image of action blindsight. Secondly, the dissociations between perceptual and motor responses highlighted in the framework of this theory concern a very elementary level of action, even automatically guided action routines. Thirdly, the very rich interconnected network of the visual brain yields few arguments in favor of a strict perception/action dissociation. Overall, the dissociation between motor and perceptual function explored by these behavioral and neuropsychological studies can help define an automatic level of action organization that is deficient in optic ataxia and preserved in action blindsight, and it underlines the renewed need to consider the perception-action circle as a functional ensemble. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
How (and why) the visual control of action differs from visual perception
Goodale, Melvyn A.
2014-01-01
Vision not only provides us with detailed knowledge of the world beyond our bodies, but it also guides our actions with respect to objects and events in that world. The computations required for vision-for-perception are quite different from those required for vision-for-action. The former uses relational metrics and scene-based frames of reference while the latter uses absolute metrics and effector-based frames of reference. These competing demands on vision have shaped the organization of the visual pathways in the primate brain, particularly within the visual areas of the cerebral cortex. The ventral ‘perceptual’ stream, projecting from early visual areas to inferior temporal cortex, helps to construct the rich and detailed visual representations of the world that allow us to identify objects and events, attach meaning and significance to them and establish their causal relations. By contrast, the dorsal ‘action’ stream, projecting from early visual areas to the posterior parietal cortex, plays a critical role in the real-time control of action, transforming information about the location and disposition of goal objects into the coordinate frames of the effectors being used to perform the action. The idea of two visual systems in a single brain might seem initially counterintuitive. Our visual experience of the world is so compelling that it is hard to believe that some other quite independent visual signal—one that we are unaware of—is guiding our movements. But evidence from a broad range of studies from neuropsychology to neuroimaging has shown that the visual signals that give us our experience of objects and events in the world are not the same ones that control our actions. PMID:24789899
Monaco, Simona; Gallivan, Jason P; Figley, Teresa D; Singhal, Anthony; Culham, Jody C
2017-11-29
The role of the early visual cortex and higher-order occipitotemporal cortex has been studied extensively for visual recognition and to a lesser degree for haptic recognition and visually guided actions. Using a slow event-related fMRI experiment, we investigated whether tactile and visual exploration of objects recruit the same "visual" areas (and in the case of visual cortex, the same retinotopic zones) and if these areas show reactivation during delayed actions in the dark toward haptically explored objects (and if so, whether this reactivation might be due to imagery). We examined activation during visual or haptic exploration of objects and action execution (grasping or reaching) separated by an 18 s delay. Twenty-nine human volunteers (13 females) participated in this study. Participants had their eyes open and fixated on a point in the dark. The objects were placed below the fixation point and accordingly visual exploration activated the cuneus, which processes retinotopic locations in the lower visual field. Strikingly, the occipital pole (OP), representing foveal locations, showed higher activation for tactile than visual exploration, although the stimulus was unseen and location in the visual field was peripheral. Moreover, the lateral occipital tactile-visual area (LOtv) showed comparable activation for tactile and visual exploration. Psychophysiological interaction analysis indicated that the OP showed stronger functional connectivity with anterior intraparietal sulcus and LOtv during the haptic than visual exploration of shapes in the dark. After the delay, the cuneus, OP, and LOtv showed reactivation that was independent of the sensory modality used to explore the object. These results show that haptic actions not only activate "visual" areas during object touch, but also that this information appears to be used in guiding grasping actions toward targets after a delay. SIGNIFICANCE STATEMENT Visual presentation of an object activates shape-processing areas and retinotopic locations in early visual areas. Moreover, if the object is grasped in the dark after a delay, these areas show "reactivation." Here, we show that these areas are also activated and reactivated for haptic object exploration and haptically guided grasping. Touch-related activity occurs not only in the retinotopic location of the visual stimulus, but also at the occipital pole (OP), corresponding to the foveal representation, even though the stimulus was unseen and located peripherally. That is, the same "visual" regions are implicated in both visual and haptic exploration; however, touch also recruits high-acuity central representation within early visual areas during both haptic exploration of objects and subsequent actions toward them. Functional connectivity analysis shows that the OP is more strongly connected with ventral and dorsal stream areas when participants explore an object in the dark than when they view it. Copyright © 2017 the authors 0270-6474/17/3711572-20$15.00/0.
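A psychophysiological interaction (PPI) analysis of the kind mentioned above amounts to testing whether seed-target coupling changes with task context. The sketch below is a minimal, generic illustration on synthetic data; the region names, regressors, and toy numbers are assumptions for illustration and not the study's actual pipeline.

```python
# Minimal PPI sketch: does connectivity between a seed region (e.g., OP) and a
# target region (e.g., aIPS) differ between haptic and visual exploration?
# Toy data and names are illustrative; the published analysis used standard
# fMRI PPI tooling, not this exact code.
import numpy as np

rng = np.random.default_rng(0)
n_vols = 200

seed = rng.standard_normal(n_vols)            # seed-region time course (physiological)
task = np.repeat([1.0, -1.0], n_vols // 2)    # haptic (+1) vs visual (-1) blocks (psychological)
ppi = (task - task.mean()) * seed             # interaction regressor

# Simulate a target region whose coupling with the seed is stronger during haptic blocks
target = 0.5 * seed + 0.4 * ppi + rng.standard_normal(n_vols)

# OLS fit of the target time course on [intercept, seed, task, interaction]
X = np.column_stack([np.ones(n_vols), seed, task, ppi])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
print(f"PPI (interaction) beta: {beta[3]:.2f}")  # > 0 implies stronger coupling in haptic blocks
```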
Forum Guide to Data Visualization: A Resource for Education Agencies. NFES 2017-016
ERIC Educational Resources Information Center
National Forum on Education Statistics, 2016
2016-01-01
The purpose of this document is to recommend data visualization practices that will help education agencies communicate data meaning in visual formats that are accessible, accurate, and actionable for a wide range of education stakeholders. Although this resource is designed for staff in education agencies, many of the visualization principles…
Stimulation of the substantia nigra influences the specification of memory-guided saccades
Mahamed, Safraaz; Garrison, Tiffany J.; Shires, Joel
2013-01-01
In the absence of sensory information, we rely on past experience or memories to guide our actions. Because previous experimental and clinical reports implicate basal ganglia nuclei in the generation of movement in the absence of sensory stimuli, we ask here whether one output nucleus of the basal ganglia, the substantia nigra pars reticulata (nigra), influences the specification of an eye movement in the absence of sensory information to guide the movement. We manipulated the level of activity of neurons in the nigra by introducing electrical stimulation to the nigra at different time intervals while monkeys made saccades to different locations in two conditions: one in which the target location remained visible and a second in which the target location appeared only briefly, requiring information stored in memory to specify the movement. Electrical manipulation of the nigra occurring during the delay period of the task, when information about the target was maintained in memory, altered the direction and the occurrence of subsequent saccades. Stimulation during other intervals of the memory task or during the delay period of the visually guided saccade task had less effect on eye movements. On stimulated trials, and only when the visual stimulus was absent, monkeys occasionally (∼20% of the time) failed to make saccades. When monkeys made saccades in the absence of a visual stimulus, stimulation of the nigra resulted in a rotation of the endpoints ipsilaterally (∼2°) and increased the reaction time of contralaterally directed saccades. When the visual stimulus was present, stimulation of the nigra resulted in no significant rotation and decreased the reaction time of contralaterally directed saccades slightly. Based on these measurements, stimulation during the delay period of the memory-guided saccade task influenced the metrics of saccades much more than did stimulation during the same period of the visually guided saccade task. Because these effects occurred with manipulation of nigral activity well before the initiation of saccades and in trials in which the visual stimulus was absent, we conclude that information from the basal ganglia influences the specification of an action as it is evolving primarily during performance of memory-guided saccades. When visual information is available to guide the specification of the saccade, as occurs during visually guided saccades, basal ganglia information is less influential. PMID:24259551
Structural and functional changes across the visual cortex of a patient with visual form agnosia.
Bridge, Holly; Thomas, Owen M; Minini, Loredana; Cavina-Pratesi, Cristiana; Milner, A David; Parker, Andrew J
2013-07-31
Loss of shape recognition in visual-form agnosia occurs without equivalent losses in the use of vision to guide actions, providing support for the hypothesis of two visual systems (for "perception" and "action"). The human individual DF received a toxic exposure to carbon monoxide some years ago, which resulted in a persisting visual-form agnosia that has been extensively characterized at the behavioral level. We conducted a detailed high-resolution MRI study of DF's cortex, combining structural and functional measurements. We present the first accurate quantification of the changes in thickness across DF's occipital cortex, finding the most substantial loss in the lateral occipital cortex (LOC). There are reduced white matter connections between LOC and other areas. Functional measures show pockets of activity that survive within structurally damaged areas. The topographic mapping of visual areas showed that ordered retinotopic maps were evident for DF in the ventral portions of visual cortical areas V1, V2, V3, and hV4. Although V1 shows evidence of topographic order in its dorsal portion, such maps could not be found in the dorsal parts of V2 and V3. We conclude that it is not possible to understand fully the deficits in object perception in visual-form agnosia without the exploitation of both structural and functional measurements. Our results also highlight for DF the cortical routes through which visual information is able to pass to support her well-documented abilities to use visual information to guide actions.
Goal-directed action is automatically biased towards looming motion
Moher, Jeff; Sit, Jonathan; Song, Joo-Hyun
2014-01-01
It is known that looming motion can capture attention regardless of an observer’s intentions. Real-world behavior, however, frequently involves not just attentional selection, but selection for action. Thus, it is important to understand the impact of looming motion on goal-directed action to gain a broader perspective on how stimulus properties bias human behavior. We presented participants with a visually-guided reaching task in which they pointed to a target letter presented among non-target distractors. On some trials, one of the pre-masks at the location of the upcoming search objects grew rapidly in size, creating the appearance of a “looming” target or distractor. Even though looming motion did not predict the target location, the time required to reach to the target was shorter when the target loomed compared to when a distractor loomed. Furthermore, reach movement trajectories were pulled towards the location of a looming distractor when one was present, a pull that was greater still when the looming motion was on a collision path with the participant. We also contrast reaching data with data from a similarly designed visual search task requiring keypress responses. This comparison underscores the sensitivity of visually-guided reaching data, as some experimental manipulations, such as looming motion path, affected reach trajectories but not keypress measures. Together, the results demonstrate that looming motion biases visually-guided action regardless of an observer’s current behavioral goals, affecting not only the time required to reach to targets but also the path of the observer’s hand movement itself. PMID:25159287
NASA Technical Reports Server (NTRS)
Krauzlis, R. J.; Stone, L. S.
1999-01-01
The two components of voluntary tracking eye-movements in primates, pursuit and saccades, are generally viewed as relatively independent oculomotor subsystems that move the eyes in different ways using independent visual information. Although saccades have long been known to be guided by visual processes related to perception and cognition, only recently have psychophysical and physiological studies provided compelling evidence that pursuit is also guided by such higher-order visual processes, rather than by the raw retinal stimulus. Pursuit and saccades also do not appear to be entirely independent anatomical systems, but involve overlapping neural mechanisms that might be important for coordinating these two types of eye movement during the tracking of a selected visual object. Given that the recovery of objects from real-world images is inherently ambiguous, guiding both pursuit and saccades with perception could represent an explicit strategy for ensuring that these two motor actions are driven by a single visual interpretation.
An Exploratory Study of Interactivity in Visualization Tools: "Flow" of Interaction
ERIC Educational Resources Information Center
Liang, Hai-Ning; Parsons, Paul C.; Wu, Hsien-Chi; Sedig, Kamran
2010-01-01
This paper deals with the design of interactivity in visualization tools. There are several factors that can be used to guide the analysis and design of the interactivity of these tools. One such factor is flow, which is concerned with the duration of interaction with visual representations of information--interaction being the actions performed…
Action Control: Independent Effects of Memory and Monocular Viewing on Reaching Accuracy
ERIC Educational Resources Information Center
Westwood, D.A.; Robertson, C.; Heath, M.
2005-01-01
Evidence suggests that perceptual networks in the ventral visual pathway are necessary for action control when targets are viewed with only one eye, or when the target must be stored in memory. We tested whether memory-linked (i.e., open-loop versus memory-guided actions) and monocular-linked effects (i.e., binocular versus monocular actions) on…
Moving Stimuli Facilitate Synchronization But Not Temporal Perception
Silva, Susana; Castro, São Luís
2016-01-01
Recent studies have shown that a moving visual stimulus (e.g., a bouncing ball) facilitates synchronization compared to a static stimulus (e.g., a flashing light), and that it can even be as effective as an auditory beep. We asked a group of participants to perform different tasks with four stimulus types: beeps, siren-like sounds, visual flashes (static) and bouncing balls. First, participants performed synchronization with isochronous sequences (stimulus-guided synchronization), followed by a continuation phase in which the stimulus was internally generated (imagery-guided synchronization). Then they performed a perception task, in which they judged whether the final part of a temporal sequence was compatible with the previous beat structure (stimulus-guided perception). Similar to synchronization, an imagery-guided variant was added, in which sequences contained a gap in between (imagery-guided perception). Balls outperformed flashes and matched beeps (powerful ball effect) in stimulus-guided synchronization but not in perception (stimulus- or imagery-guided). In imagery-guided synchronization, performance accuracy decreased for beeps and balls, but not for flashes and sirens. Our findings suggest that the advantages of moving visual stimuli over static ones are grounded in action rather than perception, and they support the hypothesis that the sensorimotor coupling mechanisms for auditory (beeps) and moving visual stimuli (bouncing balls) overlap. PMID:27909419
Transient visual pathway critical for normal development of primate grasping behavior.
Mundinano, Inaki-Carril; Fox, Dylan M; Kwan, William C; Vidaurre, Diego; Teo, Leon; Homman-Ludiye, Jihane; Goodale, Melvyn A; Leopold, David A; Bourne, James A
2018-02-06
An evolutionary hallmark of anthropoid primates, including humans, is the use of vision to guide precise manual movements. These behaviors are reliant on a specialized visual input to the posterior parietal cortex. Here, we show that normal primate reaching-and-grasping behavior depends critically on a visual pathway through the thalamic pulvinar, which is thought to relay information to the middle temporal (MT) area during early life and then swiftly withdraws. Small MRI-guided lesions to a subdivision of the inferior pulvinar subnucleus (PIm) in the infant marmoset monkey led to permanent deficits in reaching-and-grasping behavior in the adult. This functional loss coincided with the abnormal anatomical development of multiple cortical areas responsible for the guidance of actions. Our study reveals that the transient retino-pulvinar-MT pathway underpins the development of visually guided manual behaviors in primates that are crucial for interacting with complex features in the environment.
Jolij, Jacob; Scholte, H Steven; van Gaal, Simon; Hodgson, Timothy L; Lamme, Victor A F
2011-12-01
Humans largely guide their behavior by their visual representation of the world. Recent studies have shown that visual information can trigger behavior within 150 msec, suggesting that visually guided responses to external events, in fact, precede conscious awareness of those events. However, is such a view correct? By using a texture discrimination task, we show that the brain relies on long-latency visual processing in order to guide perceptual decisions. Decreasing stimulus saliency leads to selective changes in long-latency visually evoked potential components reflecting scene segmentation. These latency changes are accompanied by almost equal changes in simple RTs and points of subjective simultaneity. Furthermore, we find a strong correlation between individual RTs and the latencies of scene segmentation related components in the visually evoked potentials, showing that the processes underlying these late brain potentials are critical in triggering a response. However, using the same texture stimuli in an antisaccade task, we found that reflexive, but erroneous, prosaccades, but not antisaccades, can be triggered by earlier visual processes. In other words: The brain can act quickly, but decides late. Differences between our study and earlier findings suggesting that action precedes conscious awareness can be explained by assuming that task demands determine whether a fast and unconscious, or a slower and conscious, representation is used to initiate a visually guided response.
Tcheang, Lili; Bülthoff, Heinrich H.; Burgess, Neil
2011-01-01
Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map. PMID:21199934
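The abstract does not reproduce the quantitative model itself; a standard candidate for combining cues "into a single multimodal representation" is reliability-weighted (maximum-likelihood) integration. The sketch below illustrates that generic form under an assumption of Gaussian cue noise; it is not the authors' fitted model.

```python
# Reliability-weighted cue combination: a common quantitative form for merging
# visual and interoceptive (path-integration) estimates of heading into one
# multimodal estimate. Generic sketch, not the model fitted in the paper.
def combine(visual_deg, sigma_vis, intero_deg, sigma_int):
    """Return the maximum-likelihood combined heading and its standard deviation."""
    w_vis = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_int**2)
    combined = w_vis * visual_deg + (1 - w_vis) * intero_deg
    sigma = (1 / (1 / sigma_vis**2 + 1 / sigma_int**2)) ** 0.5
    return combined, sigma

# Example: a rotation gain makes vision report 100 deg while interoception reports 80 deg
est, sd = combine(visual_deg=100.0, sigma_vis=5.0, intero_deg=80.0, sigma_int=10.0)
print(f"combined heading: {est:.1f} deg (sd {sd:.1f})")  # pulled toward the more reliable cue
```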
Prentiss, Emily K; Schneider, Colleen L; Williams, Zoë R; Sahin, Bogachan; Mahon, Bradford Z
2018-03-15
The division of labour between the dorsal and ventral visual pathways is well established. The ventral stream supports object identification, while the dorsal stream supports online processing of visual information in the service of visually guided actions. Here, we report a case of an individual with a right inferior quadrantanopia who exhibited accurate spontaneous rotation of his wrist when grasping a target object in his blind visual field. His accurate wrist orientation was observed despite the fact that he exhibited no sensitivity to the orientation of the handle in a perceptual matching task. These findings indicate that non-geniculostriate visual pathways process basic volumetric information relevant to grasping, and reinforce the observation that phenomenal awareness is not necessary for an object's volumetric properties to influence visuomotor performance.
Real-space and real-time dynamics of CRISPR-Cas9 visualized by high-speed atomic force microscopy.
Shibata, Mikihiro; Nishimasu, Hiroshi; Kodera, Noriyuki; Hirano, Seiichi; Ando, Toshio; Uchihashi, Takayuki; Nureki, Osamu
2017-11-10
The CRISPR-associated endonuclease Cas9 binds to a guide RNA and cleaves double-stranded DNA with a sequence complementary to the RNA guide. The Cas9-RNA system has been harnessed for numerous applications, such as genome editing. Here we use high-speed atomic force microscopy (HS-AFM) to visualize the real-space and real-time dynamics of CRISPR-Cas9 in action. HS-AFM movies indicate that, whereas apo-Cas9 adopts unexpected flexible conformations, Cas9-RNA forms a stable bilobed structure and interrogates target sites on the DNA by three-dimensional diffusion. These movies also provide real-time visualization of the Cas9-mediated DNA cleavage process. Notably, the Cas9 HNH nuclease domain fluctuates upon DNA binding, and subsequently adopts an active conformation, where the HNH active site is docked at the cleavage site in the target DNA. Collectively, our HS-AFM data extend our understanding of the action mechanism of CRISPR-Cas9.
Is Visually Guided Reaching in Early Infancy a Myth?
ERIC Educational Resources Information Center
Clifton, Rachel K.; And Others
1993-01-01
Seven infants were tested between the ages of 6 and 25 weeks to see how they would grasp objects presented in full light and glowing or sounding objects presented in total darkness. In all three conditions, the infants first grasped the objects at nearly the same time, suggesting that internal stimuli, not visual guidance, directed their actions.…
Wang, Quanxin; Sporns, Olaf; Burkhalter, Andreas
2012-01-01
Much of the information used for visual perception and visually guided actions is processed in complex networks of connections within the cortex. To understand how this works in the normal brain and to determine the impact of disease, mice are promising models. In primate visual cortex, information is processed in a dorsal stream specialized for visuospatial processing and guided action and a ventral stream for object recognition. Here, we traced the outputs of 10 visual areas and used quantitative graph analytic tools of modern network science to determine, from the projection strengths in 39 cortical targets, the community structure of the network. We found a high density of the cortical graph that exceeded that previously shown in monkey. Each source area showed a unique distribution of projection weights across its targets (i.e. connectivity profile) that was well-fit by a lognormal function. Importantly, the community structure was strongly dependent on the location of the source area: outputs from medial/anterior extrastriate areas were more strongly linked to parietal, motor and limbic cortex, whereas lateral extrastriate areas were preferentially connected to temporal and parahippocampal cortex. These two subnetworks resemble dorsal and ventral cortical streams in primates, demonstrating that the basic layout of cortical networks is conserved across species. PMID:22457489
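As a rough illustration of the two analysis steps described above (community detection on a weighted cortical graph and a lognormal fit to one area's projection weights), here is a minimal sketch on synthetic data. The graph, weights, and module labels are invented for illustration; this is not the mouse connectivity matrix or the authors' graph-analytic pipeline.

```python
# Toy illustration: (1) detect community structure in a weighted cortical graph and
# (2) fit a lognormal to one source area's outgoing projection weights.
import networkx as nx
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Small weighted graph with two planted modules (stand-ins for "dorsal" and "ventral")
areas = [f"area{i}" for i in range(10)]
G = nx.Graph()
for i, a in enumerate(areas):
    for j, b in enumerate(areas):
        if i < j:
            same_module = (i < 5) == (j < 5)
            w = rng.lognormal(mean=0.0 if same_module else -2.0, sigma=1.0)
            G.add_edge(a, b, weight=w)

communities = nx.algorithms.community.greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in communities])

# Lognormal fit to the projection weights of one source area
weights = np.array([G[areas[0]][t]["weight"] for t in areas[1:]])
shape, loc, scale = stats.lognorm.fit(weights, floc=0)
print(f"lognormal sigma = {shape:.2f}, median weight = {scale:.2f}")
```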
Guiding attention aids the acquisition of anticipatory skill in novice soccer goalkeepers.
Ryu, Donghyun; Kim, Seonjin; Abernethy, Bruce; Mann, David L
2013-06-01
The ability to anticipate the actions of opponents can be enhanced through perceptual-skill training, though there is doubt regarding the most effective form of doing so. We sought to evaluate whether perceptual-skill learning would be enhanced when supplemented with guiding visual information. Twenty-eight participants without soccer-playing experience were assigned to a guided perceptual-training group (n = 9), an unguided perceptual-training group (n = 10), or a control group (n = 9). The guided perceptual-training group received half of their trials with color cueing that highlighted either the key kinematic changes in the kicker's action or the known visual search strategy of expert goalkeepers. The unguided perceptual-training group undertook an equal number of trials of practice, but all trials were without guidance. The control group undertook no training intervention. All participants completed an anticipation test immediately before and after the 7-day training intervention, as well as a 24-hr retention test. The guided perceptual-training group significantly improved their response accuracy for anticipating the direction of soccer penalty kicks from preintervention to postintervention, whereas no change in performance was evident at posttest for either the unguided perceptual-training group or the control group. The superior performance of the guided perceptual-training group was preserved in the retention test and was confirmed when relative changes in response time were controlled using a covariate analysis. Perceptual training supplemented with guiding information provides a level of improvement in perceptual anticipatory skill that is not seen without guidance.
Action Planning Mediates Guidance of Visual Attention from Working Memory.
Feldmann-Wüstefeld, Tobias; Schubö, Anna
2015-01-01
Visual search is impaired when a salient task-irrelevant stimulus is presented together with the target. Recent research has shown that this attentional capture effect is enhanced when the salient stimulus matches working memory (WM) content, arguing in favor of attention guidance from WM. Visual attention was also shown to be closely coupled with action planning. Preparing a movement renders action-relevant perceptual dimensions more salient and thus increases search efficiency for stimuli sharing that dimension. The present study aimed at revealing common underlying mechanisms for selective attention, WM, and action planning. Participants both prepared a specific movement (grasping or pointing) and memorized a color hue. Before the movement was executed towards an object of the memorized color, a visual search task (additional singleton) was performed. Results showed that distraction from target was more pronounced when the additional singleton had a memorized color. This WM-guided attention deployment was more pronounced when participants prepared a grasping movement. We argue that preparing a grasping movement mediates attention guidance from WM content by enhancing representations of memory content that matches the distractor shape (i.e., circles), thus encouraging attentional capture by circle distractors of the memorized color. We conclude that templates for visual search, action planning, and WM compete for resources and thus cause interferences.
Gallivan, Jason P; Goodale, Melvyn A
2018-01-01
In 1992, Goodale and Milner proposed a division of labor in the visual pathways of the primate cerebral cortex. According to their account, the ventral pathway, which projects to occipitotemporal cortex, constructs our visual percepts, while the dorsal pathway, which projects to posterior parietal cortex, mediates the visual control of action. Although the framing of the two-visual-system hypothesis has not been without controversy, it is clear that vision for action and vision for perception have distinct computational requirements, and significant support for the proposed neuroanatomic division has continued to emerge over the last two decades from human neuropsychology, neuroimaging, behavioral psychophysics, and monkey neurophysiology. In this chapter, we review much of this evidence, with a particular focus on recent findings from human neuroimaging and monkey neurophysiology, demonstrating a specialized role for parietal cortex in visually guided behavior. But even though the available evidence suggests that dedicated circuits mediate action and perception, in order to produce adaptive goal-directed behavior there must be a close coupling and seamless integration of information processing across these two systems. We discuss such ventral-dorsal-stream interactions and argue that the two pathways play different, yet complementary, roles in the production of skilled behavior. Copyright © 2018 Elsevier B.V. All rights reserved.
Gorbet, Diana J; Sergio, Lauren E
2018-01-01
A history of action video game (AVG) playing is associated with improvements in several visuospatial and attention-related skills and these improvements may be transferable to unrelated tasks. These facts make video games a potential medium for skill-training and rehabilitation. However, examinations of the neural correlates underlying these observations are almost non-existent in the visuomotor system. Further, the vast majority of studies on the effects of a history of AVG play have been done using almost exclusively male participants. Therefore, to begin to fill these gaps in the literature, we present findings from two experiments. In the first, we use functional MRI to examine brain activity in experienced, female AVG players during visually-guided reaching. In the second, we examine the kinematics of visually-guided reaching in this population. Imaging data demonstrate that relative to women who do not play, AVG players have less motor-related preparatory activity in the cuneus, middle occipital gyrus, and cerebellum. This decrease is correlated with estimates of time spent playing. Further, these correlations are strongest during the performance of a visuomotor mapping that spatially dissociates eye and arm movements. However, further examinations of the full time-course of visuomotor-related activity in the AVG players revealed that the decreased activity during motor preparation likely results from a later onset of activity in AVG players, which occurs closer to beginning motor execution relative to the non-playing group. Further, the data presented here suggest that this later onset of preparatory activity represents greater neural efficiency that is associated with faster visually-guided responses.
Foroud, Afra; Whishaw, Ian Q
2012-06-01
Reaching-to-eat (skilled reaching) is a natural behaviour that involves reaching for, grasping and withdrawing a target to be placed into the mouth for eating. It is an action performed daily by adults and is among the first complex behaviours to develop in infants. During development, visually guided reaching becomes increasingly refined to the point that grasping of small objects with precision grips of the digits occurs at about one year of age. Integration of the hand, upper-limbs, and whole body are required for successful reaching, but the ontogeny of this integration has not been described. The present longitudinal study used Laban Movement Analysis, a behavioural descriptive method, to investigate the developmental progression of the use and integration of axial, proximal, and distal movements performed during visually guided reaching. Four infants (from 7 to 40 weeks age) were presented with graspable objects (toys or food items). The first prereaching stage was associated with activation of mouth, limb, and hand movements to a visually presented target. Next, reaching attempts consisted of first, the advancement of the head with an opening mouth and then with the head, trunk and opening mouth. Eventually, the axial movements gave way to the refined action of one upper-limb supported by axial adjustments. These findings are discussed in relation to the biological objective of reaching, the evolutionary origins of reaching, and the decomposition of reaching after neurological injury. Copyright © 2012 Elsevier B.V. All rights reserved.
The Learning of Visually Guided Action: An Information-Space Analysis of Pole Balancing
ERIC Educational Resources Information Center
Jacobs, David M.; Vaz, Daniela V.; Michaels, Claire F.
2012-01-01
In cart-pole balancing, one moves a cart in 1 dimension so as to balance an attached inverted pendulum. We approached perception-action and learning in this task from an ecological perspective. This entailed identifying a space of informational variables that balancers use as they perform the task and demonstrating that they improve by traversing…
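For readers unfamiliar with the task, cart-pole balancing has a standard formulation: a cart moves in one dimension and a hinged pole must be kept upright. The sketch below simulates those standard dynamics with a simple proportional-derivative controller, purely as an illustration of the task; the study itself analysed human learning in an information space, not an automatic controller.

```python
# Minimal cart-pole simulation (classic equations of motion, Euler integration)
# with a simple PD controller on pole angle. Illustrative only.
import math

G, M_CART, M_POLE, L, DT = 9.8, 1.0, 0.1, 0.5, 0.02  # SI units; L = half pole length

def step(x, x_dot, theta, theta_dot, force):
    total_m = M_CART + M_POLE
    temp = (force + M_POLE * L * theta_dot**2 * math.sin(theta)) / total_m
    theta_acc = (G * math.sin(theta) - math.cos(theta) * temp) / (
        L * (4.0 / 3.0 - M_POLE * math.cos(theta) ** 2 / total_m))
    x_acc = temp - M_POLE * L * theta_acc * math.cos(theta) / total_m
    return (x + DT * x_dot, x_dot + DT * x_acc,
            theta + DT * theta_dot, theta_dot + DT * theta_acc)

state = (0.0, 0.0, 0.05, 0.0)                 # start with a 0.05 rad tilt
for _ in range(500):                          # 10 s of simulated balancing
    x, x_dot, theta, theta_dot = state
    force = 50.0 * theta + 10.0 * theta_dot   # PD control on pole angle
    state = step(x, x_dot, theta, theta_dot, force)
print(f"final pole angle: {state[2]:.4f} rad")
```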
Memory-guided reaching in a patient with visual hemiagnosia.
Cornelsen, Sonja; Rennig, Johannes; Himmelbach, Marc
2016-06-01
The two-visual-systems hypothesis (TVSH) postulates that memory-guided movements rely on intact functions of the ventral stream. Its particular importance for memory-guided actions was initially inferred from behavioral dissociations in the well-known patient DF. Despite rather accurate reaching and grasping movements to visible targets, she demonstrated grossly impaired memory-guided grasping as well as impaired memory-guided reaching. These dissociations were later complemented by apparently reversed dissociations in patients with dorsal damage and optic ataxia. However, grasping studies in DF and optic ataxia patients differed with respect to the retinotopic position of target objects, questioning the interpretation of the respective findings as a double dissociation. In contrast, the findings for reaching errors in both types of patients came from similar peripheral target presentations. However, new data on brain structural changes and visuomotor deficits in DF also questioned the validity of a double dissociation in reaching. A severe visuospatial short-term memory deficit in DF further questioned the specificity of her memory-guided reaching deficit. Therefore, we compared movement accuracy in visually guided and memory-guided reaching in a new patient who suffered confined unilateral damage to the ventral visual system due to stroke. Our results indeed support previous descriptions of inaccuracies in memory-guided movements in DF. Furthermore, our data suggest that the recently discovered optic-ataxia-like misreaching in DF is most likely caused by her parieto-occipital and not by her ventral stream damage. Finally, multiple visuospatial memory measurements in HWS suggest that inaccuracies in memory-guided reaching tasks in patients with ventral damage cannot be explained by visuospatial short-term memory or perceptual deficits, but rather by a specific deficit in visuomotor processing. Copyright © 2016 Elsevier Ltd. All rights reserved.
Selective weighting of action-related feature dimensions in visual working memory.
Heuer, Anna; Schubö, Anna
2017-08-01
Planning an action primes feature dimensions that are relevant for that particular action, increasing the impact of these dimensions on perceptual processing. Here, we investigated whether action planning also affects the short-term maintenance of visual information. In a combined memory and movement task, participants were to memorize items defined by size or color while preparing either a grasping or a pointing movement. Whereas size is a relevant feature dimension for grasping, color can be used to localize the goal object and guide a pointing movement. The results showed that memory for items defined by size was better during the preparation of a grasping movement than during the preparation of a pointing movement. Conversely, memory for color tended to be better when a pointing movement rather than a grasping movement was being planned. This pattern was not only observed when the memory task was embedded within the preparation period of the movement, but also when the movement to be performed was only indicated during the retention interval of the memory task. These findings reveal that a weighting of information in visual working memory according to action relevance can even be implemented at the representational level during maintenance, demonstrating that our actions continue to influence visual processing beyond the perceptual stage.
Freud, Erez; Macdonald, Scott N; Chen, Juan; Quinlan, Derek J; Goodale, Melvyn A; Culham, Jody C
2018-01-01
In the current era of touchscreen technology, humans commonly execute visually guided actions directed to two-dimensional (2D) images of objects. Although real, three-dimensional (3D) objects and images of the same objects share a high degree of visual similarity, they differ fundamentally in the actions that can be performed on them. Indeed, previous behavioral studies have suggested that simulated grasping of images relies on different representations than actual grasping of real 3D objects. Yet the neural underpinnings of this phenomenon have not been investigated. Here we used functional magnetic resonance imaging (fMRI) to investigate how brain activation patterns differed for grasping and reaching actions directed toward real 3D objects compared to images. Multivoxel Pattern Analysis (MVPA) revealed that the left anterior intraparietal sulcus (aIPS), a key region for visually guided grasping, discriminates between both the format in which objects were presented (real/image) and the motor task performed on them (grasping/reaching). Interestingly, during action planning, the representations of real 3D objects versus images differed more for grasping movements than for reaching movements, likely because grasping real 3D objects involves fine-grained planning and anticipation of the consequences of a real interaction. Importantly, this dissociation was evident in the planning phase, before movement initiation, and was not found in any other regions, including motor and somatosensory cortices. This suggests that the dissociable representations in the left aIPS were not based on haptic, motor or proprioceptive feedback. Together, these findings provide novel evidence that actions, particularly grasping, are affected by the realness of the target objects during planning, perhaps because real targets require a more elaborate forward model based on visual cues to predict the consequences of real manipulation. Copyright © 2017 Elsevier Ltd. All rights reserved.
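Multivoxel pattern analysis of the kind described above typically amounts to cross-validated classification of condition labels from voxel patterns within a region of interest. The sketch below shows that generic recipe on synthetic data with an off-the-shelf linear classifier; it is not the decoding pipeline or data used in the study.

```python
# Minimal MVPA sketch: cross-validated decoding of stimulus format (real object vs
# image) from simulated voxel patterns in a region of interest. Synthetic data and
# a generic linear classifier, for illustration only.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_trials_per_class, n_voxels = 40, 100

# Simulated trial-wise beta patterns: the two formats differ by a small mean shift
real = rng.standard_normal((n_trials_per_class, n_voxels)) + 0.3
image = rng.standard_normal((n_trials_per_class, n_voxels))
X = np.vstack([real, image])
y = np.array([1] * n_trials_per_class + [0] * n_trials_per_class)

scores = cross_val_score(LinearSVC(), X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```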
Vision for perception and vision for action: normal and unusual development.
Dilks, Daniel D; Hoffman, James E; Landau, Barbara
2008-07-01
Evidence suggests that visual processing is divided into the dorsal ('how') and ventral ('what') streams. We examined the normal development of these streams and their breakdown under neurological deficit by comparing performance of normally developing children and Williams syndrome individuals on two tasks: a visually guided action ('how') task, in which participants posted a card into an oriented slot, and a perception ('what') task, in which they matched a card to the slot's orientation. Results showed that all groups performed worse on the action task than the perception task, but the disparity was more pronounced in WS individuals and in normal 3-4-year-olds than in older children. These findings suggest that the 'how' system may be relatively slow to develop and more vulnerable to breakdown than the 'what' system.
2012-06-11
places, resources, knowledge sets or other common Node Classes. … This example will use the Stargate dataset (SG-1). This dataset is included … create a new Meta-Network. Below is the NodeSet for Stargate with the original 16-node NodeSet. … From the main menu select Actions > Add … measures by simply gauging their size visually and intuitively. First, visualize one of your networks. Below is the Stargate agent x event network to…
Parietal neurons encode expected gains in instrumental information
Foley, Nicholas C.; Kelly, Simon P.; Mhatre, Himanshu; Gottlieb, Jacqueline
2017-01-01
In natural behavior, animals have access to multiple sources of information, but only a few of these sources are relevant for learning and actions. Beyond choosing an appropriate action, making good decisions entails the ability to choose the relevant information, but fundamental questions remain about the brain’s information sampling policies. Recent studies described the neural correlates of seeking information about a reward, but it remains unknown whether, and how, neurons encode choices of instrumental information, in contexts in which the information guides subsequent actions. Here we show that parietal cortical neurons involved in oculomotor decisions encode, before an information sampling saccade, the reduction in uncertainty that the saccade is expected to bring for a subsequent action. These responses were distinct from the neurons’ visual and saccadic modulations and from signals of expected reward or reward prediction errors. Therefore, even in an instrumental context when information and reward gains are closely correlated, individual cells encode decision variables that are based on informational factors and can guide the active sampling of action-relevant cues. PMID:28373569
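The "expected reduction in uncertainty" can be formalized as expected information gain: the entropy of the belief over actions before sampling minus the expected entropy after observing the cue. The worked example below uses a generic two-action, 80%-valid-cue setup chosen for illustration; the paper's exact decision variable is not reproduced here.

```python
# Expected information gain of an information-sampling saccade, formalised as the
# expected reduction in entropy over two possible actions. Generic worked example.
import math

def entropy(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

prior = [0.5, 0.5]                    # two equally likely correct actions
# The sampled cue reports the correct action with 80% validity
likelihoods = {"cue says A": [0.8, 0.2], "cue says B": [0.2, 0.8]}
p_cue = {msg: sum(l * pr for l, pr in zip(lik, prior)) for msg, lik in likelihoods.items()}

expected_posterior_H = 0.0
for msg, lik in likelihoods.items():
    posterior = [l * pr / p_cue[msg] for l, pr in zip(lik, prior)]
    expected_posterior_H += p_cue[msg] * entropy(posterior)

gain = entropy(prior) - expected_posterior_H
print(f"expected information gain: {gain:.3f} bits")  # ~0.278 bits for an 80%-valid cue
```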
Brain systems for visual perspective taking and action perception.
Mazzarella, Elisabetta; Ramsey, Richard; Conson, Massimiliano; Hamilton, Antonia
2013-01-01
Taking another person's viewpoint and making sense of their actions are key processes that guide social behavior. Previous neuroimaging investigations have largely studied these processes separately. The current study used functional magnetic resonance imaging to examine how the brain incorporates another person's viewpoint and actions into visual perspective judgments. Participants made a left-right judgment about the location of a target object from their own (egocentric) or an actor's visual perspective (altercentric). Actor location varied around a table and the actor was either reaching or not reaching for the target object. Analyses examined brain regions engaged in the egocentric and altercentric tasks, brain regions where response magnitude tracked the orientation of the actor in the scene and brain regions sensitive to the action performed by the actor. The blood oxygen level-dependent (BOLD) response in dorsomedial prefrontal cortex (dmPFC) was sensitive to actor orientation in the altercentric task, whereas the response in right inferior frontal gyrus (IFG) was sensitive to actor orientation in the egocentric task. Thus, dmPFC and right IFG may play distinct but complementary roles in visual perspective taking (VPT). Observation of a reaching actor compared to a non-reaching actor yielded activation in lateral occipitotemporal cortex, regardless of task, showing that these regions are sensitive to body posture independent of social context. By considering how an observed actor's location and action influence the neural bases of visual perspective judgments, the current study supports the view that multiple neurocognitive "routes" operate during VPT.
Ambrosini, Ettore; Costantini, Marcello
2017-02-01
Viewed objects have been shown to afford suitable actions, even in the absence of any intention to act. However, little is known as to whether gaze behavior (i.e., the way we simply look at objects) is sensitive to the actions afforded by the seen object and how our actual motor possibilities affect this behavior. We recorded participants' eye movements during the observation of tools, graspable and ungraspable objects, while their hands were either freely resting on the table or tied behind their back. The effects of the observed object and hand posture on gaze behavior were measured by comparing the actual fixation distribution with that predicted by 2 widely supported models of visual attention, namely the Graph-Based Visual Saliency and the Adaptive Whitening Salience models. Results showed that saliency models did not accurately predict participants' fixation distributions for tools. Indeed, participants mostly fixated the action-related, functional part of the tools, regardless of its visual saliency. Critically, the restriction of the participants' action possibility led to a significant reduction of this effect and significantly improved the model prediction of the participants' gaze behavior. We suggest, first, that action-relevant object information at least in part guides gaze behavior. Second, postural information interacts with visual information in the generation of priority maps that guide fixation behavior. We support the view that the kind of information we access from the environment is constrained by our readiness to act. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
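The study above scores saliency models by how well they predict where people actually fixate. The fragment below is a minimal sketch of one common way such a comparison can be made (smoothing fixation points into a density map and correlating it with the model's map); it is not the GBVS/AWS pipeline used in the paper, and the image size, fixation data, and smoothing sigma are invented.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density(fixations, shape, sigma=20):
    """Smooth (x, y) fixation points into an empirical fixation-density map."""
    density = np.zeros(shape)
    for x, y in fixations:
        density[int(y), int(x)] += 1
    density = gaussian_filter(density, sigma)
    return density / density.sum()

def map_correlation(saliency, density):
    """Pearson correlation between a model saliency map and a fixation density map."""
    s = (saliency - saliency.mean()) / saliency.std()
    d = (density - density.mean()) / density.std()
    return float(np.mean(s * d))

# Invented data: a 480 x 640 image with fixations clustered on one region
rng = np.random.default_rng(0)
fixations = np.column_stack([rng.normal(200, 15, 50),   # x coordinates
                             rng.normal(300, 15, 50)])  # y coordinates
saliency_model = rng.random((480, 640))                 # stand-in for a GBVS/AWS map
empirical = fixation_density(fixations, (480, 640))
print(map_correlation(saliency_model, empirical))       # near 0 for this random map
```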
Tracking without perceiving: a dissociation between eye movements and motion perception.
Spering, Miriam; Pomplun, Marc; Carrasco, Marisa
2011-02-01
Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept.
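The dissociation above hinges on pattern motion being the vector average of the two component gratings. A tiny worked example (with hypothetical drift velocities) makes the prediction concrete: two orthogonal components average to an oblique pattern direction, which is what the reflexive eye movements followed.

```python
import numpy as np

# Two orthogonally drifting gratings, one per eye (hypothetical velocities, deg/s)
component_a = np.array([5.0, 0.0])   # rightward drift
component_b = np.array([0.0, 5.0])   # upward drift

# Pattern motion predicted by the vector average of the two components
pattern = (component_a + component_b) / 2
direction = np.degrees(np.arctan2(pattern[1], pattern[0]))
print(pattern, direction)  # [2.5 2.5] 45.0 -> oblique tracking, even though the
                           # percept follows only one (horizontal or vertical) component
```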
Hayhoe, Mary M; Matthis, Jonathan Samir
2018-08-06
The development of better eye and body tracking systems, and more flexible virtual environments have allowed more systematic exploration of natural vision and contributed a number of insights. In natural visually guided behaviour, humans make continuous sequences of sensory-motor decisions to satisfy current goals, and the role of vision is to provide the relevant information in order to achieve those goals. This paper reviews the factors that control gaze in natural visually guided actions such as locomotion, including the rewards and costs associated with the immediate behavioural goals, uncertainty about the state of the world and prior knowledge of the environment. These general features of human gaze control may inform the development of artificial systems.
VisAdapt: A Visualization Tool to Support Climate Change Adaptation.
Johansson, Jimmy; Opach, Tomasz; Glaas, Erik; Neset, Tina-Simone; Navarra, Carlo; Linner, Bjorn-Ola; Rod, Jan Ketil
2017-01-01
The web-based VisAdapt visualization tool was developed to help laypeople in the Nordic countries assess how anticipated climate change will impact their homes. The tool guides users through a three-step visual process that helps them explore risks and identify adaptive actions tailored to their location and house type. This article walks through the tool's multistep, user-centered design process. Although VisAdapt's target end users are Nordic homeowners, the insights gained from the development process and the lessons learned from the project are applicable to a wide range of domains.
Playing shooter and driving videogames improves top-down guidance in visual search.
Wu, Sijing; Spence, Ian
2013-05-01
Playing action videogames is known to improve visual spatial attention and related skills. Here, we showed that playing action videogames also improves classic visual search, as well as the ability to locate targets in a dual search that mimics certain aspects of an action videogame. In Experiment 1A, first-person shooter (FPS) videogame players were faster than nonplayers in both feature search and conjunction search, and in Experiment 1B, they were faster and more accurate in a peripheral search and identification task while simultaneously performing a central search. In Experiment 2, we showed that 10 h of play could improve the performance of nonplayers on each of these tasks. Three different genres of videogames were used for training: two action games and a 3-D puzzle game. Participants who played an action game (either an FPS or a driving game) achieved greater gains on all search tasks than did those who trained using the puzzle game. Feature searches were faster after playing an action videogame, suggesting that players developed a better target template to guide search in a top-down manner. The results of the dual search suggest that, in addition to enhancing the ability to divide attention, playing an action game improves the top-down guidance of attention to possible target locations. The results have practical implications for the development of training tools to improve perceptual and cognitive skills.
Müller-Lyer figures influence the online reorganization of visually guided grasping movements.
Heath, Matthew; Rival, Christina; Neely, Kristina; Krigolson, Olav
2006-03-01
In advance of grasping a visual object embedded within fins-in and fins-out Müller-Lyer (ML) configurations, participants formulated a premovement grip aperture (GA) based on the size of a neutral preview object. Preview objects were smaller, veridical, or larger than the size of the to-be-grasped target object. As a result, premovement GA associated with the small and large preview objects required significant online reorganization to appropriately grasp the target object. We reasoned that such a manipulation would provide an opportunity to examine the extent to which the visuomotor system engages egocentric and/or allocentric visual cues for the online, feedback-based control of action. It was found that the online reorganization of GA was reliably influenced by the ML figures (i.e., from 20 to 80% of movement time), regardless of the size of the preview object, albeit the small and large preview objects elicited more robust illusory effects than the veridical preview object. These results counter the view that online grasping control is mediated by absolute visual information computed with respect to the observer (e.g., Glover in Behav Brain Sci 27:3-78, 2004; Milner and Goodale in The visual brain in action 1995). Instead, the impact of the ML figures suggests a level of interaction between egocentric and allocentric visual cues in online action control.
Children's Use of Allocentric Cues in Visually- and Memory-Guided Reach Space
ERIC Educational Resources Information Center
Cordova, Alberto; Gabbard, Carl
2012-01-01
Theory suggests that the vision-for-perception and vision-for-action processing streams operate under very different temporal constraints (Glover, 2004; Goodale, Jackobson, & Keillor, 1994; Graham, Bradshaw, & Davis, 1998; Hu, Eagleson, & Goodale, 1999). With the present study, children and young adults were asked to estimate how far a cued target…
The influence of visual motion on interceptive actions and perception.
Marinovic, Welber; Plooy, Annaliese M; Arnold, Derek H
2012-05-01
Visual information is an essential guide when interacting with moving objects, yet it can also be deceiving. For instance, motion can induce illusory position shifts, such that a moving ball can seem to have bounced past its true point of contact with the ground. Some evidence suggests illusory motion-induced position shifts bias pointing tasks to a greater extent than they do perceptual judgments. This, however, appears at odds with other findings and with our success when intercepting moving objects. Here we examined the accuracy of interceptive movements and of perceptual judgments in relation to simulated bounces. Participants were asked to intercept a moving disc at its bounce location by positioning a virtual paddle, and then to report where the disc had landed. Results showed that interceptive actions were accurate whereas perceptual judgments were inaccurate, biased in the direction of motion. Successful interceptions necessitated accurate information concerning both the location and timing of the bounce, so motor planning evidently had privileged access to an accurate forward model of bounce timing and location. This would explain why people can be accurate when intercepting a moving object, but lack insight into the accurate information that had guided their actions when asked to make a perceptual judgment. Copyright © 2012 Elsevier Ltd. All rights reserved.
Perceptual deficits of object identification: apperceptive agnosia.
Milner, A David; Cavina-Pratesi, Cristiana
2018-01-01
It is argued here that apperceptive object agnosia (generally now known as visual form agnosia) is in reality not a kind of agnosia, but rather a form of "imperception" (to use the term coined by Hughlings Jackson). We further argue that its proximate cause is a bilateral loss (or functional loss) of the visual form processing systems embodied in the human lateral occipital cortex (area LO). According to the dual-system model of cortical visual processing elaborated by Milner and Goodale (2006), area LO constitutes a crucial component of the ventral stream, and indeed is essential for providing the figural qualities inherent in our normal visual perception of the world. According to this account, the functional loss of area LO would leave only spared visual areas within the occipito-parietal dorsal stream - dedicated to the control of visually-guided actions - potentially able to provide some aspects of visual shape processing in patients with apperceptive agnosia. We review the relevant evidence from such individuals, concentrating particularly on the well-researched patient D.F. We conclude that studies of this kind can provide useful pointers to an understanding of the processing characteristics of parietal-lobe visual mechanisms and their interactions with occipitotemporal perceptual systems in the guidance of action. Copyright © 2018 Elsevier B.V. All rights reserved.
Rosen, Maya L; Stern, Chantal E; Michalka, Samantha W; Devaney, Kathryn J; Somers, David C
2015-08-12
Human parietal cortex plays a central role in encoding visuospatial information and multiple visual maps exist within the intraparietal sulcus (IPS), with each hemisphere symmetrically representing contralateral visual space. Two forms of hemispheric asymmetries have been identified in parietal cortex ventrolateral to visuotopic IPS. Key attentional processes are localized to right lateral parietal cortex in the temporoparietal junction and long-term memory (LTM) retrieval processes are localized to the left lateral parietal cortex in the angular gyrus. Here, using fMRI, we investigate how spatial representations of visuotopic IPS are influenced by stimulus-guided visuospatial attention and by LTM-guided visuospatial attention. We replicate prior findings that a hemispheric asymmetry emerges under stimulus-guided attention: in the right hemisphere (RH), visual maps IPS0, IPS1, and IPS2 code attentional targets across the visual field; in the left hemisphere (LH), IPS0-2 codes primarily contralateral targets. We report the novel finding that, under LTM-guided attention, both RH and LH IPS0-2 exhibit bilateral responses and hemispheric symmetry re-emerges. Therefore, we demonstrate that both hemispheres of IPS0-2 are independently capable of dynamically changing spatial coding properties as attentional task demands change. These findings have important implications for understanding visuospatial and memory-retrieval deficits in patients with parietal lobe damage. The human parietal lobe contains multiple maps of the external world that spatially guide perception, action, and cognition. Maps in each cerebral hemisphere code information from the opposite side of space, not from the same side, and the two hemispheres are symmetric. Paradoxically, damage to specific parietal regions that lack spatial maps can cause patients to ignore half of space (hemispatial neglect syndrome), but only for right (not left) hemisphere damage. Conversely, the left parietal cortex has been linked to retrieval of vivid memories regardless of space. Here, we investigate possible underlying mechanisms in healthy individuals. We demonstrate two forms of dynamic changes in parietal spatial representations: an asymmetric one for stimulus-guided attention and a symmetric one for long-term memory-guided attention. Copyright © 2015 the authors 0270-6474/15/3511358-06$15.00/0.
Spatial Alignment and Response Hand in Geometric and Motion Illusions
Scocchia, Lisa; Paroli, Michela; Stucchi, Natale A.; Sedda, Anna
2017-01-01
Perception of visual illusions is susceptible to manipulation of their spatial properties. Further, illusions can sometimes affect visually guided actions, especially the movement planning phase. Remarkably, visual properties of objects related to actions, such as affordances, can prime more accurate perceptual judgements. In spite of the amount of knowledge available on affordances and on the influence of illusions on actions (or lack thereof), virtually nothing is known about the reverse: the influence of action-related parameters on the perception of visual illusions. Here, we tested the hypothesis that the response mode (that can be linked to action-relevant features) can affect perception of the Poggendorff (geometric) and of the Vanishing Point (motion) illusion. We explored the role of hand dominance (right dominant versus left non-dominant hand) and its interaction with stimulus spatial alignment (i.e., congruency between visual stimulus and the hand used for responses). Seventeen right-handed participants performed our tasks with their right and left hands, and the stimuli were presented in regular and mirror-reversed views. It turned out that the regular version of the Poggendorff display generates a stronger illusion compared to the mirror version, and that participants are less accurate and show more variability when they use their left hand in responding to the Vanishing Point. In summary, our results show that there is a marginal effect of hand precision in motion-related illusions, which is absent for geometrical illusions. In the latter, attentional anisometry seems to play a greater role in generating the illusory effect. Taken together, our findings suggest that changes in the response mode (here: manual action-related parameters) do not necessarily affect illusion perception. Therefore, although intuitively speaking there should be at least unidirectional effects of perception on action, and possible interactions between the two systems, this simple study still suggests their relative independence, except for the case when the less skilled (non-dominant) hand and arguably more deliberate responses are used. PMID:28769830
Goard, Michael J; Pho, Gerald N; Woodson, Jonathan; Sur, Mriganka
2016-08-04
Mapping specific sensory features to future motor actions is a crucial capability of mammalian nervous systems. We investigated the role of visual (V1), posterior parietal (PPC), and frontal motor (fMC) cortices for sensorimotor mapping in mice during performance of a memory-guided visual discrimination task. Large-scale calcium imaging revealed that V1, PPC, and fMC neurons exhibited heterogeneous responses spanning all task epochs (stimulus, delay, response). Population analyses demonstrated unique encoding of stimulus identity and behavioral choice information across regions, with V1 encoding stimulus, fMC encoding choice even early in the trial, and PPC multiplexing the two variables. Optogenetic inhibition during behavior revealed that all regions were necessary during the stimulus epoch, but only fMC was required during the delay and response epochs. Stimulus identity can thus be rapidly transformed into behavioral choice, requiring V1, PPC, and fMC during the transformation period, but only fMC for maintaining the choice in memory prior to execution.
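The population analyses described above ask how well stimulus identity or behavioral choice can be read out from each region's activity in a given task epoch. The sketch below shows one generic way such a readout is often quantified, a cross-validated linear decoder; it is only an assumed stand-in for the paper's analyses, and the simulated activity is invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def decode_variable(population_activity, labels):
    """Cross-validated accuracy of a linear readout of a binary task variable
    (e.g., stimulus identity or behavioral choice) from trial-by-neuron activity."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, population_activity, labels, cv=5).mean()

# Invented data: 200 trials x 50 neurons with a weak stimulus-related signal
rng = np.random.default_rng(0)
stimulus = rng.integers(0, 2, 200)
weights = rng.standard_normal(50)
activity = rng.standard_normal((200, 50)) + 0.5 * stimulus[:, None] * weights
print(decode_variable(activity, stimulus))   # above chance (0.5) for this toy signal
```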
Amicuzi, Ileana; Stortini, Massimo; Petrarca, Maurizio; Di Giulio, Paola; Di Rosa, Giuseppe; Fariello, Giuseppe; Longo, Daniela; Cannatà, Vittorio; Genovese, Elisabetta; Castelli, Enrico
2006-10-01
We report the case of a 4.6-year-old girl born pre-term with early bilateral occipital damage. It was revealed that the child had non-severely impaired basic visual abilities and ocular motility, a selective perceptual deficit of figure-ground segregation, impaired visual recognition and abnormal navigation through space. Although the child's visual functioning was not optimal, this was the expression of adaptive anatomic and functional brain modifications that occurred following the early lesion. Anatomic brain structure was studied with anatomic MRI and Diffusion Tensor Imaging (DTI) MRI. This behavioral study may provide an important contribution to understanding the impact of an early lesion of the visual system on the development of visual functions and on the immature brain's potential for reorganisation related to when the damage occurred.
Asymmetric Attention: Visualizing the Uncertain Threat
2010-03-01
memory. This is supportive of earlier research by Engle (2002) suggesting that executive attention and working memory capacity are...explored by Engle (2002). Engle's findings suggest that attention or the executive function and working memory actually entail the same mental process...recognition, and action. These skills orient and guide the Soldier in operational settings from the basic perceptual process at the attentiveness stage
What puts the how in where? Tool use and the divided visual streams hypothesis.
Frey, Scott H
2007-04-01
An influential theory suggests that the dorsal (occipito-parietal) visual stream computes representations of objects for purposes of guiding actions (determining 'how') independently of ventral (occipito-temporal) stream processes supporting object recognition and semantic processing (determining 'what'). Yet, the ability of the dorsal stream alone to account for one of the most common forms of human action, tool use, is limited. While experience-dependent modifications to existing dorsal stream representations may explain simple tool use behaviors (e.g., using sticks to extend reach) found among a variety of species, skillful use of manipulable artifacts (e.g., cups, hammers, pencils) requires in addition access to semantic representations of objects' functions and uses. Functional neuroimaging suggests that this latter information is represented in a left-lateralized network of temporal, frontal and parietal areas. I submit that the well-established dominance of the human left hemisphere in the representation of familiar skills stems from the ability for this acquired knowledge to influence the organization of actions within the dorsal pathway.
Rapid steroid influences on visually guided sexual behavior in male goldfish
Lord, Louis-David; Bond, Julia; Thompson, Richmond R.
2013-01-01
The ability of steroid hormones to rapidly influence cell physiology through nongenomic mechanisms raises the possibility that these molecules may play a role in the dynamic regulation of social behavior, particularly in species in which social stimuli can rapidly influence circulating steroid levels. We therefore tested if testosterone (T), which increases in male goldfish in response to sexual stimuli, can rapidly influence approach responses towards females. Injections of T stimulated approach responses towards the visual cues of females 30–45 min after the injection but did not stimulate approach responses towards stimulus males or affect general activity, indicating that the effect is stimulus-specific and not a secondary consequence of increased arousal. Estradiol produced the same effect 30–45 min and even 10–25 min after administration, and treatment with the aromatase inhibitor fadrozole blocked exogenous T’s behavioral effect, indicating that T’s rapid stimulation of visual approach responses depends on aromatization. We suggest that T surges induced by sexual stimuli, including preovulatory pheromones, rapidly prime males to mate by increasing sensitivity within visual pathways that guide approach responses towards females and/or by increasing the motivation to approach potential mates through actions within traditional limbic circuits. PMID:19751737
Thaler, Lore; Todd, James T
2009-04-01
Two experiments are reported that were designed to measure the accuracy and reliability of both visually guided hand movements (Exp. 1) and perceptual matching judgments (Exp. 2). The specific procedure for informing subjects of the required response on each trial was manipulated so that some tasks could only be performed using an allocentric representation of the visual target; others could be performed using either an allocentric or hand-centered representation; still others could be performed based on an allocentric, hand-centered or head/eye-centered representation. Both head/eye and hand centered representations are egocentric because they specify visual coordinates with respect to the subject. The results reveal that accuracy and reliability of both motor and perceptual responses are highest when subjects direct their response towards a visible target location, which allows them to rely on a representation of the target in head/eye-centered coordinates. Systematic changes in averages and standard deviations of responses are observed when subjects cannot direct their response towards a visible target location, but have to represent target distance and direction in either hand-centered or allocentric visual coordinates instead. Subjects' motor and perceptual performance agree quantitatively well. These results strongly suggest that subjects process head/eye-centered representations differently from hand-centered or allocentric representations, but that they process visual information for motor actions and perceptual judgments together.
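In the experiments above, "accuracy" and "reliability" correspond to the systematic (constant) error and the trial-to-trial variability (standard deviation) of repeated responses. A minimal sketch of those two summary statistics, with invented response data, is given below.

```python
import numpy as np

def accuracy_and_reliability(responses, target):
    """Constant error (accuracy) and variable error (reliability) of repeated
    responses along one spatial dimension, e.g., matched or pointed distance in cm."""
    responses = np.asarray(responses, dtype=float)
    constant_error = responses.mean() - target    # systematic over/undershoot
    variable_error = responses.std(ddof=1)        # trial-to-trial spread
    return constant_error, variable_error

# Invented data: ten responses toward a target 30 cm away
responses = [29.1, 30.4, 28.8, 31.2, 29.9, 30.6, 28.5, 30.2, 29.7, 30.8]
print(accuracy_and_reliability(responses, target=30.0))
```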
Logic Models: A Tool for Effective Program Planning, Collaboration, and Monitoring. REL 2014-025
ERIC Educational Resources Information Center
Kekahio, Wendy; Lawton, Brian; Cicchinelli, Louis; Brandon, Paul R.
2014-01-01
A logic model is a visual representation of the assumptions and theory of action that underlie the structure of an education program. A program can be a strategy for instruction in a classroom, a training session for a group of teachers, a grade-level curriculum, a building-level intervention, or a district-or statewide initiative. This guide, an…
Ergonomics and accessibility for people with visual impairment in hotels.
Dos Santos, Larissa Nascimento; de Carvalho, Ricardo José Matos
2012-01-01
This article presents a diagnosis of luxury or superior hotels in the city of Natal, located in the state of Rio Grande do Norte, in northeastern Brazil, with regard to accessibility for the visually impaired. The main objective is to present the guiding principles for designing actions and interventions that must be considered in the preparation or revision of technical standards and manuals of good practice in accessibility related to people with visual impairments who are hotel users. The survey showed that the hotels do not meet the normative indications of accessibility, their facilities are inaccessible or of reduced accessibility, and their employees are not prepared to provide adequate hospitality services for people with visual impairment. It was concluded that some of the accessibility problems faced by people with visual impairments are also faced by people in general.
'What' Is Happening in the Dorsal Visual Pathway.
Freud, Erez; Plaut, David C; Behrmann, Marlene
2016-10-01
The cortical visual system is almost universally thought to be segregated into two anatomically and functionally distinct pathways: a ventral occipitotemporal pathway that subserves object perception, and a dorsal occipitoparietal pathway that subserves object localization and visually guided action. Accumulating evidence from both human and non-human primate studies, however, challenges this binary distinction and suggests that regions in the dorsal pathway contain object representations that are independent of those in ventral cortex and that play a functional role in object perception. We review here the evidence implicating dorsal object representations, and we propose an account of the anatomical organization, functional contributions, and origins of these representations in the service of perception. Copyright © 2016 Elsevier Ltd. All rights reserved.
Whitwell, Robert L; Goodale, Melvyn A; Merritt, Kate E; Enns, James T
2018-01-01
The two visual systems hypothesis proposes that human vision is supported by an occipito-temporal network for the conscious visual perception of the world and a fronto-parietal network for visually-guided, object-directed actions. Two specific claims about the fronto-parietal network's role in sensorimotor control have generated much data and controversy: (1) the network relies primarily on the absolute metrics of target objects, which it rapidly transforms into effector-specific frames of reference to guide the fingers, hands, and limbs, and (2) the network is largely unaffected by scene-based information extracted by the occipito-temporal network for those same targets. These two claims lead to the counter-intuitive prediction that in-flight anticipatory configuration of the fingers during object-directed grasping will resist the influence of pictorial illusions. The research confirming this prediction has been criticized for confounding the difference between grasping and explicit estimates of object size with differences in attention, sensory feedback, obstacle avoidance, metric sensitivity, and priming. Here, we address and eliminate each of these confounds. We asked participants to reach out and pick up 3D target bars resting on a picture of the Sander Parallelogram illusion and to make explicit estimates of the length of those bars. Participants performed their grasps without visual feedback, and were permitted to grasp the targets after making their size-estimates to afford them an opportunity to reduce illusory error with haptic feedback. The results show unequivocally that the effect of the illusion is stronger on perceptual judgments than on grasping. Our findings from the normally-sighted population provide strong support for the proposal that human vision is comprised of functionally and anatomically dissociable systems. Copyright © 2017 Elsevier Ltd. All rights reserved.
Frings, Christian; Rothermund, Klaus
2017-11-01
Perception and action are closely related. Responses are assumed to be represented in terms of their perceptual effects, allowing direct links between action and perception. In this regard, the integration of features of stimuli (S) and responses (R) into S-R bindings is a key mechanism for action control. Previous research focused on the integration of object features with response features while neglecting the context in which an object is perceived. In 3 experiments, we analyzed whether contextual features can also become integrated into S-R episodes. The data showed that a fundamental principle of visual perception, figure-ground segmentation, modulates the binding of contextual features. Only features belonging to the figure region of a context but not features forming the background were integrated with responses into S-R episodes, retrieval of which later on had an impact upon behavior. Our findings suggest that perception guides the selection of context features for integration with responses into S-R episodes. Results of our study have wide-ranging implications for an understanding of context effects in learning and behavior. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Landa, Rebecca J.; Haworth, Joshua L.; Nebel, Mary Beth
2016-01-01
Children with autism spectrum disorder (ASD) demonstrate a host of motor impairments that may share a common developmental basis with ASD core symptoms. School-age children with ASD exhibit particular difficulty with hand-eye coordination and appear to be less sensitive to visual feedback during motor learning. Sensorimotor deficits are observable as early as 6 months of age in children who later develop ASD; yet the interplay of early motor, visual and social skill development in ASD is not well understood. Integration of visual input with motor output is vital for the formation of internal models of action. Such integration is necessary not only to master a wide range of motor skills, but also to imitate and interpret the actions of others. Thus, closer examination of the early development of visual-motor deficits is of critical importance to ASD. In the present study of infants at high risk (HR) and low risk (LR) for ASD, we examined visual-motor coupling, or action anticipation, during a dynamic, interactive ball-rolling activity. We hypothesized that, compared to LR infants, HR infants would display decreased anticipatory response (perception-guided predictive action) to the approaching ball. We also examined visual attention before and during ball rolling to determine whether attention engagement contributed to differences in anticipation. Results showed that LR and HR infants demonstrated context appropriate looking behavior, both before and during the ball’s trajectory toward them. However, HR infants were less likely to exhibit context appropriate anticipatory motor response to the approaching ball (moving their arm/hand to intercept the ball) than LR infants. This finding did not appear to be driven by differences in motor skill between risk groups at 6 months of age and was extended to show an atypical predictive relationship between anticipatory behavior at 6 months and preference for looking at faces compared to objects at age 14 months in the HR group. PMID:27252667
Experientially guided robots. [for planet exploration]
NASA Technical Reports Server (NTRS)
Merriam, E. W.; Becker, J. D.
1974-01-01
This paper argues that an experientially guided robot is necessary to successfully explore far-away planets. Such a robot is characterized as having sense organs which receive sensory information from its environment and motor systems which allow it to interact with that environment. The sensori-motor information which it receives is organized into an experiential knowledge structure and this knowledge in turn is used to guide the robot's future actions. A summary is presented of a problem-solving system which is being used as a test bed for developing such a robot. The robot currently engages in the behaviors of visual tracking, focusing down, and looking around in a simulated Martian landscape. Finally, some unsolved problems are outlined whose solutions are necessary before an experientially guided robot can be produced. These problems center around organizing the motivational and memory structure of the robot and understanding its high-level control mechanisms.
Karim, A K M Rezaul; Proulx, Michael J; Likova, Lora T
2016-09-01
Orientation bias and directionality bias are two fundamental functional characteristics of the visual system. Reviewing the relevant literature in visual psychophysics and visual neuroscience, we propose here a three-stage model of directionality bias in visuospatial functioning. We call this model the 'Perception-Action-Laterality' (PAL) hypothesis. We analyzed the research findings for a wide range of visuospatial tasks, showing that there are two major directionality trends in perceptual preference: clockwise versus anticlockwise. It appears these preferences are combinatorial, such that a majority of people fall in the first category demonstrating a preference for stimuli/objects arranged from left-to-right rather than from right-to-left, while people in the second category show an opposite trend. These perceptual biases can guide sensorimotor integration and action, creating two corresponding turner groups in the population. In support of PAL, we propose another model explaining the origins of the biases: how the neurogenetic factors and the cultural factors interact in a biased competition framework to determine the direction and extent of biases. This dynamic model can explain not only the two major categories of biases in terms of direction and strength, but also the unbiased, unreliably biased or mildly biased cases in visuospatial functioning. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Wang, Quanxin; Burkhalter, Andreas
2013-01-23
Previous studies of intracortical connections in mouse visual cortex have revealed two subnetworks that resemble the dorsal and ventral streams in primates. Although calcium imaging studies have shown that many areas of the ventral stream have high spatial acuity whereas areas of the dorsal stream are highly sensitive for transient visual stimuli, there are some functional inconsistencies that challenge a simple grouping into "what/perception" and "where/action" streams known in primates. The superior colliculus (SC) is a major center for processing of multimodal sensory information and the motor control of orienting the eyes, head, and body. Visual processing is performed in superficial layers, whereas premotor activity is generated in deep layers of the SC. Because the SC is known to receive input from visual cortex, we asked whether the projections from 10 visual areas of the dorsal and ventral streams terminate in differential depth profiles within the SC. We found that inputs from primary visual cortex are by far the strongest. Projections from the ventral stream were substantially weaker, whereas the sparsest input originated from areas of the dorsal stream. Importantly, we found that ventral stream inputs terminated in superficial layers, whereas dorsal stream inputs tended to be patchy and either projected equally to superficial and deep layers or strongly preferred deep layers. The results suggest that the anatomically defined ventral and dorsal streams contain areas that belong to distinct functional systems, specialized for the processing of visual information and visually guided action, respectively.
Poon, Cynthia; Chin-Cottongim, Lisa G.; Coombes, Stephen A.; Corcos, Daniel M.
2012-01-01
It is well established that the prefrontal cortex is involved during memory-guided tasks whereas visually guided tasks are controlled in part by a frontal-parietal network. However, the nature of the transition from visually guided to memory-guided force control is not as well established. As such, this study examines the spatiotemporal pattern of brain activity that occurs during the transition from visually guided to memory-guided force control. We measured 128-channel scalp electroencephalography (EEG) in healthy individuals while they performed a grip force task. After visual feedback was removed, the first significant change in event-related activity occurred in the left central region by 300 ms, followed by changes in prefrontal cortex by 400 ms. Low-resolution electromagnetic tomography (LORETA) was used to localize the strongest activity to the left ventral premotor cortex and ventral prefrontal cortex. A second experiment altered visual feedback gain but did not require memory. In contrast to memory-guided force control, altering visual feedback gain did not lead to early changes in the left central and midline prefrontal regions. Decreasing the spatial amplitude of visual feedback did lead to changes in the midline central region by 300 ms, followed by changes in occipital activity by 400 ms. The findings show that subjects rely on sensorimotor memory processes involving left ventral premotor cortex and ventral prefrontal cortex after the immediate transition from visually guided to memory-guided force control. PMID:22696535
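The EEG result above rests on event-related averages time-locked to the removal of visual feedback. The following is a generic, illustrative sketch of that averaging step (epoching, baseline correction, averaging) on simulated data; it is not the authors' 128-channel pipeline and does not attempt their LORETA source localization.

```python
import numpy as np

def event_related_average(eeg, events, fs, tmin=-0.2, tmax=0.6):
    """Baseline-corrected average of EEG epochs time-locked to an event.

    eeg    : (n_channels, n_samples) continuous recording
    events : sample indices of the event (here, removal of visual feedback)
    fs     : sampling rate in Hz
    """
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = np.stack([eeg[:, e - pre:e + post] for e in events])
    baseline = epochs[:, :, :pre].mean(axis=2, keepdims=True)
    return (epochs - baseline).mean(axis=0)   # (n_channels, n_times)

# Invented 128-channel recording at 500 Hz with three feedback-removal events
rng = np.random.default_rng(1)
eeg = rng.standard_normal((128, 60_000))
erp = event_related_average(eeg, events=[10_000, 25_000, 40_000], fs=500)
print(erp.shape)  # (128, 400)
```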
Contextual specificity in perception and action
NASA Technical Reports Server (NTRS)
Proffitt, Dennis R.
1991-01-01
The visually guided control of helicopter flight is a human achievement, and, thus, understanding this skill is, in part, a psychological problem. The abilities of skilled pilots are impressive, and yet it is of concern that pilots' performance is less than ideal: they suffer from workload constraints, make occasional errors, and are subject to such debilities as simulator sickness. Remedying such deficiencies is both an engineering and a psychological problem. When studying the psychological aspects of this problem, it is desirable to simplify the problem as much as possible, and thereby, sidestep as many intractable psychological issues as possible. Simply stated, we do not want to have to resolve such polemics as the mind-body problem in order to contribute to the design of more effective helicopter systems. On the other hand, the study of human behavior is a psychological endeavor and certain problems cannot be evaded. Four related issues that are of psychological significance in understanding the visually guided control of helicopter flight are discussed. First, a selected discussion of the nature of descriptive levels in analyzing human perception and performance is presented. It is argued that the appropriate level of description for perception is kinematical, and for performance, it is procedural. Second, it is argued that investigations into pilot performance cannot ignore the nature of pilots' phenomenal experience. The conscious control of actions is not based upon environmental states of affairs, nor upon the optical information that specifies them. Actions are coupled to perceptions. Third, the acquisition of skilled actions in the context of inherent misperceptions is discussed. Such skills may be error prone in some situations, but not in others. Finally, I discuss the contextual relativity of human errors. Each of these four issues relates to a common theme: the control of action is mediated by phenomenal experience, the veracity of which is context specific.
Visual attention and stability
Mathôt, Sebastiaan; Theeuwes, Jan
2011-01-01
In the present review, we address the relationship between attention and visual stability. Even though with each eye, head and body movement the retinal image changes dramatically, we perceive the world as stable and are able to perform visually guided actions. However, visual stability is not as complete as introspection would lead us to believe. We attend to only a few items at a time and stability is maintained only for those items. There appear to be two distinct mechanisms underlying visual stability. The first is a passive mechanism: the visual system assumes the world to be stable, unless there is a clear discrepancy between the pre- and post-saccadic image of the region surrounding the saccade target. This is related to the pre-saccadic shift of attention, which allows for an accurate preview of the saccade target. The second is an active mechanism: information about attended objects is remapped within retinotopic maps to compensate for eye movements. The locus of attention itself, which is also characterized by localized retinotopic activity, is remapped as well. We conclude that visual attention is crucial in our perception of a stable world. PMID:21242140
Visual adaptation dominates bimodal visual-motor action adaptation
de la Rosa, Stephan; Ferstl, Ylva; Bülthoff, Heinrich H.
2016-01-01
A long-standing debate revolves around the question of whether visual action recognition primarily relies on visual or motor action information. Previous studies mainly examined the contribution of either visual or motor information to action recognition. Yet, the interaction of visual and motor action information is particularly important for understanding action recognition in social interactions, where humans often observe and execute actions at the same time. Here, we behaviourally examined the interaction of visual and motor action recognition processes when participants simultaneously observe and execute actions. We took advantage of behavioural action adaptation effects to investigate behavioural correlates of neural action recognition mechanisms. In line with previous results, we find that prolonged visual exposure (visual adaptation) and prolonged execution of the same action with closed eyes (non-visual motor adaptation) influence action recognition. However, when participants simultaneously adapted visually and motorically (akin to simultaneous execution and observation of actions in social interactions), adaptation effects were only modulated by visual but not motor adaptation. Action recognition, therefore, relies primarily on vision-based action recognition mechanisms in situations that require simultaneous action observation and execution, such as social interactions. The results suggest caution when associating social behaviour in social interactions with motor-based information. PMID:27029781
Expert anticipatory skill in striking sports: a review and a model.
Müller, Sean; Abernethy, Bruce
2012-06-01
Expert performers in striking sports can hit objects moving at high speed with incredible precision. Exceptionally well-developed anticipation skills are necessary to cope with the severe constraints on interception. In this paper, we provide a review of the empirical evidence regarding expert interception in striking sports and propose a preliminary model of expert anticipation. Central to the review and the model is the notion that the visual information used to guide the sequential phases of the striking action is systematically different between experts and nonexperts. Knowing the factors that contribute to expert anticipation, and how anticipation may guide skilled performance in striking sports, has practical implications for assessment and training across skill levels.
Altered Connectivity and Action Model Formation in Autism Is Autism
Mostofsky, Stewart H.; Ewen, Joshua B.
2014-01-01
Internal action models refer to sensory-motor programs that form the brain basis for a wide range of skilled behavior and for understanding others’ actions. Development of these action models, particularly those reliant on visual cues from the external world, depends on connectivity between distant brain regions. Studies of children with autism reveal anomalous patterns of motor learning and impaired execution of skilled motor gestures. These findings robustly correlate with measures of social and communicative function, suggesting that anomalous action model formation may contribute to impaired development of social and communicative (as well as motor) capacity in autism. Examination of the pattern of behavioral findings, as well as convergent data from neuroimaging techniques, further suggests that autism-associated action model formation may be related to abnormalities in neural connectivity, particularly decreased function of long-range connections. This line of study can lead to important advances in understanding the neural basis of autism and, more critically, can be used to guide effective therapies targeted at improving social, communicative, and motor function. PMID:21467306
Meghdadi, Amir H; Irani, Pourang
2013-12-01
We propose a novel video visual analytics system for interactive exploration of surveillance video data. Our approach consists of providing analysts with various views of information related to moving objects in a video. To do this, we first extract each object's movement path. We visualize each movement by (a) creating a single action shot image (a still image that coalesces multiple frames), (b) plotting its trajectory in a space-time cube, and (c) displaying an overall timeline view of all the movements. The action shots provide a still view of the moving object while the path view presents movement properties such as speed and location. We also provide tools for spatial and temporal filtering based on regions of interest. This allows analysts to filter out large amounts of movement activities while the action shot representation summarizes the content of each movement. We incorporated this multi-part visual representation of moving objects in sViSIT, a tool to facilitate browsing through the video content by interactive querying and retrieval of data. Based on our interaction with security personnel who routinely interact with surveillance video data, we identified some of the most common tasks performed. This resulted in designing a user study to measure time-to-completion of the various tasks. These generally required searching for specific events of interest (targets) in videos. Fourteen different tasks were designed and a total of 120 min of surveillance video was recorded (indoor and outdoor locations recording movements of people and vehicles). The times-to-completion of these tasks were compared against manual fast-forward video browsing guided by movement detection. We demonstrate how our system can facilitate lengthy video exploration and significantly reduce browsing time to find events of interest. Reports from expert users identify positive aspects of our approach, which we summarize in our recommendations for future video visual analytics systems.
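The "action shot" described above coalesces multiple frames into one still so that a whole movement is visible at a glance. The toy sketch below illustrates the general idea (paste pixels that differ from a median background onto that background); the real system works from extracted object paths, so this is only an assumed simplification with invented frame data.

```python
import numpy as np

def action_shot(frames, threshold=30):
    """Coalesce several grayscale frames into a single 'action shot': pixels that
    differ strongly from the median background are pasted onto that background,
    so one still image shows the whole movement."""
    frames = np.asarray(frames, dtype=float)
    background = np.median(frames, axis=0)
    shot = background.copy()
    for frame in frames:
        moving = np.abs(frame - background) > threshold
        shot[moving] = frame[moving]
    return shot

# Invented toy sequence: a bright object moving left to right across 5 frames
frames = np.zeros((5, 60, 100))
for i in range(5):
    frames[i, 25:35, 10 + 18 * i:20 + 18 * i] = 255

shot = action_shot(frames)
print(shot.max(), int((shot > 0).sum()))  # all five object positions appear in one image
```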
System and method for controlling a vision guided robot assembly
Lin, Yhu-Tin; Daro, Timothy; Abell, Jeffrey A.; Turner, III, Raymond D.; Casoli, Daniel J.
2017-03-07
A method includes the following steps: actuating a robotic arm to perform an action at a start position; moving the robotic arm from the start position toward a first position; determining from a vision process method if a first part from the first position will be ready to be subjected to a first action by the robotic arm once the robotic arm reaches the first position; commencing the execution of the visual processing method for determining the position deviation of the second part from the second position and the readiness of the second part to be subjected to a second action by the robotic arm once the robotic arm reaches the second position; and performing a first action on the first part using the robotic arm with the position deviation of the first part from the first position predetermined by the vision process method.
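The claimed method interleaves robot motion with a vision check so that, by the time the arm arrives at a station, the system already knows whether the part is ready and how far it deviates from its nominal position. The sketch below is a hypothetical rendering of that control flow using stub classes; none of the names correspond to an actual robot or vision API.

```python
from dataclasses import dataclass

@dataclass
class VisionResult:
    ready: bool
    deviation_xy: tuple   # measured offset of the part from its nominal position

class Robot:              # minimal stub standing in for a real controller interface
    def move_toward(self, station): print(f"moving toward {station}")
    def wait_until_at(self, station): print(f"arrived at {station}")
    def hold(self): print("holding: part not ready")

def vision_check(station):
    """Stand-in for the abstract's 'vision process method'."""
    return VisionResult(ready=True, deviation_xy=(0.4, -0.2))

def process_station(robot, station, action):
    robot.move_toward(station)        # motion toward the station starts first
    result = vision_check(station)    # the vision check runs while the arm is in transit
    robot.wait_until_at(station)
    if result.ready:
        action(robot, station, result.deviation_xy)   # act, corrected by the deviation
    else:
        robot.hold()                  # defer to a higher-level retry/skip policy

process_station(Robot(), "station_1",
                lambda r, s, d: print(f"performing action at {s}, offset {d}"))
```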
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-15
... ENVIRONMENTAL PROTECTION AGENCY [EPA-HQ-OAR-2007-0268; FRL-9833-5] Updates to Protective Action Guides Manual: Protective Action Guides (PAGs) and Planning Guidance for Radiological Incidents AGENCY: Environmental Protection Agency (EPA). ACTION: Proposed guidance; extension of comment period. SUMMARY: The U.S...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-15
... ENVIRONMENTAL PROTECTION AGENCY [EPA-HQ-OAR-2007-0268; FRL-9707-2] Updates to Protective Action Guides Manual: Protective Action Guides (PAGs) and Planning Guidance for Radiological Incidents AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of document availability for interim use and public...
Sensory signals during active versus passive movement.
Cullen, Kathleen E
2004-12-01
Our sensory systems are simultaneously activated as the result of our own actions and changes in the external world. The ability to distinguish self-generated sensory events from those that arise externally is thus essential for perceptual stability and accurate motor control. Recently, progress has been made towards understanding how this distinction is made. It has been proposed that an internal prediction of the consequences of our actions is compared to the actual sensory input to cancel the resultant self-generated activation. Evidence in support of this hypothesis has been obtained for early stages of sensory processing in the vestibular, visual and somatosensory systems. These findings have implications for the sensory-motor transformations that are needed to guide behavior.
Shade determination using camouflaged visual shade guides and an electronic spectrophotometer.
Kvalheim, S F; Øilo, M
2014-03-01
The aim of the present study was to compare a camouflaged visual shade guide to a spectrophotometer designed for restorative dentistry. Two operators performed analyses of 66 subjects. One central upper incisor was measured four times by each operator; twice with a camouflaged visual shade guide and twice with a spectrophotometer. Both methods had acceptable repeatability rates, but the electronic shade determination showed higher repeatability. In general, the electronically determined shades were darker than the visually determined shades. The use of a camouflaged visual shade guide seems to be an adequate method to reduce operator bias.
Bruno, Nicola; Uccelli, Stefano; Viviani, Eva; de'Sperati, Claudio
2016-10-01
According to a previous report, the visual coding of size does not obey Weber's law when aimed at guiding a grasp (Ganel et al., 2008a). This result has been interpreted as evidence for a fundamental difference between sensory processing in vision-for-perception, which needs to compress a wide range of physical objects to a restricted range of percepts, and vision-for-action when applied to the much narrower range of graspable and reachable objects. We compared finger aperture in a motor task (precision grip) and perceptual task (cross modal matching or "manual estimation" of the object's size). Crucially, we tested the whole range of graspable objects. We report that both grips and estimations clearly violate Weber's law with medium-to-large objects, but are essentially consistent with Weber's law with smaller objects. These results differ from previous characterizations of perception-action dissociations in the precision of representations of object size. Implications for current functional interpretations of the dorsal and ventral processing streams in the human visual system are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.
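The Weber's-law question above is whether the just-noticeable difference (JND) grows in proportion to object size (a constant Weber fraction) or stays flat across sizes. The numbers below are invented, but they show how the two patterns look once JNDs are divided by object size.

```python
import numpy as np

# Weber's law predicts a constant Weber fraction k = JND / size.
# Invented just-noticeable differences (mm) across graspable object sizes (mm):
sizes = np.array([20.0, 40.0, 60.0, 80.0])
jnd_perception = np.array([1.0, 2.1, 2.9, 4.2])   # grows roughly with size
jnd_grasp = np.array([1.8, 1.9, 2.0, 2.1])        # roughly flat across sizes

print(jnd_perception / sizes)  # ~constant fraction: consistent with Weber's law
print(jnd_grasp / sizes)       # fraction shrinks with size: a violation of Weber's law
```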
Parameswaran, Vidhya; Anilkumar, S; Lylajam, S; Rajesh, C; Narayan, Vivek
2016-01-01
This in vitro study compared the shade-matching abilities of an intraoral spectrophotometer and the conventional visual method using two shade guides. The results of previous investigations comparing color perceived by human observers with color assessed by instruments have been inconclusive. The objectives were to determine accuracies and interrater agreement of both methods and effectiveness of two shade guides with either method. In the visual method, 10 examiners with normal color vision matched target control shade tabs taken from the two shade guides (VITAPAN Classical™ and VITAPAN 3D Master™) with other full sets of the respective shade guides. Each tab was matched 3 times to determine the repeatability of the visual examiners. The spectrophotometric shade matching was performed by two independent examiners using an intraoral spectrophotometer (VITA Easyshade™) with five repetitions for each tab. Results revealed that the visual method had greater accuracy than the spectrophotometer. The spectrophotometer, however, exhibited significantly better interrater agreement as compared to the visual method. While the VITAPAN Classical shade guide was more accurate with the spectrophotometer, the VITAPAN 3D Master shade guide proved better with the visual method. This in vitro study clearly delineates the advantages and limitations of both methods. There were significant differences between the methods, with the visual method producing more accurate results than the spectrophotometric method. The spectrophotometer showed far better interrater agreement scores irrespective of the shade guide used. Even though visual shade matching is subjective, it is not inferior and should not be underrated. Judicious combination of both techniques is imperative to attain a successful and esthetic outcome.
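Interrater agreement for nominal shade codes, as discussed above, is often summarized with Cohen's kappa; the abstract does not state which statistic was used, so the following is only an illustrative computation on invented shade calls.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for agreement between two raters on nominal shade codes."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in set(rater_a) | set(rater_b)) / n**2
    return (observed - expected) / (1 - expected)

# Invented shade calls by two examiners on ten teeth (VITA Classical codes)
examiner_1 = ["A2", "A3", "B2", "A2", "A1", "A3", "B2", "A2", "A3", "A1"]
examiner_2 = ["A2", "A3", "B2", "A3", "A1", "A3", "B2", "A2", "A2", "A1"]
print(round(cohens_kappa(examiner_1, examiner_2), 2))  # ~0.73 for these invented data
```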
The Relative Importance of Language in Guiding Social Preferences Through Development
Esseily, Rana; Somogyi, Eszter; Guellai, Bahia
2016-01-01
In this paper, we review evidence from infants, toddlers, and preschoolers to tackle the question of how individuals orient preferences and actions toward social partners and how these preferences change over development. We aim to emphasize the importance of language in guiding categorization relative to other cues such as age, race and gender. We discuss the importance of language as part of a communication system that orients infants and older children's attention toward relevant information in their environment and toward affiliated social partners who are potential sources of knowledge. We argue that other cues (visually perceptible features) are less reliable in informing individuals whether others share common knowledge and whether they can be a source of information. PMID:27812345
ERIC Educational Resources Information Center
American Friends Service Committee, Philadelphia, PA. National Action/Research on the Military Industrial Complex.
A study-action guide and a companion guide are intended to help citizens explore some of the challenging dilemmas of U.S. nuclear policy. The two guides place strong emphasis on group discussion and participation as well as action citizens might want to take to bring about a non-nuclear world. The companion guide is intended for congregations and…
Verspui, Remko; Gray, John R
2009-10-01
Animals rely on multimodal sensory integration for proper orientation within their environment. For example, odour-guided behaviours often require appropriate integration of concurrent visual cues. To gain a further understanding of mechanisms underlying sensory integration in odour-guided behaviour, our study examined the effects of visual stimuli induced by self-motion and object-motion on odour-guided flight in male M. sexta. By placing stationary objects (pillars) on either side of a female pheromone plume, moths produced self-induced visual motion during odour-guided flight. These flights showed a reduction in both ground and flight speeds and inter-turn interval when compared with flight tracks without stationary objects. Presentation of an approaching 20 cm disc, to simulate object-motion, resulted in interrupted odour-guided flight and changes in flight direction away from the pheromone source. Modifications of odour-guided flight behaviour in the presence of stationary objects suggest that visual information, in conjunction with olfactory cues, can be used to control the rate of counter-turning. We suggest that the behavioural responses to visual stimuli induced by object-motion indicate the presence of a neural circuit that relays visual information to initiate escape responses. These behavioural responses also suggest the presence of a sensory conflict requiring a trade-off between olfactory and visually driven behaviours. The mechanisms underlying olfactory and visual integration are discussed in the context of these behavioural responses.
Plastic Bags and Environmental Pollution
ERIC Educational Resources Information Center
Sang, Anita Ng Heung
2010-01-01
The "Hong Kong Visual Arts Curriculum Guide," covering Primary 1 to Secondary 3 grades (Curriculum Development Committee, 2003), points to three domains of learning in visual arts: (1) visual arts knowledge; (2) visual arts appreciation and criticism; and (3) visual arts making. The "Guide" suggests learning should develop…
Internal models and prediction of visual gravitational motion.
Zago, Myrka; McIntyre, Joseph; Senot, Patrice; Lacquaniti, Francesco
2008-06-01
Baurès et al. [Baurès, R., Benguigui, N., Amorim, M.-A., & Siegler, I. A. (2007). Intercepting free falling objects: Better use Occam's razor than internalize Newton's law. Vision Research, 47, 2982-2991] rejected the hypothesis that free-falling objects are intercepted using a predictive model of gravity. They argued instead for "a continuous guide for action timing" based on visual information updated till target capture. Here we show that their arguments are flawed, because they fail to consider the impact of sensori-motor delays on interception behaviour and the need for neural compensation of such delays. When intercepting a free-falling object, the delays can be overcome by a predictive model of the effects of gravity on target motion.
Contextual Cueing: Implicit Learning and Memory of Visual Context Guides Spatial Attention.
ERIC Educational Resources Information Center
Chun, Marvin M.; Jiang, Yuhong
1998-01-01
Six experiments involving a total of 112 college students demonstrate that a robust memory for visual context exists to guide spatial attention. Results show how implicit learning and memory of visual context can guide spatial attention toward task-relevant aspects of a scene. (SLD)
Threat captures attention but does not affect learning of contextual regularities.
Yamaguchi, Motonori; Harwood, Sarah L
2017-04-01
Some of the stimulus features that guide visual attention are abstract properties of objects such as potential threat to one's survival, whereas others are complex configurations such as visual contexts that are learned through past experiences. The present study investigated the two functions that guide visual attention, threat detection and learning of contextual regularities, in visual search. Search arrays contained images of threat and non-threat objects, and their locations were fixed on some trials but random on other trials. Although they were irrelevant to the visual search task, threat objects facilitated attention capture and impaired attention disengagement. Search time improved for fixed configurations more than for random configurations, reflecting learning of visual contexts. Nevertheless, threat detection had little influence on learning of the contextual regularities. The results suggest that factors guiding visual attention are different from factors that influence learning to guide visual attention.
Parameswaran, Vidhya; Anilkumar, S.; Lylajam, S.; Rajesh, C.; Narayan, Vivek
2016-01-01
Background and Objectives: This in vitro study compared the shade matching abilities of an intraoral spectrophotometer and the conventional visual method using two shade guides. Previous investigations comparing color perceived by human observers with color assessed by instruments have been inconclusive. The objectives were to determine the accuracy and interrater agreement of both methods and the effectiveness of the two shade guides with either method. Methods: In the visual method, 10 examiners with normal color vision matched target control shade tabs taken from the two shade guides (VITAPAN Classical™ and VITAPAN 3D Master™) with other full sets of the respective shade guides. Each tab was matched 3 times to determine the repeatability of the visual examiners. The spectrophotometric shade matching was performed by two independent examiners using an intraoral spectrophotometer (VITA Easyshade™) with five repetitions for each tab. Results: The visual method had greater accuracy than the spectrophotometer. The spectrophotometer, however, exhibited significantly better interrater agreement than the visual method. While the VITAPAN Classical shade guide was more accurate with the spectrophotometer, the VITAPAN 3D Master shade guide proved better with the visual method. Conclusion: This in vitro study clearly delineates the advantages and limitations of both methods. There were significant differences between the methods, with the visual method producing more accurate results than the spectrophotometric method. The spectrophotometer showed far better interrater agreement scores irrespective of the shade guide used. Even though visual shade matching is subjective, it is not inferior and should not be underrated. A judicious combination of both techniques is imperative to attain a successful and esthetic outcome. PMID:27746599
Supèr, Hans; van der Togt, Chris; Spekreijse, Henk; Lamme, Victor A. F.
2004-01-01
We continuously scan the visual world via rapid or saccadic eye movements. Such eye movements are guided by visual information, and thus the oculomotor structures that determine when and where to look need visual information to control the eye movements. To know whether visual areas contain activity that may contribute to the control of eye movements, we recorded neural responses in the visual cortex of monkeys engaged in a delayed figure-ground detection task and analyzed the activity during the period of oculomotor preparation. We show that ≈100 ms before the onset of visually and memory-guided saccades, neural activity in V1 becomes stronger, with the strongest presaccadic responses found at the location of the saccade target. In addition, for memory-guided saccades the strength of presaccadic activity correlates with the onset of the saccade. These findings indicate that the primary visual cortex contains saccade-related responses and participates in visually guided oculomotor behavior. PMID:14970334
Automated vision occlusion-timing instrument for perception-action research.
Brenton, John; Müller, Sean; Rhodes, Robbie; Finch, Brad
2018-02-01
Vision occlusion spectacles are a highly valuable instrument for visual-perception-action research in a variety of disciplines. In sports, occlusion spectacles have enabled invaluable knowledge to be obtained about the superior capability of experts to use visual information to guide actions within in-situ settings. Triggering the spectacles to occlude a performer's vision at a precise time in an opponent's action or object flight has been problematic, due to experimenter error in using a manual button-press approach. This article describes a new laser curtain wireless trigger for vision occlusion spectacles that is portable and fast in terms of its transmission time. The laser curtain can be positioned in a variety of orientations to accept a motion trigger, such as a cricket bowler's arm that interrupts the lasers, which then activates a wireless signal that switches the occlusion spectacles from transparent to opaque in only 8 ms. Results are reported from calculations done in an electronics laboratory, as well as from tests in a performance laboratory with a cricket bowler and a baseball pitcher, which verified this short time delay before vision occlusion. In addition, our results show that occlusion consistently occurred when it was intended, that is, near ball release and during mid-ball flight. Only 8% of the collected data trials were unusable. The laser curtain improves upon the limitations of existing vision occlusion spectacle triggers, indicating that it is a valuable instrument for perception-action research in a variety of disciplines.
Cognitive Control Network Contributions to Memory-Guided Visual Attention
Rosen, Maya L.; Stern, Chantal E.; Michalka, Samantha W.; Devaney, Kathryn J.; Somers, David C.
2016-01-01
Visual attentional capacity is severely limited, but humans excel in familiar visual contexts, in part because long-term memories guide efficient deployment of attention. To investigate the neural substrates that support memory-guided visual attention, we performed a set of functional MRI experiments that contrast long-term, memory-guided visuospatial attention with stimulus-guided visuospatial attention in a change detection task. Whereas the dorsal attention network was activated for both forms of attention, the cognitive control network (CCN) was preferentially activated during memory-guided attention. Three posterior nodes in the CCN, posterior precuneus, posterior callosal sulcus/mid-cingulate, and lateral intraparietal sulcus exhibited the greatest specificity for memory-guided attention. These 3 regions exhibit functional connectivity at rest, and we propose that they form a subnetwork within the broader CCN. Based on the task activation patterns, we conclude that the nodes of this subnetwork are preferentially recruited for long-term memory guidance of visuospatial attention. PMID:25750253
Makuuchi, Michiru; Someya, Yoshiaki; Ogawa, Seiji; Takayama, Yoshihiro
2011-01-01
In visually guided grasping, possible hand shapes are computed from the geometrical features of the object, while prior knowledge about the object and the goal of the action influence both the computation and the selection of the hand shape. We investigated the system dynamics of the human brain for the pantomiming of grasping with two aspects accentuated. One is object recognition, with the use of objects for daily use. The subjects mimed grasping movements appropriate for an object presented in a photograph either by precision or power grip. The other is the selection of grip hand shape. We manipulated the selection demands for the grip hand shape by having the subjects use the same or different grip type in the second presentation of the identical object. Effective connectivity analysis revealed that the increased selection demands enhance the interaction between the anterior intraparietal sulcus (AIP) and posterior inferior temporal gyrus (pITG), and drive the converging causal influences from the AIP, pITG, and dorsolateral prefrontal cortex to the ventral premotor area (PMv). These results suggest that the dorsal and ventral visual areas interact in the pantomiming of grasping, while the PMv integrates the neural information of different regions to select the hand posture. The present study proposes system dynamics in visually guided movement toward meaningful objects, but further research is needed to examine if the same dynamics is found also in real grasping. PMID:21739528
Protective Action Guides (PAGs)
The Protective Action Guide (PAG) manual contains radiation dose guidelines that would trigger public safety measures. EPA developed Protective Action Guides to help responders plan for radiation emergencies.
Deep recurrent neural network reveals a hierarchy of process memory during dynamic natural vision.
Shi, Junxing; Wen, Haiguang; Zhang, Yizhen; Han, Kuan; Liu, Zhongming
2018-05-01
The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but they are unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to different layers of the CNN to allow spatial representations to be remembered and accumulated over time. The extended model, or the recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN at all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive windows, the dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision.
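A minimal sketch of the kind of extension described above: a recurrent (lateral) convolution added to a feedforward convolutional layer so that spatial feature maps accumulate over video frames. It assumes PyTorch and illustrates the general idea only, not the authors' published architecture or training procedure.

```python
import torch
import torch.nn as nn

class RecurrentConvLayer(nn.Module):
    """One convolutional layer with an added recurrent connection, letting
    spatial feature maps be remembered and accumulated across frames."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.feedforward = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.recurrent = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, frames):
        # frames: (time, batch, channels, height, width)
        state = None
        outputs = []
        for frame in frames:
            drive = self.feedforward(frame)
            state = torch.relu(drive if state is None else drive + self.recurrent(state))
            outputs.append(state)
        return torch.stack(outputs)

# Example: 8 video frames, batch of 2, 3 input channels, 32x32 pixels.
layer = RecurrentConvLayer(3, 16)
video = torch.randn(8, 2, 3, 32, 32)
features = layer(video)   # (8, 2, 16, 32, 32)
print(features.shape)
```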
[Cortical potentials evoked in response to a signal to make a memory-guided saccade].
Slavutskaia, M V; Moiseeva, V V; Shul'govskiĭ, V V
2010-01-01
Differences in the parameters of visually guided and memory-guided saccades were demonstrated. The longer latency of memory-guided saccades compared with visually guided saccades may indicate slower saccadic programming when spatial information has to be retrieved from memory. Comparison of the parameters and topography of the evoked-potential components N1 and P1 elicited by the signal to make a memory- or visually guided saccade suggests that the early stage of saccade programming, associated with spatial information processing, is performed predominantly by a top-down attention mechanism before memory-guided saccades and by a bottom-up mechanism before visually guided saccades. The findings show that the increased latency of memory-guided saccades is linked to decision making at the central stage of saccade programming. We propose that wave N2, which develops in the middle of the latent period of memory-guided saccades, correlates with this process. The topography and spatial dynamics of components N1, P1 and N2 indicate that memory-guided saccade programming is controlled by the frontal mediothalamic system of selective attention and by left-hemispheric mechanisms of motor attention.
Visual Arts: A Guide to Curriculum Development in the Arts.
ERIC Educational Resources Information Center
Iowa State Dept. of Public Instruction, Des Moines.
This visual arts curriculum guide was developed as a subset of a model curriculum for the arts as mandated by the Iowa legislature. It is designed to be used in conjunction with the Visual Arts in Iowa Schools (VAIS). The guide is divided into six sections: Sections one and two contain the preface, acknowledgements, and a list of members of the…
The Role of Target-Distractor Relationships in Guiding Attention and the Eyes in Visual Search
ERIC Educational Resources Information Center
Becker, Stefanie I.
2010-01-01
Current models of visual search assume that visual attention can be guided by tuning attention toward specific feature values (e.g., particular size, color) or by inhibiting the features of the irrelevant nontargets. The present study demonstrates that attention and eye movements can also be guided by a relational specification of how the target…
Where to look? Automating attending behaviors of virtual human characters
NASA Technical Reports Server (NTRS)
Chopra Khullar, S.; Badler, N. I.
2001-01-01
This research proposes a computational framework for generating visual attending behavior in an embodied simulated human agent. Such behaviors directly control eye and head motions, and guide other actions such as locomotion and reach. The implementation of these concepts, referred to as the AVA, draws on empirical and qualitative observations known from psychology, human factors and computer vision. Deliberate behaviors, the analogs of scanpaths in visual psychology, compete with involuntary attention capture and lapses into idling or free viewing. Insights provided by implementing this framework are: a defined set of parameters that impact the observable effects of attention, a defined vocabulary of looking behaviors for certain motor and cognitive activity, a defined hierarchy of three levels of eye behavior (endogenous, exogenous and idling) and a proposed method of how these types interact.
Rehabilitation regimes based upon psychophysical studies of prosthetic vision
NASA Astrophysics Data System (ADS)
Chen, S. C.; Suaning, G. J.; Morley, J. W.; Lovell, N. H.
2009-06-01
Human trials of prototype visual prostheses have successfully elicited visual percepts (phosphenes) in the visual field of implant recipients blinded through retinitis pigmentosa and age-related macular degeneration. Researchers are progressing rapidly towards a device that utilizes individual phosphenes as the elementary building blocks to compose a visual scene. This form of prosthetic vision is expected, in the near term, to have low resolution, large inter-phosphene gaps, distorted spatial distribution of phosphenes, restricted field of view, an eccentrically located phosphene field and limited number of expressible luminance levels. In order to fully realize the potential of these devices, there needs to be a training and rehabilitation program which aims to assist the prosthesis recipients to understand what they are seeing, and also to adapt their viewing habits to optimize the performance of the device. Based on the literature of psychophysical studies in simulated and real prosthetic vision, this paper proposes a comprehensive, theoretical training regime for a prosthesis recipient: visual search, visual acuity, reading, face/object recognition, hand-eye coordination and navigation. The aim of these tasks is to train the recipients to conduct visual scanning, eccentric viewing and reading, discerning low-contrast visual information, and coordinating bodily actions for visual-guided tasks under prosthetic vision. These skills have been identified as playing an important role in making prosthetic vision functional for the daily activities of their recipients.
Visual Outcomes After LASIK Using Topography-Guided vs Wavefront-Guided Customized Ablation Systems.
Toda, Ikuko; Ide, Takeshi; Fukumoto, Teruki; Tsubota, Kazuo
2016-11-01
To evaluate the visual performance of two customized ablation systems (wavefront-guided ablation and topography-guided ablation) in LASIK. In this prospective, randomized clinical study, 68 eyes of 35 patients undergoing LASIK were enrolled. Patients were randomly assigned to wavefront-guided ablation using the iDesign aberrometer and STAR S4 IR Excimer Laser system (Abbott Medical Optics, Inc., Santa Ana, CA) (wavefront-guided group; 32 eyes of 16 patients; age: 29.0 ± 7.3 years) or topography-guided ablation using the OPD-Scan aberrometer and EC-5000 CXII excimer laser system (NIDEK, Tokyo, Japan) (topography-guided group; 36 eyes of 19 patients; age: 36.1 ± 9.6 years). Preoperative manifest refraction was -4.92 ± 1.95 diopters (D) in the wavefront-guided group and -4.44 ± 1.98 D in the topography-guided group. Visual function and subjective symptoms were compared between groups before and 1 and 3 months after LASIK. Of seven subjective symptoms evaluated, four were significantly milder in the wavefront-guided group at 3 months. Contrast sensitivity with glare off at low spatial frequencies (6.3° and 4°) was significantly higher in the wavefront-guided group. Uncorrected and corrected distance visual acuity, manifest refraction, and higher order aberrations measured by OPD-Scan and iDesign were not significantly different between the two groups at 1 and 3 months after LASIK. Both customized ablation systems used in LASIK achieved excellent results in predictability and visual function. The wavefront-guided ablation system may have some advantages in the quality of vision. It may be important to select the appropriate system depending on eye conditions such as the pattern of total and corneal higher order aberrations. [J Refract Surg. 2016;32(11):727-732.]. Copyright 2016, SLACK Incorporated.
Desantis, Andrea; Haggard, Patrick
2016-01-01
To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have a strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action the audio lagged vision. In a subsequent test phase, left and right button-presses generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063
Getting a grip: different actions and visual guidance of the thumb and finger in precision grasping.
Melmoth, Dean R; Grant, Simon
2012-10-01
We manipulated the visual information available for grasping to examine what is visually guided when subjects get a precision grip on a common class of object (upright cylinders). In Experiment 1, objects (2 sizes) were placed at different eccentricities to vary the relative proximity to the participant's (n = 6) body of their thumb and finger contact positions in the final grip orientations, with vision available throughout or only for movement programming. Thumb trajectories were straighter and less variable than finger paths, and the thumb normally made initial contact with the objects at a relatively invariant landing site, but consistent thumb first-contacts were disrupted without visual guidance. Finger deviations were more affected by the object's properties and increased when vision was unavailable after movement onset. In Experiment 2, participants (n = 12) grasped 'glow-in-the-dark' objects wearing different luminous gloves in which the whole hand was visible or the thumb or the index finger was selectively occluded. Grip closure times were prolonged and thumb first-contacts disrupted when subjects could not see their thumb, whereas occluding the finger resulted in wider grips at contact because this digit remained distant from the object. Results were together consistent with visual feedback guiding the thumb in the period just prior to contacting the object, with the finger more involved in opening the grip and avoiding collision with the opposite contact surface. As people can overtly fixate only one object contact point at a time, we suggest that selecting one digit for online guidance represents an optimal strategy for initial grip placement. Other grasping tasks, in which the finger appears to be used for this purpose, are discussed.
Looking to Learn: The Effects of Visual Guidance on Observational Learning of the Golf Swing.
D'Innocenzo, Giorgia; Gonzalez, Claudia C; Williams, A Mark; Bishop, Daniel T
2016-01-01
Skilled performers exhibit more efficient gaze patterns than less-skilled counterparts do and they look more frequently at task-relevant regions than at superfluous ones. We examine whether we may guide novices' gaze towards relevant regions during action observation in order to facilitate their learning of a complex motor skill. In a Pre-test-Post-test examination of changes in their execution of the full golf swing, 21 novices viewed one of three videos at intervention: i) a skilled golfer performing 10 swings (Free Viewing, FV); ii) the same video with transient colour cues superimposed to highlight key features of the setup (Visual Guidance; VG); iii) or a History of Golf video (Control). Participants in the visual guidance group spent significantly more time looking at cued areas than did the other two groups, a phenomenon that persisted after the cues had been removed. Moreover, the visual guidance group improved their swing execution at Post-test and on a Retention test one week later. Our results suggest that visual guidance to cued areas during observational learning of complex motor skills may accelerate acquisition of the skill.
Differential effects of delay upon visually and haptically guided grasping and perceptual judgments.
Pettypiece, Charles E; Culham, Jody C; Goodale, Melvyn A
2009-05-01
Experiments with visual illusions have revealed a dissociation between the systems that mediate object perception and those responsible for object-directed action. More recently, an experiment on a haptic version of the visual size-contrast illusion has provided evidence for the notion that the haptic modality shows a similar dissociation when grasping and estimating the size of objects in real-time. Here we present evidence suggesting that the similarities between the two modalities begin to break down once a delay is introduced between when people feel the target object and when they perform the grasp or estimation. In particular, when grasping after a delay in a haptic paradigm, people scale their grasps differently when the target is presented with a flanking object of a different size (although the difference does not reflect a size-contrast effect). When estimating after a delay, however, it appears that people ignore the size of the flanking objects entirely. This does not fit well with the results commonly found in visual experiments. Thus, introducing a delay reveals important differences in the way in which haptic and visual memories are stored and accessed.
Five Steps for Structuring Data-Informed Conversations and Action in Education. REL 2013-001
ERIC Educational Resources Information Center
Kekahio, Wendy; Baker, Myriam
2013-01-01
Using data strategically to guide decisions and actions can have a positive effect on education practices and processes. This facilitation guide shows education data teams how to move beyond simply reporting data to applying data to direct strategic action. Using guiding questions, suggested activities, and activity forms, this guide provides…
A Cross-Modal Perspective on the Relationships between Imagery and Working Memory
Likova, Lora T.
2013-01-01
Mapping the distinctions and interrelationships between imagery and working memory (WM) remains challenging. Although each of these major cognitive constructs is defined and treated in various ways across studies, most accept that both imagery and WM involve a form of internal representation available to our awareness. In WM, there is a further emphasis on goal-oriented, active maintenance, and use of this conscious representation to guide voluntary action. Multicomponent WM models incorporate representational buffers, such as the visuo-spatial sketchpad, plus central executive functions. If there is a visuo-spatial “sketchpad” for WM, does imagery involve the same representational buffer? Alternatively, does WM employ an imagery-specific representational mechanism to occupy our awareness? Or do both constructs utilize a more generic “projection screen” of an amodal nature? To address these issues, in a cross-modal fMRI study, I introduce a novel Drawing-Based Memory Paradigm, and conceptualize drawing as a complex behavior that is readily adaptable from the visual to non-visual modalities (such as the tactile modality), which opens intriguing possibilities for investigating cross-modal learning and plasticity. Blindfolded participants were trained through our Cognitive-Kinesthetic Method (Likova, 2010a, 2012) to draw complex objects guided purely by the memory of felt tactile images. If this WM task had been mediated by transfer of the felt spatial configuration to the visual imagery mechanism, the response-profile in visual cortex would be predicted to have the “top-down” signature of propagation of the imagery signal downward through the visual hierarchy. Remarkably, the pattern of cross-modal occipital activation generated by the non-visual memory drawing was essentially the inverse of this typical imagery signature. The sole visual hierarchy activation was isolated to the primary visual area (V1), and accompanied by deactivation of the entire extrastriate cortex, thus ’cutting-off’ any signal propagation from/to V1 through the visual hierarchy. The implications of these findings for the debate on the interrelationships between the core cognitive constructs of WM and imagery and the nature of internal representations are evaluated. PMID:23346061
Cognitive Control Network Contributions to Memory-Guided Visual Attention.
Rosen, Maya L; Stern, Chantal E; Michalka, Samantha W; Devaney, Kathryn J; Somers, David C
2016-05-01
Visual attentional capacity is severely limited, but humans excel in familiar visual contexts, in part because long-term memories guide efficient deployment of attention. To investigate the neural substrates that support memory-guided visual attention, we performed a set of functional MRI experiments that contrast long-term, memory-guided visuospatial attention with stimulus-guided visuospatial attention in a change detection task. Whereas the dorsal attention network was activated for both forms of attention, the cognitive control network (CCN) was preferentially activated during memory-guided attention. Three posterior nodes in the CCN, posterior precuneus, posterior callosal sulcus/mid-cingulate, and lateral intraparietal sulcus exhibited the greatest specificity for memory-guided attention. These 3 regions exhibit functional connectivity at rest, and we propose that they form a subnetwork within the broader CCN. Based on the task activation patterns, we conclude that the nodes of this subnetwork are preferentially recruited for long-term memory guidance of visuospatial attention.
Visual short-term memory guides infants' visual attention.
Mitsven, Samantha G; Cantrell, Lisa M; Luck, Steven J; Oakes, Lisa M
2018-08-01
Adults' visual attention is guided by the contents of visual short-term memory (VSTM). Here we asked whether 10-month-old infants' (N = 41) visual attention is also guided by the information stored in VSTM. In two experiments, we modified the one-shot change detection task (Oakes, Baumgartner, Barrett, Messenger, & Luck, 2013) to create a simplified cued visual search task to ask how information stored in VSTM influences where infants look. A single sample item (e.g., a colored circle) was presented at fixation for 500 ms, followed by a brief (300 ms) retention interval and then a test array consisting of two items, one on each side of fixation. One item in the test array matched the sample stimulus and the other did not. Infants were more likely to look at the non-matching item than at the matching item, demonstrating that the information stored rapidly in VSTM guided subsequent looking behavior. Copyright © 2018 Elsevier B.V. All rights reserved.
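A hedged sketch of the trial structure described above (a 500 ms sample at fixation, a 300 ms retention interval, then a two-item test array with one matching and one non-matching item), written as plain Python trial specifications; the colour set and the 20-trial count are hypothetical placeholders, not the study's actual stimuli.

```python
import random
from dataclasses import dataclass

COLORS = ["red", "green", "blue", "yellow"]   # hypothetical stimulus set

@dataclass
class Trial:
    sample_color: str   # single item shown at fixation
    sample_ms: int      # sample duration (500 ms)
    retention_ms: int   # blank retention interval (300 ms)
    left_color: str     # test item left of fixation
    right_color: str    # test item right of fixation
    match_side: str     # side on which the sample colour repeats

def make_trial(rng=random):
    sample = rng.choice(COLORS)
    nonmatch = rng.choice([c for c in COLORS if c != sample])
    match_side = rng.choice(["left", "right"])
    left, right = (sample, nonmatch) if match_side == "left" else (nonmatch, sample)
    return Trial(sample, 500, 300, left, right, match_side)

trials = [make_trial() for _ in range(20)]   # hypothetical trial count
print(trials[0])
```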
The extreme relativity of perception: A new contextual effect modulates human resolving power.
Namdar, Gal; Ganel, Tzvi; Algom, Daniel
2016-04-01
The authors report the discovery of a new effect of context that modulates human resolving power with respect to an individual stimulus. They show that the size of the difference threshold or the just noticeable difference around a standard stimulus depends on the range of the other standards tested simultaneously for resolution within the same experimental session. The larger this range, the poorer the resolving power for a given standard. The authors term this effect the range of standards effect (RSE). They establish this result both in the visual domain for the perception of linear extent, and in the somatosensory domain for the perception of weight. They discuss the contingent nature of stimulus resolution in perception and psychophysics and contrast it with the immunity to contextual influences of visually guided action.
de la Rosa, Stephan; Ekramnia, Mina; Bülthoff, Heinrich H.
2016-01-01
The ability to discriminate between different actions is essential for action recognition and social interactions. Surprisingly previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g., left-right direction discrimination tasks. It is not known to what degree visual processes in direction discrimination tasks are also involved in the discrimination of actions, e.g., when telling apart a handshake from a high-five. Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target action and direction discrimination specific visual processes. In separate conditions participants visually adapted to forward and backward moving handshake and high-five actions. Participants subsequently categorized either the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms. PMID:26941633
Scene perception and the visual control of travel direction in navigating wood ants
Collett, Thomas S.; Lent, David D.; Graham, Paul
2014-01-01
This review reflects a few of Mike Land's many and varied contributions to visual science. In it, we show for wood ants, as Mike has done for a variety of animals, including readers of this piece, what can be learnt from a detailed analysis of an animal's visually guided eye, head or body movements. In the case of wood ants, close examination of their body movements, as they follow visually guided routes, is starting to reveal how they perceive and respond to their visual world and negotiate a path within it. We describe first some of the mechanisms that underlie the visual control of their paths, emphasizing that vision is not the ant's only sense. In the second part, we discuss how remembered local shape-dependent and global shape-independent features of a visual scene may interact in guiding the ant's path. PMID:24395962
Anderson, Joe; Bingham, Geoffrey P
2010-09-01
We provide a solution to a major problem in visually guided reaching. Research has shown that binocular vision plays an important role in the online visual guidance of reaching, but the visual information and strategy used to guide a reach remains unknown. We propose a new theory of visual guidance of reaching including a new information variable, tau(alpha) (relative disparity tau), and a novel control strategy that allows actors to guide their reach trajectories visually by maintaining a constant proportion between tau(alpha) and its rate of change. The dynamical model couples the information to the reaching movement to generate trajectories characteristic of human reaching. We tested the theory in two experiments in which participants reached under conditions of darkness to guide a visible point either on a sliding apparatus or on their finger to a point-light target in depth. Slider apparatus controlled for a simple mapping from visual to proprioceptive space. When reaching with their finger, participants were forced, by perturbation of visual information used for feedforward control, to use online control with only binocular disparity-based information for guidance. Statistical analyses of trajectories strongly supported the theory. Simulations of the model were compared statistically to actual reaching trajectories. The results supported the theory, showing that tau(alpha) provides a source of information for the control of visually guided reaching and that participants use this information in a proportional rate control strategy.
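On one reading of the control strategy sketched above (a hedged reconstruction from the abstract, not the authors' exact equations), the tau of the relative-disparity gap α between the moving hand and the target is its current value divided by its rate of closure, and the proposed strategy holds the proportion between τα and its own rate of change constant during the reach.

```latex
% Tau of the relative-disparity gap \alpha between hand and target:
\tau_{\alpha}(t) = \frac{\alpha(t)}{\dot{\alpha}(t)}
% Proportional rate control strategy: keep the proportion between
% \tau_{\alpha} and its rate of change constant throughout the reach.
\frac{\tau_{\alpha}(t)}{\dot{\tau}_{\alpha}(t)} = k \quad (\text{constant})
```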
Mechanisms underlying selecting objects for action
Wulff, Melanie; Laverick, Rosanna; Humphreys, Glyn W.; Wing, Alan M.; Rotshtein, Pia
2015-01-01
We assessed the factors that affect the selection of objects for action, focusing on the role of action knowledge and its modulation by distracters. Fourteen neuropsychological patients and 10 healthy age-matched controls selected pairs of objects commonly used together among distracters in two contexts: with real objects and with pictures of the same objects presented sequentially on a computer screen. Across both tasks, semantically related distracters led to slower responses and more errors than unrelated distracters, and the object actively used for action was selected prior to the object that would be passively held during the action. We identified a sub-group of patients (N = 6) whose accuracy was 2 SDs below the controls' performance in the real object task. Interestingly, these impaired patients were more affected by the presence of unrelated distracters during both tasks than intact patients and healthy controls. Note that the impaired patients had lesions to left parietal, right anterior temporal and bilateral pre-motor regions. We conclude that: (1) motor procedures guide object selection for action, (2) semantic knowledge affects action-based selection, (3) impaired action decision making is associated with the inability to ignore distracting information and (4) lesions to either the dorsal or ventral visual stream can lead to deficits in making action decisions. Overall, the data indicate that impairments in everyday tasks can be evaluated using a simulated computer task. The implications for rehabilitation are discussed. PMID:25954177
AMERICAN STANDARD GUIDE FOR SCHOOL LIGHTING.
ERIC Educational Resources Information Center
Illuminating Engineering Society, New York, NY.
This is a guide for school lighting, designed for educators as well as architects. It makes use of recent research, notably the Blackwell report on the evaluation of visual tasks. The guide begins with an overview of the changing goals and needs of school lighting, and a tabulation of common classroom visual tasks that require variations in lighting.…
Top-down contextual knowledge guides visual attention in infancy.
Tummeltshammer, Kristen; Amso, Dima
2017-10-26
The visual context in which an object or face resides can provide useful top-down information for guiding attention orienting, object recognition, and visual search. Although infants have demonstrated sensitivity to covariation in spatial arrays, it is presently unclear whether they can use rapidly acquired contextual knowledge to guide attention during visual search. In this eye-tracking experiment, 6- and 10-month-old infants searched for a target face hidden among colorful distracter shapes. Targets appeared in Old or New visual contexts, depending on whether the visual search arrays (defined by the spatial configuration, shape and color of component items in the search display) were repeated or newly generated throughout the experiment. Targets in Old contexts appeared in the same location within the same configuration, such that context covaried with target location. Both 6- and 10-month-olds successfully distinguished between Old and New contexts, exhibiting faster search times, fewer looks at distracters, and more anticipation of targets when contexts repeated. This initial demonstration of contextual cueing effects in infants indicates that they can use top-down information to facilitate orienting during memory-guided visual search. © 2017 John Wiley & Sons Ltd.
Animal Preparations to Assess Neurophysiological Effects of Bio-Dynamic Environments.
1980-07-17
Fragmentary abstract (OCR-damaged): the studies examined the role of visual deprivation in preventing the acquisition of visually guided behaviors, and a follow-up study examined the acquisition of visually guided behaviors in six animals. Surviving reference fragments cite Maffei, L., & Bisti, S. (1976). Binocular interaction in strabismic kittens deprived of vision. Science, 191, 579-580; a study of function in cat visual cortex following prolonged deprivation, Exp. Brain Res., 25 (1976), 139-156; and Hein, A., on visually controlled components of movement.
Soares, Sandra C.; Maior, Rafael S.; Isbell, Lynne A.; Tomaz, Carlos; Nishijo, Hisao
2017-01-01
Primates are distinguished from other mammals by their heavy reliance on the visual sense, which occurred as a result of natural selection continually favoring those individuals whose visual systems were more responsive to challenges in the natural world. Here we describe two independent but also interrelated visual systems, one cortical and the other subcortical, both of which have been modified and expanded in primates for different functions. Available evidence suggests that while the cortical visual system mainly functions to give primates the ability to assess and adjust to fluid social and ecological environments, the subcortical visual system appears to function as a rapid detector and first responder when time is of the essence, i.e., when survival requires very quick action. We focus here on the subcortical visual system with a review of behavioral and neurophysiological evidence that demonstrates its sensitivity to particular, often emotionally charged, ecological and social stimuli, i.e., snakes and fearful and aggressive facial expressions in conspecifics. We also review the literature on subcortical involvement during another, less emotional, situation that requires rapid detection and response—visually guided reaching and grasping during locomotion—to further emphasize our argument that the subcortical visual system evolved as a rapid detector/first responder, a function that remains in place today. Finally, we argue that investigating deficits in this subcortical system may provide greater understanding of Parkinson's disease and Autism Spectrum disorders (ASD). PMID:28261046
Functional neural substrates of posterior cortical atrophy patients.
Shames, H; Raz, N; Levin, Netta
2015-07-01
Posterior cortical atrophy (PCA) is a neurodegenerative syndrome in which the most pronounced pathologic involvement is in the occipito-parietal visual regions. Herein, we aimed to better define the cortical reflection of this unique syndrome using a thorough battery of behavioral and functional MRI (fMRI) tests. Eight PCA patients underwent extensive testing to map their visual deficits. Assessments included visual functions associated with lower and higher components of the cortical hierarchy, as well as dorsal- and ventral-related cortical functions. fMRI was performed on five patients to examine the neuronal substrate of their visual functions. The PCA patient cohort exhibited impairments in stereopsis, saccadic eye movements and higher dorsal stream-related functions, including simultaneous perception, image orientation, figure-ground segregation, closure and spatial orientation. In accordance with the behavioral findings, fMRI revealed intact activation in the ventral visual regions of face and object perception, whereas more dorsal aspects of perception, including motion and gestalt perception, showed impaired patterns of activity. In most of the patients, there was a lack of activity in the word form area, which is known to be linked to reading disorders. Finally, there was evidence of reduced cortical representation of the peripheral visual field, corresponding to the behaviorally assessed peripheral visual deficit. The findings are discussed in the context of networks extending from parietal regions, which mediate navigationally related processing, visually guided actions, eye movement control and working memory, suggesting that damage to these networks might explain the wide range of deficits in PCA patients.
The effect of different brightness conditions on visually and memory guided saccades.
Felßberg, Anna-Maria; Dombrowe, Isabel
2018-01-01
It is commonly assumed that saccades in the dark are slower than saccades in a lit room. Early studies that investigated this issue using electrooculography (EOG) often compared memory guided saccades in darkness to visually guided saccades in an illuminated room. However, later studies showed that memory guided saccades are generally slower than visually guided saccades. Research on this topic is further complicated by the fact that the different existing eyetracking methods do not necessarily lead to consistent measurements. In the present study, we independently manipulated task (memory guided/visually guided) and screen brightness (dark, medium and light) in an otherwise completely dark room, and measured the peak velocity and the duration of the participant's saccades using a popular pupil-cornea reflection (p-cr) eyetracker (Eyelink 1000). Based on a critical reading of the literature, including a recent study using cornea-reflection (cr) eye tracking, we did not expect any velocity or duration differences between the three brightness conditions. We found that memory guided saccades were generally slower than visually guided saccades. In both tasks, eye movements on a medium and light background were equally fast and had similar durations. However, saccades on the dark background were slower and had shorter durations, even after we corrected for the effect of pupil size changes. This means that this is most likely an artifact of current pupil-based eye tracking. We conclude that the common assumption that saccades in the dark are slower than in the light is probably not true, however pupil-based eyetrackers tend to underestimate the peak velocity of saccades on very dark backgrounds, creating the impression that this might be the case. Copyright © 2017 Elsevier Ltd. All rights reserved.
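Peak saccadic velocity, the measure at issue above, is typically obtained by differentiating the gaze-position samples and taking the maximum of a lightly smoothed speed trace. The sketch below is a generic illustration of that computation on synthetic data, not the proprietary parser of any particular eye tracker.

```python
import numpy as np

def peak_velocity(x_deg, y_deg, fs=1000.0):
    """Peak angular velocity (deg/s) of a saccade from gaze samples.
    x_deg, y_deg: gaze position in degrees; fs: sampling rate in Hz.
    Uses central differences plus a short moving-average smoother."""
    vx = np.gradient(np.asarray(x_deg, float), 1.0 / fs)
    vy = np.gradient(np.asarray(y_deg, float), 1.0 / fs)
    speed = np.hypot(vx, vy)
    kernel = np.ones(5) / 5.0          # 5-sample moving average
    return np.convolve(speed, kernel, mode="same").max()

# Toy example: a 10-degree horizontal saccade lasting ~40 ms, sampled at 1000 Hz.
t = np.arange(0, 0.04, 0.001)
x = 10 * (1 - np.cos(np.pi * t / 0.04)) / 2   # smooth position profile
y = np.zeros_like(x)
print(round(peak_velocity(x, y), 1), "deg/s")
```

Because pupil-based trackers infer gaze partly from the pupil image, systematic pupil-size changes on very dark backgrounds can distort such velocity estimates, which is the artifact the authors describe.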
Rafique, Sara A; Northway, Nadia
2015-08-01
Ocular accommodation provides a well-focussed image, feedback for accurate eye movement control, and cues for depth perception. To accurately perform visually guided motor tasks, integration of ocular motor systems is essential. Children with motor coordination impairment are established to be at higher risk of accommodation anomalies. The aim of the present study was to examine the relationship between ocular accommodation and motor tasks, which are often overlooked, in order to better understand the problems experienced by children with motor coordination impairment. Visual function, gross and fine motor skills were assessed in children with developmental coordination disorder (DCD) and typically developing control children. Children with DCD had significantly poorer accommodation facility and amplitude dynamics compared to controls. Results indicate a relationship between impaired accommodation and motor skills. Specifically, accommodation anomalies correlated with visual motor, upper limb and fine dexterity task performance. Consequently, we argue accommodation anomalies influence the ineffective coordination of action and perception in DCD. Furthermore, reading disabilities were related to poorer motor performance. We postulate the role of the fastigial nucleus as a common pathway for accommodation and motor deficits. Implications of the findings and recommended visual screening protocols are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
Looking and touching: What extant approaches reveal about the structure of early word knowledge
Hendrickson, Kristi; Mitsven, Samantha; Poulin-Dubois, Diane; Zesiger, Pascal; Friend, Margaret
2014-01-01
The goal of the current study is to assess the temporal dynamics of vision and action to evaluate the underlying word representations that guide infants’ responses. Sixteen-month-old infants participated in a two-alternative forced-choice word-picture matching task. We conducted a moment-by-moment analysis of looking and reaching behaviors as they occurred in tandem to assess the speed with which a prompted word was processed (visual reaction time) as a function of the type of haptic response: Target, Distractor, or No Touch. Visual reaction times (visual RTs) were significantly slower during No Touches compared to Distractor and Target Touches, which were statistically indistinguishable. The finding that visual RTs were significantly faster during Distractor Touches compared to No Touches suggests that incorrect and absent haptic responses appear to index distinct knowledge states: incorrect responses are associated with partial knowledge whereas absent responses appear to reflect a true failure to map lexical items to their target referents. Further, we found that those children who were faster at processing words were also those children who exhibited better haptic performance. This research provides a methodological clarification on knowledge measured by the visual and haptic modalities and new evidence for a continuum of word knowledge in the second year of life. PMID:25444711
Focus on Hinduism: Audio-Visual Resources for Teaching Religion. Occasional Publication No. 23.
ERIC Educational Resources Information Center
Dell, David; And Others
The guide presents annotated lists of audio and visual materials about the Hindu religion. The authors point out that Hinduism cannot be comprehended totally by reading books; thus the resources identified in this guide will enhance understanding based on reading. The guide is intended for use by high school and college students, teachers,…
Sensor-Based Electromagnetic Navigation (Mediguide®): How Accurate Is It? A Phantom Model Study.
Bourier, Felix; Reents, Tilko; Ammar-Busch, Sonia; Buiatti, Alessandra; Grebmer, Christian; Telishevska, Marta; Brkic, Amir; Semmler, Verena; Lennerz, Carsten; Kaess, Bernhard; Kottmaier, Marc; Kolb, Christof; Deisenhofer, Isabel; Hessling, Gabriele
2015-10-01
Data about localization reproducibility as well as spatial and visual accuracy of the new MediGuide® sensor-based electroanatomic navigation technology are scarce. We therefore sought to quantify these parameters based on phantom experiments. A realistic heart phantom was generated in a 3D-Printer. A CT scan was performed on the phantom. The phantom itself served as ground-truth reference to ensure exact and reproducible catheter placement. A MediGuide® catheter was repeatedly tagged at selected positions to assess accuracy of point localization. The catheter was also used to acquire a MediGuide®-scaled geometry in the EnSite Velocity® electroanatomic mapping system. The acquired geometries (MediGuide®-scaled and EnSite Velocity®-scaled) were compared to a CT segmentation of the phantom to quantify concordance. Distances between landmarks were measured in the EnSite Velocity®- and MediGuide®-scaled geometry and the CT dataset for Bland-Altman comparison. The visualization of virtual MediGuide® catheter tips was compared to their corresponding representation on fluoroscopic cine-loops. Point localization accuracy was 0.5 ± 0.3 mm for MediGuide® and 1.4 ± 0.7 mm for EnSite Velocity®. The 3D accuracy of the geometries was 1.1 ± 1.4 mm (MediGuide®-scaled) and 3.2 ± 1.6 mm (not MediGuide®-scaled). The offset between virtual MediGuide® catheter visualization and catheter representation on corresponding fluoroscopic cine-loops was 0.4 ± 0.1 mm. The MediGuide® system shows a very high level of accuracy regarding localization reproducibility as well as spatial and visual accuracy, which can be ascribed to the magnetic field localization technology. The observed offsets between the geometry visualization and the real phantom are below a clinically relevant threshold. © 2015 Wiley Periodicals, Inc.
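The abstract mentions Bland-Altman comparison of inter-landmark distances. As a generic reminder of that analysis (with hypothetical numbers, not the study's data), the bias is the mean difference between the two methods and the 95% limits of agreement are the bias ± 1.96 SD of the differences.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bland-Altman bias and 95% limits of agreement between two
    measurement methods applied to the same items."""
    a, b = np.asarray(method_a, float), np.asarray(method_b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical inter-landmark distances (mm) from a CT dataset and from an
# electroanatomically acquired geometry.
ct = [12.4, 25.1, 33.0, 18.7, 40.2, 22.5]
geo = [12.9, 24.6, 34.1, 19.2, 39.5, 23.3]
bias, (lo, hi) = bland_altman(geo, ct)
print(f"bias = {bias:.2f} mm, 95% LoA = [{lo:.2f}, {hi:.2f}] mm")
```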
Modeling the role of parallel processing in visual search.
Cave, K R; Wolfe, J M
1990-04-01
Treisman's Feature Integration Theory and Julesz's Texton Theory explain many aspects of visual search. However, these theories require that parallel processing mechanisms not be used in many visual searches for which they would be useful, and they imply that visual processing should be much slower than it is. Most importantly, they cannot account for recent data showing that some subjects can perform some conjunction searches very efficiently. Feature Integration Theory can be modified so that it accounts for these data and helps to answer these questions. In this new theory, which we call Guided Search, the parallel stage guides the serial stage as it chooses display elements to process. A computer simulation of Guided Search produces the same general patterns as human subjects in a number of different types of visual search.
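For readers unfamiliar with the core idea, here is a minimal, hypothetical simulation of how a parallel activation map can guide a serial stage; the feature scoring and noise model are assumptions for illustration, not the published simulation:

```python
import random

def guided_search(items, target_features, noise=0.3):
    """items: list of dicts with 'features' (set) and 'is_target' (bool).
    Parallel stage: score each item by shared target features plus noise.
    Serial stage: inspect items in descending activation; return how many were inspected."""
    activations = []
    for item in items:
        top_down = len(item["features"] & target_features)     # guidance from known target features
        activations.append(top_down + random.gauss(0, noise))  # noisy parallel estimate
    order = sorted(range(len(items)), key=lambda i: activations[i], reverse=True)
    for n, idx in enumerate(order, start=1):
        if items[idx]["is_target"]:
            return n   # fewer inspections = a more efficient (guided) conjunction search
    return None

# Hypothetical conjunction search: a red vertical target among red-horizontal
# and green-vertical distractors.
display = ([{"features": {"red", "horizontal"}, "is_target": False}] * 8 +
           [{"features": {"green", "vertical"}, "is_target": False}] * 8 +
           [{"features": {"red", "vertical"}, "is_target": True}])
print(guided_search(display, {"red", "vertical"}))
```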
Consumer Control Points: Creating a Visual Food Safety Education Model for Consumers.
ERIC Educational Resources Information Center
Schiffman, Carole B.
Consumer education has always been a primary consideration in the prevention of food-borne illness. Using nutrition education and the new food guide as a model, this paper develops suggestions for a framework of microbiological food safety principles and a compatible visual model for communicating key concepts. Historically, visual food guides in…
Colorado Multicultural Resources for Arts Education: Dance, Music, Theatre, and Visual Art.
ERIC Educational Resources Information Center
Cassio, Charles J., Ed.
This Colorado resource guide is based on the premise that the arts (dance, music, theatre, and visual art) provide a natural arena for teaching multiculturalism to students of all ages. The guide provides information to Colorado schools about printed, disc, video, and audio tape visual prints, as well as about individuals and organizations that…
Visual adaptation enhances action sound discrimination.
Barraclough, Nick E; Page, Steve A; Keefe, Bruce D
2017-01-01
Prolonged exposure, or adaptation, to a stimulus in 1 modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in 1 modality can bias perception in another modality. Here, we show a novel crossmodal adaptation effect, where adaptation to a visual stimulus enhances subsequent auditory perception. We found that when compared to no adaptation, prior adaptation to visual, auditory, or audiovisual hand actions enhanced discrimination between 2 subsequently presented hand action sounds. Discrimination was most enhanced when the visual action "matched" the auditory action. In addition, prior adaptation to a visual, auditory, or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by postperceptual responses or decisions. More generally, these results together indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.
Kinesthesis can make an invisible hand visible
Dieter, Kevin C.; Hu, Bo; Knill, David C.; Blake, Randolph; Tadin, Duje
2014-01-01
Self-generated body movements have reliable visual consequences. This predictive association between vision and action likely underlies modulatory effects of action on visual processing. However, it is unknown if our own actions can have generative effects on visual perception. We asked whether, in total darkness, self-generated body movements are sufficient to evoke normally concomitant visual perceptions. Using a deceptive experimental design, we discovered that waving one’s own hand in front of one’s covered eyes can cause visual sensations of motion. Conjecturing that these visual sensations arise from multisensory connectivity, we showed that individuals with synesthesia experience substantially stronger kinesthesis-induced visual sensations. Finally, we found that the perceived vividness of kinesthesis-induced visual sensations predicted participants’ ability to smoothly eye-track self-generated hand movements in darkness, indicating that these sensations function like typical retinally-driven visual sensations. Evidently, even in the complete absence of external visual input, our brains predict visual consequences of our actions. PMID:24171930
Rogers, Donna R B; Ei, Sue; Rogers, Kim R; Cross, Chad L
2007-05-01
This pilot study examines the use of guided visualizations that incorporate both cognitive and behavioral techniques with vibroacoustic therapy and cranial electrotherapy stimulation to form a multi-component therapeutic approach. This multi-component approach to cognitive-behavioral therapy (CBT) was used to treat patients presenting with a range of symptoms including anxiety, depression, and relationship difficulties. Clients completed a pre- and post-session symptom severity scale and CBT skills practice survey. The program consisted of 16 guided visualizations incorporating CBT techniques that were accompanied by vibroacoustic therapy and cranial electrotherapy stimulation. Significant reduction in symptom severity was observed in pre- and post-session scores for anxiety symptoms, relationship difficulties, and depressive symptoms. The majority of the clients (88%) reported use of CBT techniques learned in the guided visualizations at least once per week outside of the sessions.
Towards Guided Underwater Survey Using Light Visual Odometry
NASA Astrophysics Data System (ADS)
Nawaf, M. M.; Drap, P.; Royer, J. P.; Merad, D.; Saccone, M.
2017-02-01
A light, distributed visual odometry method adapted to an embedded hardware platform is proposed. The aim is to guide underwater surveys in real time. We rely on an image stream captured by a portable stereo rig attached to the embedded system. Captured images are analyzed on the fly to assess image quality in terms of sharpness and lightness, so that immediate actions can be taken accordingly. Images are then transferred over the network to another processing unit that computes the odometry. Building on a standard ego-motion estimation approach, we speed up point matching between image quadruplets with a low-level scheme that combines the fast Harris operator and template matching invariant to illumination changes. Because the light source is attached to the hardware platform, an a priori rough depth estimate can be derived from the law of light divergence over distance. This rough depth is used to limit the correspondence search zone, since the search range depends linearly on disparity. A stochastic relative bundle adjustment is applied to minimize re-projection errors. The evaluation of the proposed method demonstrates a gain in computation time with respect to approaches that use more sophisticated feature descriptors. The resulting system opens promising avenues for further development and integration of embedded computer vision techniques.
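A minimal sketch of the matching step described above, assuming OpenCV is available; the parameter values, thresholds, and the rough-depth value standing in for the light-falloff estimate are illustrative assumptions, not the authors' implementation:

```python
import cv2
import numpy as np

def match_corners(left, right, rough_depth_m, focal_px, baseline_m, patch=11):
    """Match Harris corners from the left image into the right image with template
    matching, searching only up to a disparity bound implied by a rough depth prior
    (disparity ~= focal * baseline / depth)."""
    max_disp = int(focal_px * baseline_m / rough_depth_m * 1.5)  # 50% margin on the prior
    corners = cv2.goodFeaturesToTrack(left, maxCorners=200, qualityLevel=0.01,
                                      minDistance=8, useHarrisDetector=True, k=0.04)
    if corners is None:
        return []
    half = patch // 2
    matches = []
    for c in corners.reshape(-1, 2):
        x, y = int(c[0]), int(c[1])
        if (y - half < 0 or y + half + 1 > left.shape[0] or
                x - half < 0 or x + half + 1 > left.shape[1]):
            continue
        tmpl = left[y - half:y + half + 1, x - half:x + half + 1]
        x0 = max(0, x - max_disp - half)
        strip = right[y - half:y + half + 1, x0:x + half + 1]
        if strip.shape[1] < patch:
            continue
        res = cv2.matchTemplate(strip, tmpl, cv2.TM_CCOEFF_NORMED)  # normalized => illumination tolerant
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > 0.8:
            matches.append(((x, y), (x0 + loc[0] + half, y)))
    return matches

# Hypothetical usage with a rectified grayscale stereo pair and rough parameters:
# matches = match_corners(img_left, img_right, rough_depth_m=2.0, focal_px=800, baseline_m=0.12)
```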
ERIC Educational Resources Information Center
Department of Justice, Washington, DC. Civil Rights Div.
This item consists of three separate "Technical Assistance Guides" combined into one document because they all are concerned with improving access to information for handicapped people. Specifically, the three guides provide: (1) information to enable hearing impaired, visually impaired, and mobility impaired persons to have access to public…
The visual analysis of emotional actions.
Chouchourelou, Arieta; Matsuka, Toshihiko; Harber, Kent; Shiffrar, Maggie
2006-01-01
Is the visual analysis of human actions modulated by the emotional content of those actions? This question is motivated by a consideration of the neuroanatomical connections between visual and emotional areas. Specifically, the superior temporal sulcus (STS), known to play a critical role in the visual detection of action, is extensively interconnected with the amygdala, a center for emotion processing. To the extent that amygdala activity influences STS activity, one would expect to find systematic differences in the visual detection of emotional actions. A series of psychophysical studies tested this prediction. Experiment 1 identified point-light walker movies that convincingly depicted five different emotional states: happiness, sadness, neutral, anger, and fear. In Experiment 2, participants performed a walker detection task with these movies. Detection performance was systematically modulated by the emotional content of the gaits. Participants demonstrated the greatest visual sensitivity to angry walkers. The results of Experiment 3 suggest that local velocity cues to anger may account for high false alarm rates to the presence of angry gaits. These results support the hypothesis that the visual analysis of human action depends upon emotion processes.
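Detection results of this kind are commonly summarized with signal-detection measures; as a generic illustration (not necessarily the authors' exact analysis, and with hypothetical counts), d-prime per emotion category can be computed as follows:

```python
from statistics import NormalDist

def d_prime(hits, misses, fas, crs):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear correction
    so that rates of exactly 0 or 1 do not yield infinite z-scores."""
    z = NormalDist().inv_cdf
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (fas + 0.5) / (fas + crs + 1.0)
    return z(hr) - z(far)

# Hypothetical counts per emotion (hits, misses, false alarms, correct rejections):
for emotion, counts in {"angry": (46, 4, 18, 32), "happy": (40, 10, 9, 41)}.items():
    print(emotion, round(d_prime(*counts), 2))
```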
Contextual cueing: implicit learning and memory of visual context guides spatial attention.
Chun, M M; Jiang, Y
1998-06-01
Global context plays an important, but poorly understood, role in visual tasks. This study demonstrates that a robust memory for visual context exists to guide spatial attention. Global context was operationalized as the spatial layout of objects in visual search displays. Half of the configurations were repeated across blocks throughout the entire session, and targets appeared within consistent locations in these arrays. Targets appearing in learned configurations were detected more quickly. This newly discovered form of search facilitation is termed contextual cueing. Contextual cueing is driven by incidentally learned associations between spatial configurations (context) and target locations. This benefit was obtained despite chance performance for recognizing the configurations, suggesting that the memory for context was implicit. The results show how implicit learning and memory of visual context can guide spatial attention towards task-relevant aspects of a scene.
A Study of the Development of the Concept of Mechanical Stability in Preschool Children
NASA Astrophysics Data System (ADS)
Hadzigeorgiou, Yannis
2002-06-01
The purpose of this study was to investigate whether preschool children (aged 4.5-6 years) can construct the concept of mechanical stability through structured hands-on activities involving the building of a tower on an inclined plane and through the use of cans of various sizes and weights. The data derived mainly from direct observation and the visual component of video tape recordings of thirty-seven children. These children formed three treatment groups which participated in structured-guided, structured-unguided and unstructured-unguided activities respectively. There is strong evidence that appropriately structured activities involving children's action on objects and the objects' immediate reaction, as well as children's opportunity to vary this action, complemented with a scaffolding strategy, can help children construct the concept of mechanical stability and apply it in other similar contexts. The paper also presents a theoretical framework for the teaching and learning of physics in the early years.
Mastering algebra retrains the visual system to perceive hierarchical structure in equations.
Marghetis, Tyler; Landy, David; Goldstone, Robert L
2016-01-01
Formal mathematics is a paragon of abstractness. It thus seems natural to assume that the mathematical expert should rely more on symbolic or conceptual processes, and less on perception and action. We argue instead that mathematical proficiency relies on perceptual systems that have been retrained to implement mathematical skills. Specifically, we investigated whether the visual system-in particular, object-based attention-is retrained so that parsing algebraic expressions and evaluating algebraic validity are accomplished by visual processing. Object-based attention occurs when the visual system organizes the world into discrete objects, which then guide the deployment of attention. One classic signature of object-based attention is better perceptual discrimination within, rather than between, visual objects. The current study reports that object-based attention occurs not only for simple shapes but also for symbolic mathematical elements within algebraic expressions-but only among individuals who have mastered the hierarchical syntax of algebra. Moreover, among these individuals, increased object-based attention within algebraic expressions is associated with a better ability to evaluate algebraic validity. These results suggest that, in mastering the rules of algebra, people retrain their visual system to represent and evaluate abstract mathematical structure. We thus argue that algebraic expertise involves the regimentation and reuse of evolutionarily ancient perceptual processes. Our findings implicate the visual system as central to learning and reasoning in mathematics, leading us to favor educational approaches to mathematics and related STEM fields that encourage students to adapt, not abandon, their use of perception.
Cognitive considerations for helmet-mounted display design
NASA Astrophysics Data System (ADS)
Francis, Gregory; Rash, Clarence E.
2010-04-01
Helmet-mounted displays (HMDs) are designed as a tool to increase performance. To achieve this, there must be an accurate transfer of information from the HMD to the user. Ideally, an HMD would be designed to accommodate the abilities and limitations of users' cognitive processes. It is not enough for the information (whether visual, auditory, or tactual) to be displayed; the information must be perceived, attended, remembered, and organized in a way that guides appropriate decision-making, judgment, and action. Following a general overview, specific subtopics of cognition, including perception, attention, memory, knowledge, decision-making, and problem solving are explored within the context of HMDs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cottam, Joseph A.; Blaha, Leslie M.
Systems have biases. Their interfaces naturally guide a user toward specific patterns of action. For example, modern word-processors and spreadsheets are both capable of word wrapping, checking spelling, storing tables, and calculating formulas. You could write a paper in a spreadsheet or do simple business modeling in a word-processor. However, their interfaces naturally communicate which function they are designed for. Visual analytic interfaces also have biases. In this paper, we outline why simple Markov models are a plausible tool for investigating that bias and how they might be applied. We also discuss some anticipated difficulties in such modeling and touch briefly on what some Markov model extensions might provide.
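As a minimal sketch of the kind of Markov modeling proposed here (the details are an assumption, not the paper's formulation), a first-order transition matrix can be estimated from logged action sequences and then compared across interfaces to characterize each one's bias:

```python
from collections import defaultdict

def transition_matrix(sequences):
    """Estimate first-order Markov transition probabilities from action logs.
    sequences: iterable of action-name lists, e.g. [["open", "type", "save"], ...]."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

# Hypothetical logs from two tools; the learned matrices can be compared row by row.
word_logs  = [["open", "type", "spellcheck", "save"], ["open", "type", "type", "save"]]
sheet_logs = [["open", "enter", "formula", "chart"], ["open", "enter", "formula", "save"]]
print(transition_matrix(word_logs))
print(transition_matrix(sheet_logs))
```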
Ecological Modeling Guide for Ecosystem Restoration and Management
2012-08-01
may result from proposed restoration and management actions. This report provides information to guide environmental planners in the selection, development, evaluation, and documentation of ecological models.
Li, Wenxun; Matin, Leonard
2005-03-01
Measurements were made of the accuracy of open-loop manual pointing and height-matching to a visual target whose elevation was perceptually mislocalized. Accuracy increased linearly with distance of the hand from the body, approaching complete accuracy at full extension; with the hand close to the body (within the midfrontal plane), the manual errors equaled the magnitude of the perceptual mislocalization. The visual inducing stimulus responsible for the perceptual errors was a single pitched-from-vertical line that was long (50 degrees), eccentrically-located (25 degrees horizontal), and viewed in otherwise total darkness. The line induced perceptual errors in the elevation of a small, circular visual target set to appear at eye level (VPEL), a setting that changed linearly with the change in the line's visual pitch as has been previously reported (pitch: -30 degrees top-backward to 30 degrees top-forward); the elevation errors measured by VPEL settings varied systematically with pitch through an 18-degree range. In a fourth experiment the visual inducing stimulus responsible for the perceptual errors was shown to induce separately-measured errors in the manual setting of the arm to feel horizontal that were also distance-dependent. The distance-dependence of the visually-induced changes in felt arm position accounts quantitatively for the distance-dependence of the manual errors in pointing/reaching and height matching to the visual target: the near equality of the changes in felt horizontal and changes in pointing/reaching with the finger at the end of the fully extended arm is responsible for the manual accuracy of the fully-extended point; with the finger in the midfrontal plane, their large difference is responsible for the inaccuracies of the midfrontal-plane point. The results are inconsistent with the widely held but controversial theory that visual spatial information employed for perception and action is dissociated and different, with no illusory visual influence on action. A different two-system theory, the Proximal/Distal model, employing the same signals from vision and from the body-referenced mechanism with different weights for different hand-to-body distances, accounts for both the perceptual and the manual results in the present experiments.
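One way to read the Proximal/Distal account, purely as an illustration and with an arbitrary linear weight that is my assumption rather than the model's actual form, is the following toy prediction of manual error as a function of hand-to-body distance:

```python
def predicted_manual_error(perceptual_error_deg, felt_arm_error_deg, hand_distance, arm_length):
    """Toy reading of the Proximal/Distal account: the manual error equals the
    perceptual mislocalization minus a distance-weighted share of the change in
    felt arm position. The linear weight w is an assumption for illustration."""
    w = min(max(hand_distance / arm_length, 0.0), 1.0)  # 0 at the midfrontal plane, 1 at full extension
    return perceptual_error_deg - w * felt_arm_error_deg

# Hypothetical values: a 9-degree perceptual error and an equal change in felt horizontal
# reproduce full-sized errors near the body and near-zero errors at full extension.
for d in (0.0, 0.3, 0.6, 1.0):
    print(d, predicted_manual_error(9.0, 9.0, d, 1.0))
```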
de Bruin, Natalie; Bryant, Devon C.; Gonzalez, Claudia L. R.
2014-01-01
Hemispatial neglect is a common outcome of stroke that is characterized by the inability to orient toward, and attend to stimuli in contralesional space. It is established that hemispatial neglect has a perceptual component, however, the presence and severity of motor impairments is controversial. Establishing the nature of space use and spatial biases during visually guided actions amongst healthy individuals is critical to understanding the presence of visuomotor deficits in patients with neglect. Accordingly, three experiments were conducted to investigate the effect of object spatial location on patterns of grasping. Experiment 1 required right-handed participants to reach and grasp for blocks in order to construct 3D models. The blocks were scattered on a tabletop divided into equal size quadrants: left near, left far, right near, and right far. Identical sets of building blocks were available in each quadrant. Space use was dynamic, with participants initially grasping blocks from right near space and tending to “neglect” left far space until the final stages of the task. Experiment 2 repeated the protocol with left-handed participants. Remarkably, left-handed participants displayed a similar pattern of space use to right-handed participants. In Experiment 3 eye movements were examined to investigate whether “neglect” for grasping in left far reachable space had its origins in attentional biases. It was found that patterns of eye movements mirrored patterns of reach-to-grasp movements. We conclude that there are spatial biases during visually guided grasping, specifically, a tendency to neglect left far reachable space, and that this “neglect” is attentional in origin. The results raise the possibility that visuomotor impairments reported among patients with right hemisphere lesions when working in contralesional space may result in part from this inherent tendency to “neglect” left far space irrespective of the presence of unilateral visuospatial neglect. PMID:24478751
Visually Impaired: Curriculum Guide.
ERIC Educational Resources Information Center
Alberta Dept. of Education, Edmonton.
The curriculum guide provides guidelines for developing academic and living vocational skills in visually handicapped students from preschool to adolescence. The document, divided into two sections, outlines objectives, teaching strategies, and materials for each skill area. Section 1 covers the following academic skills: communication,…
Baldassarre, Gianluca; Mannella, Francesco; Fiore, Vincenzo G; Redgrave, Peter; Gurney, Kevin; Mirolli, Marco
2013-05-01
Reinforcement (trial-and-error) learning in animals is driven by a multitude of processes. Most animals have evolved several sophisticated systems of 'extrinsic motivations' (EMs) that guide them to acquire behaviours allowing them to maintain their bodies, defend against threat, and reproduce. Animals have also evolved various systems of 'intrinsic motivations' (IMs) that allow them to acquire actions in the absence of extrinsic rewards. These actions are used later to pursue such rewards when they become available. Intrinsic motivations have been studied in Psychology for many decades and their biological substrates are now being elucidated by neuroscientists. In the last two decades, investigators in computational modelling, robotics and machine learning have proposed various mechanisms that capture certain aspects of IMs. However, we still lack models of IMs that attempt to integrate all key aspects of intrinsically motivated learning and behaviour while taking into account the relevant neurobiological constraints. This paper proposes a bio-constrained system-level model that contributes a major step towards this integration. The model focusses on three processes related to IMs and on the neural mechanisms underlying them: (a) the acquisition of action-outcome associations (internal models of the agent-environment interaction) driven by phasic dopamine signals caused by sudden, unexpected changes in the environment; (b) the transient focussing of visual gaze and actions on salient portions of the environment; (c) the subsequent recall of actions to pursue extrinsic rewards based on goal-directed reactivation of the representations of their outcomes. The tests of the model, including a series of selective lesions, show how the focussing processes lead to a faster learning of action-outcome associations, and how these associations can be recruited for accomplishing goal-directed behaviours. The model, together with the background knowledge reviewed in the paper, represents a framework that can be used to guide the design and interpretation of empirical experiments on IMs, and to computationally validate and further develop theories on them. Copyright © 2012 Elsevier Ltd. All rights reserved.
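A drastically simplified sketch of mechanism (a), under assumptions of my own rather than the paper's equations: a phasic surprise signal gates the strengthening of action-outcome associations, which can later be recalled by treating an outcome as a goal.

```python
import random

class ActionOutcomeLearner:
    """Toy surprise-gated learner: unexpected outcomes produce a phasic teaching
    signal that strengthens action->outcome associations (a simplification of
    mechanism (a) in the model described above)."""
    def __init__(self, actions, outcomes, lr=0.3):
        self.w = {(a, o): 0.0 for a in actions for o in outcomes}
        self.lr = lr

    def update(self, action, outcome):
        surprise = 1.0 - self.w[(action, outcome)]       # large when the outcome is unexpected
        self.w[(action, outcome)] += self.lr * surprise  # phasic-signal-gated learning
        return surprise

    def action_for_goal(self, goal):
        """Goal-directed recall: pick the action whose learned outcome best matches the goal."""
        return max((a for (a, o) in self.w if o == goal), key=lambda a: self.w[(a, goal)])

learner = ActionOutcomeLearner(actions=["press_lever", "pull_chain"], outcomes=["light_on", "tone"])
for _ in range(20):
    a = random.choice(["press_lever", "pull_chain"])
    o = "light_on" if a == "press_lever" else "tone"
    learner.update(a, o)
print(learner.action_for_goal("light_on"))  # -> "press_lever" after exploration
```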
Do Visual Illusions Probe the Visual Brain?: Illusions in Action without a Dorsal Visual Stream
ERIC Educational Resources Information Center
Coello, Yann; Danckert, James; Blangero, Annabelle; Rossetti, Yves
2007-01-01
Visual illusions have been shown to affect perceptual judgements more so than motor behaviour, which was interpreted as evidence for a functional division of labour within the visual system. The dominant perception-action theory argues that perception involves a holistic processing of visual objects or scenes, performed within the ventral,…
Grasping with the eyes of your hands: hapsis and vision modulate hand preference.
Stone, Kayla D; Gonzalez, Claudia L R
2014-02-01
Right-hand preference has been demonstrated for visually guided reaching and grasping. Grasping, however, requires the integration of both visual and haptic cues. To what extent does vision influence hand preference for grasping? Is there a hand preference for haptically guided grasping? Two experiments were designed to address these questions. In Experiment 1, individuals were tested in a reaching-to-grasp task with vision (sighted condition) and with hapsis (blindfolded condition). Participants were asked to put together 3D models using building blocks scattered on a tabletop. The models were simple, composed of ten blocks of three different shapes. Starting condition (Vision-First or Hapsis-First) was counterbalanced among participants. Right-hand preference was greater in visually guided grasping but only in the Vision-First group. Participants who initially built the models while blindfolded (Hapsis-First group) used their right hand significantly less for the visually guided portion of the task. To investigate whether grasping using hapsis modifies subsequent hand preference, participants received an additional haptic experience in a follow-up experiment. While blindfolded, participants manipulated the blocks in a container for 5 min prior to the task. This additional experience did not affect right-hand use on visually guided grasping but had a robust effect on haptically guided grasping. Together, the results demonstrate first that hand preference for grasping is influenced by both vision and hapsis, and second, they highlight how flexible this preference could be when modulated by hapsis.
Cognitive-motor integration deficits in young adult athletes following concussion.
Brown, Jeffrey A; Dalecki, Marc; Hughes, Cindy; Macpherson, Alison K; Sergio, Lauren E
2015-01-01
The ability to perform visually-guided motor tasks requires the transformation of visual information into programmed motor outputs. When the guiding visual information does not align spatially with the motor output, the brain processes rules to integrate the information for an appropriate motor response. Here, we look at how performance on such tasks is affected in young adult athletes with concussion history. Participants displaced a cursor from a central to peripheral targets on a vertical display by sliding their finger along a touch sensitive screen in one of two spatial planes. The addition of a memory component, along with variations in cursor feedback increased task complexity across conditions. Significant main effects between participants with concussion history and healthy controls without concussion history were observed in timing and accuracy measures. Importantly, the deficits were distinctly more pronounced for participants with concussion history compared to healthy controls, especially when the brain had to control movements having two levels of decoupling between vision and action. A discriminant analysis correctly classified athletes with a history of concussion based on task performance with an accuracy of 94 %, despite the majority of these athletes being rated asymptomatic by current standards. These findings correspond to our previous work with adults at risk of developing dementia, and support the use of cognitive motor integration as an enhanced assessment tool for those who may have mild brain dysfunction. Such a task may provide a more sensitive metric of performance relevant to daily function than what is currently in use, to assist in return to play/work/learn decisions.
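The 94% figure comes from a discriminant analysis on timing and accuracy measures; a generic, hypothetical reconstruction of such an analysis (feature names and data are placeholders, and scikit-learn is assumed) might look like this:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical features per athlete: [reaction time (ms), movement time (ms), endpoint error (mm)]
# measured in the most decoupled (plane-change plus feedback-reversed) condition.
X = np.array([[310, 620, 9.5], [295, 640, 10.1], [350, 720, 14.8], [365, 750, 15.5],
              [300, 610, 8.9], [355, 735, 15.0], [290, 605, 9.2], [360, 745, 16.2]])
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])  # 0 = no concussion history, 1 = concussion history

lda = LinearDiscriminantAnalysis()
print(cross_val_score(lda, X, y, cv=4).mean())  # cross-validated classification accuracy
```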
Intensive video gaming improves encoding speed to visual short-term memory in young male adults.
Wilms, Inge L; Petersen, Anders; Vangkilde, Signe
2013-01-01
The purpose of this study was to measure the effect of action video gaming on central elements of visual attention using Bundesen's (1990) Theory of Visual Attention. To examine the cognitive impact of action video gaming, we tested basic functions of visual attention in 42 young male adults. Participants were divided into three groups depending on the amount of time spent playing action video games: non-players (<2h/month, N=12), casual players (4-8h/month, N=10), and experienced players (>15h/month, N=20). All participants were tested in three tasks which tap central functions of visual attention and short-term memory: a test based on the Theory of Visual Attention (TVA), an enumeration test and finally the Attentional Network Test (ANT). The results show that action video gaming does not seem to impact the capacity of visual short-term memory. However, playing action video games does seem to improve the encoding speed of visual information into visual short-term memory and the improvement does seem to depend on the time devoted to gaming. This suggests that intense action video gaming improves basic attentional functioning and that this improvement generalizes into other activities. The implications of these findings for cognitive rehabilitation training are discussed. Copyright © 2012 Elsevier B.V. All rights reserved.
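In TVA, encoding into visual short-term memory is modeled as an exponential race whose rates depend on processing capacity and attentional weights; the sketch below paraphrases that standard formulation (capacity limit K omitted, parameters hypothetical) rather than the authors' fitting code:

```python
import math

def encoding_probability(C, weights, item, exposure_ms, t0_ms=10.0):
    """TVA-style exponential race (capacity limit K ignored for simplicity):
    v_x = C * w_x / sum(w); P(encoded) = 1 - exp(-v_x * (t - t0)) for t > t0."""
    v = C * weights[item] / sum(weights.values())  # processing rate of this item (items/s)
    t = max(exposure_ms - t0_ms, 0.0) / 1000.0     # effective exposure after threshold t0
    return 1.0 - math.exp(-v * t)

# Hypothetical parameters: total processing capacity C = 40 items/s, equal attentional weights;
# a larger C (faster encoding speed) raises the probability at every exposure duration.
weights = {f"item{i}": 1.0 for i in range(6)}
for dur in (50, 100, 200):
    print(dur, round(encoding_probability(40.0, weights, "item0", dur), 3))
```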
Shade matching assisted by digital photography and computer software.
Schropp, Lars
2009-04-01
To evaluate the efficacy of digital photographs and graphic computer software for color matching compared to conventional visual matching. The shade of a tab from a shade guide (Vita 3D-Master Guide) placed in a phantom head was matched to a second guide of the same type by nine observers. This was done for twelve selected shade tabs (tests). The shade-matching procedure was performed visually in a simulated clinic environment and with digital photographs, and the time spent for both procedures was recorded. An alternative arrangement of the shade tabs was used in the digital photographs. In addition, a graphic software program was used for color analysis. Hue, chroma, and lightness values of the test tab and all tabs of the second guide were derived from the digital photographs. According to the CIE L*C*h* color system, the color differences between the test tab and tabs of the second guide were calculated. The shade guide tab that deviated least from the test tab was determined to be the match. Shade matching performance by means of graphic software was compared with the two visual methods and tested by Chi-square tests (alpha= 0.05). Eight of twelve test tabs (67%) were matched correctly by the computer software method. This was significantly better (p < 0.02) than the performance of the visual shade matching methods conducted in the simulated clinic (32% correct match) and with photographs (28% correct match). No correlation between time consumption for the visual shade matching methods and frequency of correct match was observed. Shade matching assisted by digital photographs and computer software was significantly more reliable than by conventional visual methods.
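The software-assisted match described here amounts to a nearest-neighbor search on CIE L*C*h* color differences; a minimal sketch (tab values are hypothetical) is:

```python
import math

def delta_e_lch(lch1, lch2):
    """CIE L*C*h* color difference: dE = sqrt(dL^2 + dC^2 + dH^2),
    with dH = 2*sqrt(C1*C2)*sin(dh/2) converting the hue-angle difference."""
    L1, C1, h1 = lch1
    L2, C2, h2 = lch2
    dL, dC = L1 - L2, C1 - C2
    dh = math.radians(h1 - h2)
    dH = 2.0 * math.sqrt(C1 * C2) * math.sin(dh / 2.0)
    return math.sqrt(dL**2 + dC**2 + dH**2)

def best_match(test_tab, guide_tabs):
    """Return the guide tab whose color deviates least from the test tab."""
    return min(guide_tabs, key=lambda name: delta_e_lch(test_tab, guide_tabs[name]))

# Hypothetical L*, C*, h(deg) values extracted from calibrated photographs:
guide = {"2M2": (73.0, 18.5, 86.0), "3M2": (69.5, 20.1, 84.5), "3M3": (68.8, 24.0, 83.0)}
print(best_match((69.0, 20.5, 85.0), guide))
```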
Student participation in World Wide Web-based curriculum development of general chemistry
NASA Astrophysics Data System (ADS)
Hunter, William John Forbes
1998-12-01
This thesis describes an action research investigation of improvements to instruction in General Chemistry at Purdue University. Specifically, the study was conducted to guide continuous reform of curriculum materials delivered via the World Wide Web by involving students, instructors, and curriculum designers. The theoretical framework for this study was based upon constructivist learning theory, and knowledge claims were developed using an inductive analysis procedure. The results of this study are assertions made in three domains: learning chemistry content via the World Wide Web, learning about learning via the World Wide Web, and learning about participation in an action research project. In the chemistry content domain, students were able to learn chemical concepts that utilized 3-dimensional visualizations, but not textual and graphical information delivered via the Web. In the learning via the Web domain, the use of feedback, the placement of supplementary aids, navigation, and the perception of conceptual novelty were all important to students' use of the Web. In the participation in action research domain, students learned about the complexity of curriculum development and valued their empowerment as part of the process.
Libby, Lisa K; Shaeffer, Eric M; Eibach, Richard P
2009-11-01
Actions do not have inherent meaning but rather can be interpreted in many ways. The interpretation a person adopts has important effects on a range of higher order cognitive processes. One dimension on which interpretations can vary is the extent to which actions are identified abstractly--in relation to broader goals, personal characteristics, or consequences--versus concretely, in terms of component processes. The present research investigated how visual perspective (own 1st-person vs. observer's 3rd-person) in action imagery is related to action identification level. A series of experiments measured and manipulated visual perspective in mental and photographic images to test the connection with action identification level. Results revealed a bidirectional causal relationship linking 3rd-person images and abstract action identifications. These findings highlight the functional role of visual imagery and have implications for understanding how perspective is involved in action perception at the social, cognitive, and neural levels. Copyright 2009 APA
Looking and touching: what extant approaches reveal about the structure of early word knowledge.
Hendrickson, Kristi; Mitsven, Samantha; Poulin-Dubois, Diane; Zesiger, Pascal; Friend, Margaret
2015-09-01
The goal of the current study is to assess the temporal dynamics of vision and action to evaluate the underlying word representations that guide infants' responses. Sixteen-month-old infants participated in a two-alternative forced-choice word-picture matching task. We conducted a moment-by-moment analysis of looking and reaching behaviors as they occurred in tandem to assess the speed with which a prompted word was processed (visual reaction time) as a function of the type of haptic response: Target, Distractor, or No Touch. Visual reaction times (visual RTs) were significantly slower during No Touches compared to Distractor and Target Touches, which were statistically indistinguishable. The finding that visual RTs were significantly faster during Distractor Touches compared to No Touches suggests that incorrect and absent haptic responses appear to index distinct knowledge states: incorrect responses are associated with partial knowledge whereas absent responses appear to reflect a true failure to map lexical items to their target referents. Further, we found that those children who were faster at processing words were also those children who exhibited better haptic performance. This research provides a methodological clarification on knowledge measured by the visual and haptic modalities and new evidence for a continuum of word knowledge in the second year of life. © 2014 The Authors Developmental Science Published by John Wiley & Sons Ltd.
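At the analysis level, this comparison is a per-trial visual RT grouped by haptic response type plus a between-child correlation; a generic bookkeeping sketch (column names and values are hypothetical, pandas assumed) follows:

```python
import pandas as pd

# Hypothetical trial-level data: one row per trial.
trials = pd.DataFrame({
    "child":     ["c1", "c1", "c2", "c2", "c3", "c3"],
    "touch":     ["Target", "No Touch", "Distractor", "Target", "No Touch", "Target"],
    "visual_rt": [640, 980, 700, 610, 1020, 655],  # ms from word onset to target fixation
})

# Mean visual RT by haptic response category (Target vs. Distractor vs. No Touch).
print(trials.groupby("touch")["visual_rt"].mean())

# Per-child processing speed vs. haptic accuracy (proportion of Target touches).
by_child = trials.groupby("child").agg(
    mean_rt=("visual_rt", "mean"),
    haptic_acc=("touch", lambda s: (s == "Target").mean()),
)
print(by_child.corr(method="spearman"))
```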
Pavlidou, Anastasia; Schnitzler, Alfons; Lange, Joachim
2014-05-01
The neural correlates of action recognition have been widely studied in visual and sensorimotor areas of the human brain. However, the role of neuronal oscillations involved during the process of action recognition remains unclear. Here, we were interested in how the plausibility of an action modulates neuronal oscillations in visual and sensorimotor areas. Subjects viewed point-light displays (PLDs) of biomechanically plausible and implausible versions of the same actions. Using magnetoencephalography (MEG), we examined dynamic changes of oscillatory activity during these action recognition processes. While both actions elicited oscillatory activity in visual and sensorimotor areas in several frequency bands, a significant difference was confined to the beta-band (∼20 Hz). An increase of power for plausible actions was observed in left temporal, parieto-occipital and sensorimotor areas of the brain, in the beta-band in successive order between 1650 and 2650 msec. These distinct spatio-temporal beta-band profiles suggest that the action recognition process is modulated by the degree of biomechanical plausibility of the action, and that spectral power in the beta-band may provide a functional interaction between visual and sensorimotor areas in humans. Copyright © 2014 Elsevier Ltd. All rights reserved.
Action Research: An Educational Leader's Guide to School Improvement. Second Edition.
ERIC Educational Resources Information Center
Glanz, Jeffrey
This book, in its second edition, is intended as a practical guide to conducting action research in schools--it outlines the process of designing and reporting an action research project. Contending that action research can be used as a powerful tool that can contribute to school renewal and instructional improvement, the book defines and presents…
Commercial Art I and Commercial Art II: An Instructional Guide.
ERIC Educational Resources Information Center
Montgomery County Public Schools, Rockville, MD.
A teacher's guide for two sequential one-year commercial art courses for high school students is presented. Commercial Art I contains three units: visual communication, product design, and environmental design. Students study visual communication by analyzing advertising techniques, practicing fundamental drawing and layout techniques, creating…
Mann, David L; Abernethy, Bruce; Farrow, Damian
2010-07-01
Coupled interceptive actions are understood to be the result of neural processing (and visual information) that is distinct from that used for uncoupled perceptual responses. To examine the visual information used for action and perception, skilled cricket batters anticipated the direction of balls bowled toward them using a coupled movement (an interceptive action that preserved the natural coupling between perception and action) or an uncoupled (verbal) response, in each of four different visual blur conditions (plano, +1.00, +2.00, +3.00). Coupled responses were found to be better than uncoupled ones, with the blurring of vision found to result in different effects for the coupled and uncoupled response conditions. Low levels of visual blur did not affect coupled anticipation, a finding consistent with the comparatively poorer visual information on which online interceptive actions are proposed to rely. In contrast, some evidence was found to suggest that low levels of blur may enhance the uncoupled verbal perception of movement.
Pavan, Andrea; Boyce, Matthew; Ghin, Filippo
2016-10-01
Playing action video games enhances visual motion perception. However, there is psychophysical evidence that action video games do not improve motion sensitivity for translational global moving patterns presented in fovea. This study investigates global motion perception in action video game players and compares their performance to that of non-action video game players and non-video game players. Stimuli were random dot kinematograms presented in the parafovea. Observers discriminated the motion direction of a target random dot kinematogram presented in one of the four visual quadrants. Action video game players showed lower motion coherence thresholds than the other groups. However, when the task was performed at threshold, we did not find differences between groups in terms of distributions of reaction times. These results suggest that action video games improve visual motion sensitivity in the near periphery of the visual field, rather than speed response. © The Author(s) 2016.
Visual variability affects early verb learning.
Twomey, Katherine E; Lush, Lauren; Pearce, Ruth; Horst, Jessica S
2014-09-01
Research demonstrates that within-category visual variability facilitates noun learning; however, the effect of visual variability on verb learning is unknown. We habituated 24-month-old children to a novel verb paired with an animated star-shaped actor. Across multiple trials, children saw either a single action from an action category (identical actions condition, for example, travelling while repeatedly changing into a circle shape) or multiple actions from that action category (variable actions condition, for example, travelling while changing into a circle shape, then a square shape, then a triangle shape). Four test trials followed habituation. One paired the habituated verb with a new action from the habituated category (e.g., 'dacking' + pentagon shape) and one with a completely novel action (e.g., 'dacking' + leg movement). The others paired a new verb with a new same-category action (e.g., 'keefing' + pentagon shape), or a completely novel category action (e.g., 'keefing' + leg movement). Although all children discriminated novel verb/action pairs, children in the identical actions condition discriminated trials that included the completely novel verb, while children in the variable actions condition discriminated the out-of-category action. These data suggest that - as in noun learning - visual variability affects verb learning and children's ability to form action categories. © 2014 The British Psychological Society.
Creating Visuals for TV; A Guide for Educators.
ERIC Educational Resources Information Center
Spear, James
There are countless ways educators can improve the quality of their educational television offerings. The Guide, planned especially for the television teacher or audiovisual director, particularly those approaching the television medium for the first time, is designed to acquaint the reader with production techniques for effective visuals to…
City: Images of America. Elementary Version.
ERIC Educational Resources Information Center
Franklin, Edward; And Others
Designed to accompany an audiovisual filmstrip series devoted to presenting a visual history of life in America, this guide contains an elementary social studies (grades 2-6) unit on the American city over the last century. Using authentic visuals including paintings, posters, advertising, documentary photography, and cartoons, the guide offers…
Learning to Verbally & Visually Communicate the Metalworking Way.
ERIC Educational Resources Information Center
California State Dept. of Education, Sacramento. Div. of Vocational Education.
This curriculum guide, one of 15 volumes written for field test use with educationally disadvantaged industrial education students needing additional instruction in the basic skill areas, deals with helping students develop basic verbal and visual communication skills while studying metalworking. Addressed in the individual units of the guide are…
Coronary angioscopy: a monorail angioscope with movable guide wire.
Nanto, S; Ohara, T; Mishima, M; Hirayama, A; Komamura, K; Matsumura, Y; Kodama, K
1991-03-01
A new angioscope was devised for easier visualization of the coronary artery. In its tip, the angioscope (Olympus) with an outer diameter of 0.8 mm had a metal lumen, through which a 0.014-in steerable guide wire passed. Using an 8F guiding catheter and a guide wire, it was introduced into the distal coronary artery. With injection of warmed saline through the guiding catheter, the coronary segments were visualized. Of the 70 attempted vessels (32 left anterior descending [LAD], 10 right coronary [RCA], 28 left circumflex [LCX]) from 48 patients, 60 vessels (86%) were successfully examined. Of the 22 patients who underwent attempted examination of both the LAD and LCX, both coronary arteries were visualized in 19 (86%). In 40 patients, the diagonal branch or the obtuse marginal branch was located proximal to the lesion; in 34 of these patients (85%), the angioscope was inserted beyond these branches. In 12 very tortuous vessels, eight vessels (67%) were examined. In conclusion, the new monorail coronary angioscope with a movable guide wire is useful for examining stenotic lesions of the coronary artery.
Intraoperative positioning of the hindfoot with the hindfoot alignment guide: a pilot study.
Frigg, Arno; Jud, Lukas; Valderrabano, Victor
2014-01-01
In a previous study, intraoperative positioning of the hindfoot by visual means resulted in the wrong varus/valgus position by 8 degrees and a relatively large standard deviation of 8 degrees. Thus, new intraoperative means are needed to improve the precision of hindfoot surgery. We therefore sought a hindfoot alignment guide that would be as simple as the alignment guides used in total knee arthroplasty. A novel hindfoot alignment guide (HA guide) has been developed that projects the mechanical axis from the tibia down to the heel. The HA guide enables the positioning of the hindfoot in the desired varus/valgus position and in plantigrade position in the lateral plane. The HA guide was used intraoperatively from May through November 2011 in 11 complex patients with simultaneous correction of the supramalleolar, tibiotalar, and inframalleolar alignment. Pre- and postoperative Saltzman views were taken and the position was measured. The HA guide significantly improved the intraoperative positioning compared with visual means: The accuracy with the HA guide was 4.5 ± 5.1 degrees (mean ± standard deviation) and without the HA guide 9.4 ± 5.5 degrees (P < .05). In 7 of 11 patients, the preoperative plan was changed because of the HA guide (2 avoided osteotomies, 5 additional osteotomies). The HA guide helped to position the hindfoot intraoperatively with greater precision than visual means. The HA guide was especially useful for multilevel corrections in which the need for and the amount of a simultaneous osteotomy had to be evaluated intraoperatively. Level IV, case series.
Action Learning. A Guide for Professional, Management and Educational Development. Second Edition.
ERIC Educational Resources Information Center
McGill, Ian; Beaty, Liz
Action learning is a process of learning and reflection that happens with the support of a group of colleagues ("set") working with real problems with the intention of getting things done. This guide is for those who want to practice action learning. It can be used to introduce the concepts of action learning to others and as a manual…
Shi, Yue; Queener, Hope M.; Marsack, Jason D.; Ravikumar, Ayeswarya; Bedell, Harold E.; Applegate, Raymond A.
2013-01-01
Dynamic registration uncertainty of a wavefront-guided correction with respect to underlying wavefront error (WFE) inevitably decreases retinal image quality. A partial correction may improve average retinal image quality and visual acuity in the presence of registration uncertainties. The purpose of this paper is to (a) develop an algorithm to optimize wavefront-guided correction that improves visual acuity given registration uncertainty and (b) test the hypothesis that these corrections provide improved visual performance in the presence of these uncertainties as compared to a full-magnitude correction or a correction by Guirao, Cox, and Williams (2002). A stochastic parallel gradient descent (SPGD) algorithm was used to optimize the partial-magnitude correction for three keratoconic eyes based on measured scleral contact lens movement. Given its high correlation with logMAR acuity, the retinal image quality metric log visual Strehl was used as a predictor of visual acuity. Predicted values of visual acuity with the optimized corrections were validated by regressing measured acuity loss against predicted loss. Measured loss was obtained from normal subjects viewing acuity charts that were degraded by the residual aberrations generated by the movement of the full-magnitude correction, the correction by Guirao, and optimized SPGD correction. Partial-magnitude corrections optimized with an SPGD algorithm provide at least one line improvement of average visual acuity over the full magnitude and the correction by Guirao given the registration uncertainty. This study demonstrates that it is possible to improve the average visual acuity by optimizing wavefront-guided correction in the presence of registration uncertainty. PMID:23757512
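SPGD itself is simple to state: perturb all coefficients simultaneously, measure the change in the image-quality metric, and step in proportion to that change times the perturbation. The sketch below shows the generic update loop; the metric passed in is a placeholder for log visual Strehl averaged over measured registration (lens-movement) samples, not the authors' implementation:

```python
import numpy as np

def spgd_optimize(c0, expected_log_visual_strehl, gain=0.5, sigma=0.05, iters=500, rng=None):
    """Stochastic parallel gradient descent on wavefront-correction coefficients c.
    Each iteration: apply +/- the same random perturbation to all coefficients,
    then update c along the perturbation scaled by the observed metric change."""
    rng = rng or np.random.default_rng(0)
    c = np.array(c0, dtype=float)
    for _ in range(iters):
        delta = sigma * rng.choice([-1.0, 1.0], size=c.shape)  # parallel perturbation
        dj = expected_log_visual_strehl(c + delta) - expected_log_visual_strehl(c - delta)
        c += gain * dj * delta                                  # ascend the metric
    return c

# Placeholder metric: in the study this would be log visual Strehl of the residual WFE,
# averaged over measured registration samples. Here, a toy quadratic stands in.
target = np.array([0.8, -0.3, 0.1])            # hypothetical "best partial correction"
toy_metric = lambda c: -np.sum((c - target) ** 2)
print(spgd_optimize(np.zeros(3), toy_metric))
```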
Laby, Daniel M
2018-05-17
Despite our inability to attenuate the course of many ocular diseases that can ultimately lead to loss or significantly decreased visual function, this report describes a potential technique to aid such patients in maximizing the use of the vision that remains. The aim of this study was to demonstrate the applicability of utilizing sports vision training to improve objective and subjective visuomotor function in a low-vision patient. A 37-year-old woman with Usher syndrome presented with reduced central visual acuity and visual field. Although we were unable to reverse the damage resulting from her diagnosis, we were able to improve the use of the remaining vision. A 27 to 31% improvement in hand-eye coordination was achieved along with a 41% improvement in object tracking and visual concentration. Most importantly, following the 14-week training period, there was also a subjective improvement in the patient's appreciation of her visual ability. The sports vision literature cites many examples in which sports vision training is useful in improving visuomotor and on-field performance. We hypothesized that these techniques may be used to aid not only athletes but also patients with low vision. Despite suffering from reduced acuity and a limited visual field, these patients often still have a significant amount of vision ability that can be used to guide motor actions. Using techniques to increase the efficient use of this remaining vision may reduce the impact of the reduced visual function and aid in activities of daily living.
Visual cortex activation in kinesthetic guidance of reaching.
Darling, W G; Seitz, R J; Peltier, S; Tellmann, L; Butler, A J
2007-06-01
The purpose of this research was to determine the cortical circuit involved in encoding and controlling kinesthetically guided reaching movements. We used (15)O-butanol positron emission tomography in ten blindfolded able-bodied volunteers in a factorial experiment in which arm (left/right) used to encode target location and to reach back to the remembered location and hemispace of target location (left/right side of midsagittal plane) varied systematically. During encoding of a target the experimenter guided the hand to touch the index fingertip to an external target and then returned the hand to the start location. After a short delay the subject voluntarily moved the same hand back to the remembered target location. SPM99 analysis of the PET data contrasting left versus right hand reaching showed increased (P < 0.05, corrected) neural activity in the sensorimotor cortex, premotor cortex and posterior parietal lobule (PPL) contralateral to the moving hand. Additional neural activation was observed in prefrontal cortex and visual association areas of occipital and parietal lobes contralateral and ipsilateral to the reaching hand. There was no statistically significant effect of target location in left versus right hemispace nor was there an interaction of hand and hemispace effects. Structural equation modeling showed that parietal lobe visual association areas contributed to kinesthetic processing by both hands but occipital lobe visual areas contributed only during dominant hand kinesthetic processing. This visual processing may also involve visualization of kinesthetically guided target location and use of the same network employed to guide reaches to visual targets when reaching to kinesthetic targets. The present work clearly demonstrates a network for kinesthetic processing that includes higher visual processing areas in the PPL for both upper limbs and processing in occipital lobe visual areas for the dominant limb.
Shaping Attention with Reward: Effects of Reward on Space- and Object-Based Selection
Shomstein, Sarah; Johnson, Jacoba
2014-01-01
The contribution of rewarded actions to automatic attentional selection remains obscure. We hypothesized that some forms of automatic orienting, such as object-based selection, can be completely abandoned in lieu of reward maximizing strategy. While presenting identical visual stimuli to the observer, in a set of two experiments, we manipulate what is being rewarded (different object targets or random object locations) and the type of reward received (money or points). It was observed that reward alone guides attentional selection, entirely predicting behavior. These results suggest that guidance of selective attention, while automatic, is flexible and can be adjusted in accordance with external non-sensory reward-based factors. PMID:24121412
Vukich, John A
2009-07-01
To describe the role played by the International Medical Advisory Board (IMAB) in clinical and corporate governance at Optical Express, a corporate provider of refractive surgery. A review of goals, objectives, and actions of the IMAB. The IMAB has contributed to study design, data analysis, and selection of instruments and procedures. Through interactions with Optical Express corporate and clinical staff, the IMAB has supported management's effort to craft a corporate culture focused on continuous improvement in the safety and visual outcomes of refractive surgery. The IMAB has fashioned significant changes in corporate policies and procedures and has had an impact on corporate culture at Optical Express.
Visual context modulates potentiation of grasp types during semantic object categorization.
Kalénine, Solène; Shapiro, Allison D; Flumini, Andrea; Borghi, Anna M; Buxbaum, Laurel J
2014-06-01
Substantial evidence suggests that conceptual processing of manipulable objects is associated with potentiation of action. Such data have been viewed as evidence that objects are recognized via access to action features. Many objects, however, are associated with multiple actions. For example, a kitchen timer may be clenched with a power grip to move it but pinched with a precision grip to use it. The present study tested the hypothesis that action evocation during conceptual object processing is responsive to the visual scene in which objects are presented. Twenty-five healthy adults were asked to categorize object pictures presented in different naturalistic visual contexts that evoke either move- or use-related actions. Categorization judgments (natural vs. artifact) were performed by executing a move- or use-related action (clench vs. pinch) on a response device, and response times were assessed as a function of contextual congruence. Although the actions performed were irrelevant to the categorization judgment, responses were significantly faster when actions were compatible with the visual context. This compatibility effect was largely driven by faster pinch responses when objects were presented in use-compatible, as compared with move-compatible, contexts. The present study is the first to highlight the influence of visual scene on stimulus-response compatibility effects during semantic object processing. These data support the hypothesis that action evocation during conceptual object processing is biased toward context-relevant actions.
How does visual manipulation affect obstacle avoidance strategies used by athletes?
Bijman, M P; Fisher, J J; Vallis, L A
2016-01-01
Research examining our ability to avoid obstacles in our path has stressed the importance of visual input. The aim of this study was to determine if athletes playing varsity-level field sports, who rely on visual input to guide motor behaviour, are more able to guide their foot over obstacles compared to recreational individuals. While wearing kinematic markers, eight varsity athletes and eight age-matched controls (aged 18-25) walked along a walkway and stepped over stationary obstacles (180° motion arc). Visual input was manipulated using PLATO visual goggles three or two steps pre-obstacle crossing and compared to trials where vision was given throughout. A main effect between groups for peak trail toe elevation was shown with greater values generated by the controls for all crossing conditions during full vision trials only. This may be interpreted as athletes not perceiving this obstacle as an increased threat to their postural stability. Collectively, findings suggest the athletic group is able to transfer their abilities to non-specific conditions during full vision trials; however, varsity-level athletes were equally reliant on visual cues for these visually guided stepping tasks as their performance was similar to the controls when vision is removed.
What and where information in the caudate tail guides saccades to visual objects
Yamamoto, Shinya; Monosov, Ilya E.; Yasuda, Masaharu; Hikosaka, Okihide
2012-01-01
We understand the world by making saccadic eye movements to various objects. However, it is unclear how a saccade can be aimed at a particular object, because two kinds of visual information, what the object is and where it is, are processed separately in the dorsal and ventral visual cortical pathways. Here we provide evidence suggesting that a basal ganglia circuit through the tail of the monkey caudate nucleus (CDt) guides such object-directed saccades. First, many CDt neurons responded to visual objects depending on where and what the objects were. Second, electrical stimulation in the CDt induced saccades whose directions matched the preferred directions of neurons at the stimulation site. Third, many CDt neurons increased their activity before saccades directed to the neurons’ preferred objects and directions in a free-viewing condition. Our results suggest that CDt neurons receive both ‘what’ and ‘where’ information and guide saccades to visual objects. PMID:22875934
ERIC Educational Resources Information Center
Rossetto, Marietta; Chiera-Macchia, Antonella
2011-01-01
This study investigated the use of comics (Cary, 2004) in a guided writing experience in secondary school Italian language learning. The main focus of the peer group interaction task included the exploration of visual sequencing and visual integration (Bailey, O'Grady-Jones, & McGown, 1995) using image and text to create a comic strip narrative in…
Chen, Juan; Sperandio, Irene; Goodale, Melvyn Alan
2015-01-01
Objects rarely appear in isolation in natural scenes. Although many studies have investigated how nearby objects influence perception in cluttered scenes (i.e., crowding), none has studied how nearby objects influence visually guided action. In Experiment 1, we found that participants could scale their grasp to the size of a crowded target even when they could not perceive its size, demonstrating for the first time that neurologically intact participants can use visual information that is not available to conscious report to scale their grasp to real objects in real scenes. In Experiments 2 and 3, we found that changing the eccentricity of the display and the orientation of the flankers had no effect on grasping but strongly affected perception. The differential effects of eccentricity and flanker orientation on perception and grasping show that the known differences in retinotopy between the ventral and dorsal streams are reflected in the way in which people deal with targets in cluttered scenes. © The Author(s) 2014.
Safety Action; Traffic and Pedestrian Safety. A Guide for Teachers in the Elementary Schools.
ERIC Educational Resources Information Center
Department of Transportation, Washington, DC.
GRADES OR AGES: Elementary, grades 1-6. SUBJECT MATTER: Safety action, traffic and pedestrian safety. ORGANIZATION AND PHYSICAL APPEARANCE: After introductory material explaining the philosophy of the guide, the elementary school child, characteristics of children as related to safety, and the responsibility of the safety team, the guide has…
Wilderness and backcountry site restoration guide
Lisa Therrell; David Cole; Victor Claassen; Chris Ryan; Mary Ann Davies
2006-01-01
This comprehensive guide focuses on restoration of small-scale impact caused by human actions in wilderness and backcountry areas. The guide's goals are to: 1) Help practitioners develop plans that thoroughly address the question of whether site restoration is the best management action and, if so, develop a site-specific restoration plan that incorporates...
Women's Action Almanac: A Complete Resource Guide.
ERIC Educational Resources Information Center
Williamson, Jane, Ed.; And Others
Designed to provide answers to questions on women's issues and programs, the guide is arranged into two parts. Part 1, which comprises about three-fourths of the guide, contains background information and answers to often asked questions on 84 issues, such as abortion, affirmative action, battered women, divorce, incest, and insurance. Each entry…
Cerebral activations related to writing and drawing with each hand.
Potgieser, Adriaan R E; van der Hoorn, Anouk; de Jong, Bauke M
2015-01-01
Writing is a sequential motor action based on sensorimotor integration in visuospatial and linguistic functional domains. To test the hypothesis of lateralized circuitry concerning spatial and language components involved in such action, we employed an fMRI paradigm including writing and drawing with each hand. In this way, writing-related contributions of dorsal and ventral premotor regions in each hemisphere were assessed, together with effects in wider distributed circuitry. Given a right-hemisphere dominance for spatial action, right dorsal premotor cortex dominance was expected in left-hand writing while dominance of the left ventral premotor cortex was expected during right-hand writing. Sixteen healthy right-handed subjects were scanned during audition-guided writing of short sentences and simple figure drawing without visual feedback. Tapping with a pencil served as a basic control task for the two higher-order motor conditions. Activation differences were assessed with Statistical Parametric Mapping (SPM). Writing and drawing showed parietal-premotor and posterior inferior temporal activations in both hemispheres when compared to tapping. Drawing activations were rather symmetrical for each hand. Activations in left- and right-hand writing were left-hemisphere dominant, while right dorsal premotor activation only occurred in left-hand writing, supporting a spatial motor contribution of particularly the right hemisphere. Writing contrasted to drawing revealed left-sided activations in the dorsal and ventral premotor cortex, Broca's area, pre-Supplementary Motor Area and posterior middle and inferior temporal gyri, without parietal activation. The audition-driven postero-inferior temporal activations indicated retrieval of virtual visual form characteristics in writing and drawing, with additional activation concerning word form in the left hemisphere. Similar parietal processing in writing and drawing pointed at a common mechanism by which such visually formatted information is used for subsequent sensorimotor integration along a dorsal visuomotor pathway. In this, the left posterior middle temporal gyrus subserves phonological-orthographical conversion, dissociating dorsal parietal-premotor circuitry from perisylvian circuitry including Broca's area.
ERIC Educational Resources Information Center
Umansky, Warren; And Others
The guide offers a means for evaluating specific learning characteristics of visually impaired children at three levels: prereadiness (prekindergarten), readiness (kindergarten), and academic (primary grades). Items are designed to be administered by informal observation and structured testing. Score sheets contain space for reporting two testing…
Food: Images of America. Social Studies Unit, Elementary Grades 2-6.
ERIC Educational Resources Information Center
Franklin, Edward; And Others
Designed to accompany an audiovisual filmstrip series devoted to presenting a visual history of life in America, this guide contains an elementary school (grades 2-6) unit on American food over the last century. Using authentic visuals including paintings, advertising, label art, documentary photography, and a movie still, the guide offers…
An Annotated Guide to Audio-Visual Materials for Teaching Shakespeare.
ERIC Educational Resources Information Center
Albert, Richard N.
Audio-visual materials, found in a variety of periodicals, catalogs, and reference works, are listed in this guide to expedite the process of finding appropriate classroom materials for a study of William Shakespeare in the classroom. Separate listings of films, filmstrips, and recordings are provided, with subdivisions for "The Plays"…
The Computer: An Art Tool for the Visually Gifted. A Curriculum Guide.
ERIC Educational Resources Information Center
Suter, Thomas E.; Bibbey, Melissa R.
This curriculum guide, developed and used in Wheelersburg (Ohio) with visually talented students, shows how such students can be taught to utilize computers as an art medium and tool. An initial section covers program implementation including setup, class structure and scheduling, teaching strategies, and housecleaning and maintenance. Seventeen…
Sáles, Christopher S; Manche, Edward E
2014-01-01
Background: To compare wavefront (WF)-guided and WF-optimized laser in situ keratomileusis (LASIK) in hyperopes with respect to the parameters of safety, efficacy, predictability, refractive error, uncorrected distance visual acuity, corrected distance visual acuity, contrast sensitivity, and higher order aberrations. Methods: Twenty-two eyes of eleven participants with hyperopia with or without astigmatism were prospectively randomized to receive WF-guided LASIK with the VISX CustomVue S4 IR or WF-optimized LASIK with the WaveLight Allegretto Eye-Q 400 Hz. LASIK flaps were created using the 150-kHz IntraLase iFS. Evaluations included measurement of uncorrected distance visual acuity, corrected distance visual acuity, <5% and <25% contrast sensitivity, and WF aberrometry. Patients also completed a questionnaire detailing symptoms on a quantitative grading scale. Results: There were no statistically significant differences between the groups for any of the variables studied after 12 months of follow-up (all P>0.05). Conclusion: This comparative case series of 11 subjects with hyperopia showed that WF-guided and WF-optimized LASIK had similar clinical outcomes at 12 months. PMID:25419115
Chen, Kai-Hsiang; Lin, Po-Chieh; Yang, Bing-Shiang; Chen, Yu-Jung
2018-06-01
In a spiral task, the accuracy of the spiral trajectory, which is affected by tracing or tracking ability, differs between patients with Parkinson's disease (PD) and essential tremor (ET). However, few studies have analyzed velocity differences between the groups during this task. This study aimed to examine such velocity differences using a digitized tablet. Fourteen PD, 12 ET, and 12 control participants performed two tasks: tracing a given spiral (T1) and following a guiding point (T2). A digitized tablet was used to record movements and trajectory. Effects of direct visual feedback on intergroup and intragroup velocity were measured. Although PD patients had a significantly lower T1 velocity than the control group (p < 0.05), they could match the velocity of the guiding point (3.0 cm/s) in T2. There was no significant difference in average T1 velocity between the ET and control groups (p = 0.26); however, the T2 velocity of ET patients was significantly higher than that of the control group (p < 0.05). They were also unable to adjust their velocity to match the guiding point, indicating that ET patients have a poorer ability to follow dynamic guidance. Even when the two patient groups had similar action tremor severity, their ability to follow dynamic guidance was still significantly different. Our study combined visual feedback with spiral drawing and demonstrated differences in the following-velocity distribution in PD and ET. This method may be used to distinguish the tremor presentation of the two diseases and thus provide an accurate diagnosis.
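The velocity measure described here is essentially the mean speed of the pen tip computed from digitized samples. A minimal sketch follows, assuming hypothetical (t, x, y) arrays; the actual tablet API, sampling rate, and units used in the study are not given in the abstract.

```python
import numpy as np

def mean_tracing_speed(t, x, y):
    """Mean pen-tip speed from digitizer samples: total path length / total time.

    t: timestamps in seconds; x, y: pen positions in cm.
    """
    t, x, y = map(np.asarray, (t, x, y))
    path_length = np.sum(np.hypot(np.diff(x), np.diff(y)))   # summed step distances
    return path_length / (t[-1] - t[0])

# Hypothetical example: an Archimedean spiral traced over 10 s
t = np.linspace(0.0, 10.0, 1000)
theta = 4.0 * np.pi * t / 10.0
x, y = 0.5 * theta * np.cos(theta), 0.5 * theta * np.sin(theta)
print(f"{mean_tracing_speed(t, x, y):.2f} cm/s")
```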
ERIC Educational Resources Information Center
Coelho, Chase J.; Nusbaum, Howard C.; Rosenbaum, David A.; Fenn, Kimberly M.
2012-01-01
Early research on visual imagery led investigators to suggest that mental visual images are just weak versions of visual percepts. Later research helped investigators understand that mental visual images differ in deeper and more subtle ways from visual percepts. Research on motor imagery has yet to reach this mature state, however. Many authors…
Eye movements in interception with delayed visual feedback.
Cámara, Clara; de la Malla, Cristina; López-Moliner, Joan; Brenner, Eli
2018-07-01
The increased reliance on electronic devices such as smartphones in our everyday life exposes us to various delays between our actions and their consequences. Whereas it is known that people can adapt to such delays, the mechanisms underlying such adaptation remain unclear. To better understand these mechanisms, the current study explored the role of eye movements in interception with delayed visual feedback. In two experiments, eye movements were recorded as participants tried to intercept a moving target with their unseen finger while receiving delayed visual feedback about their own movement. In Experiment 1, the target randomly moved in one of two different directions at one of two different velocities. The delay between the participant's finger movement and movement of the cursor that provided feedback about the finger movements was gradually increased. Despite the delay, participants followed the target with their gaze. They were quite successful at hitting the target with the cursor. Thus, they moved their finger to a position that was ahead of where they were looking. Removing the feedback showed that participants had adapted to the delay. In Experiment 2, the target always moved in the same direction and at the same velocity, while the cursor's delay varied across trials. Participants still always directed their gaze at the target. They adjusted their movement to the delay on each trial, often succeeding to intercept the target with the cursor. Since their gaze was always directed at the target, and they could not know the delay until the cursor started moving, participants must have been using peripheral vision of the delayed cursor to guide it to the target. Thus, people deal with delays by directing their gaze at the target and using both experience from previous trials (Experiment 1) and peripheral visual information (Experiment 2) to guide their finger in a way that will make the cursor hit the target.
Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition
Shu, Na; Gao, Zhiyong; Chen, Xiangan; Liu, Haihua
2015-01-01
Humans can easily understand other people's actions through their visual systems, whereas computers cannot. Therefore, a new bio-inspired computational model aimed at automatic action recognition is proposed in this paper. The model focuses on the dynamic properties of neurons and neural networks in the primary visual cortex (V1) and simulates the procedure of information processing in V1, which consists of visual perception, visual attention and representation of human action. In our model, a family of three-dimensional spatial-temporal correlative Gabor filters is used to model the dynamic properties of the classical receptive field of V1 simple cells tuned to different speeds and orientations in time, for detection of spatiotemporal information from video sequences. Based on the inhibitory effect of stimuli outside the classical receptive field, caused by lateral connections of spiking neuron networks in V1, we propose a surround-suppressive operator to further process the spatiotemporal information. A visual attention model based on perceptual grouping is integrated into our model to filter and group different regions. Moreover, in order to represent human action, we consider a characteristic of the neural code: a mean motion map based on analysis of the spike trains generated by spiking neurons. The experimental evaluation on some publicly available action datasets and comparison with the state-of-the-art approaches demonstrate the superior performance of the proposed model. PMID:26132270
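The V1 front end described here rests on spatio-temporal Gabor filtering of the video volume. A minimal sketch of one such filter bank is given below; it assumes a grayscale clip stored as a NumPy array of shape (frames, height, width), and the kernel sizes, frequencies, and bandwidths are illustrative choices, not the paper's parameters.

```python
import numpy as np
from scipy.signal import fftconvolve

def spatiotemporal_gabor(size=15, frames=7, orientation=0.0,
                         sf=0.15, tf=0.25, sigma_s=3.0, sigma_t=1.5):
    """3-D Gabor kernel tuned to one orientation and one speed (tf/sf)."""
    half_s, half_t = size // 2, frames // 2
    y, x = np.mgrid[-half_s:half_s + 1, -half_s:half_s + 1]
    t = np.arange(-half_t, half_t + 1)[:, None, None]
    xr = x * np.cos(orientation) + y * np.sin(orientation)    # rotated spatial axis
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma_s**2) - t**2 / (2 * sigma_t**2))
    carrier = np.cos(2 * np.pi * (sf * xr + tf * t))           # drifting sinusoid
    kernel = envelope * carrier
    return kernel - kernel.mean()                              # zero response to constant input

def v1_energy(clip, n_orientations=4):
    """Rectified, squared responses of a small filter bank; one map per orientation."""
    maps = []
    for k in range(n_orientations):
        g = spatiotemporal_gabor(orientation=k * np.pi / n_orientations)
        r = fftconvolve(clip, g, mode="same")
        maps.append(np.maximum(r, 0.0) ** 2)                   # simple-cell-like rectification
    return np.stack(maps)

clip = np.random.rand(30, 64, 64)    # stand-in for a grayscale video (frames, height, width)
print(v1_energy(clip).shape)         # (4, 30, 64, 64)
```

The surround suppression and attention stages of the model would operate on maps of this kind; they are not sketched here.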
The Shape We're In: Community Action Guide.
ERIC Educational Resources Information Center
2003
"The Shape We're In" is a national public education initiative that places a media spotlight on physical activity and obesity to promote public awareness and spark dialogue and action across the country. The centerpiece is a five-part newspaper series delivered to newspapers nationwide. This community action guide begins by describing…
Coordinator's Guide for Indoor Air Quality
IAQ Tools for Schools Action Kit - IAQ Coordinator's Guide. This guidance is designed to present practical and often low-cost actions you can take to identify and address existing or potential air quality problems.
A closer look at visually guided saccades in autism and Asperger’s disorder
Johnson, Beth P.; Rinehart, Nicole J.; Papadopoulos, Nicole; Tonge, Bruce; Millist, Lynette; White, Owen; Fielding, Joanne
2012-01-01
Motor impairments have been found to be a significant clinical feature associated with autism and Asperger’s disorder (AD) in addition to core symptoms of communication and social cognition deficits. Motor deficits in high-functioning autism (HFA) and AD may differentiate these disorders, particularly with respect to the role of the cerebellum in motor functioning. Current neuroimaging and behavioral evidence suggests greater disruption of the cerebellum in HFA than AD. Investigations of ocular motor functioning have previously been used in clinical populations to assess the integrity of the cerebellar networks, through examination of saccade accuracy and the integrity of saccade dynamics. Previous investigations of visually guided saccades in HFA and AD have only assessed basic saccade metrics, such as latency, amplitude, and gain, as well as peak velocity. We used a simple visually guided saccade paradigm to further characterize the profile of visually guided saccade metrics and dynamics in HFA and AD. It was found that children with HFA, but not AD, were more inaccurate across both small (5°) and large (10°) target amplitudes, and final eye position was hypometric at 10°. These findings suggest greater functional disturbance of the cerebellum in HFA than AD, and suggest fundamental difficulties with visual error monitoring in HFA. PMID:23162442
Visually Guided Control of Movement
NASA Technical Reports Server (NTRS)
Johnson, Walter W. (Editor); Kaiser, Mary K. (Editor)
1991-01-01
The papers given at an intensive, three-week workshop on visually guided control of movement are presented. The participants were researchers from academia, industry, and government, with backgrounds in visual perception, control theory, and rotorcraft operations. The papers included invited lectures and preliminary reports of research initiated during the workshop. Three major topics are addressed: extraction of environmental structure from motion; perception and control of self motion; and spatial orientation. Each topic is considered from both theoretical and applied perspectives. Implications for control and display are suggested.
ERIC Educational Resources Information Center
Chewonki Foundation, Wiscasset, ME.
This action guide is designed to help students and teachers become aware of the concepts and issues of waste management, and to motivate them to action in the classroom, school, home, and community. The guide emphasizes interdisciplinary activities that concentrate on the process of problem solving. Activities are identified by appropriate grade…
Visual Cues Generated during Action Facilitate 14-Month-Old Infants' Mental Rotation
ERIC Educational Resources Information Center
Antrilli, Nick K.; Wang, Su-hua
2016-01-01
Although action experience has been shown to enhance the development of spatial cognition, the mechanism underlying the effects of action is still unclear. The present research examined the role of visual cues generated during action in promoting infants' mental rotation. We sought to clarify the underlying mechanism by decoupling different…
Titiyal, Jeewan S; Kaur, Manpreet; Jose, Cijin P; Falera, Ruchita; Kinkar, Ashutosh; Bageshwar, Lalit Ms
2018-01-01
To compare toric intraocular lens (IOL) alignment assisted by image-guided surgery or manual marking methods and its impact on visual quality. This prospective comparative study enrolled 80 eyes with cataract and astigmatism ≥1.5 D to undergo phacoemulsification with toric IOL alignment by manual marking method using bubble marker (group I, n=40) or Callisto eye and Z align (group II, n=40). Postoperatively, accuracy of alignment and visual quality was assessed with a ray tracing aberrometer. Primary outcome measure was deviation from the target axis of implantation. Secondary outcome measures were visual quality and acuity. Follow-up was performed on postoperative days (PODs) 1 and 30. Deviation from the target axis of implantation was significantly less in group II on PODs 1 and 30 (group I: 5.5°±3.3°, group II: 3.6°±2.6°; p=0.005). Postoperative refractive cylinder was -0.89±0.35 D in group I and -0.64±0.36 D in group II (p=0.003). Visual acuity was comparable between both the groups. Visual quality measured in terms of Strehl ratio (p<0.05) and modulation transfer function (MTF) (p<0.05) was significantly better in the image-guided surgery group. Significant negative correlation was observed between deviation from target axis and visual quality parameters (Strehl ratio and MTF) (p<0.05). Image-guided surgery allows precise alignment of toric IOL without need for reference marking. It is associated with superior visual quality which correlates with the precision of IOL alignment.
Neural foundations of overt and covert actions.
Simos, Panagiotis G; Kavroulakis, Eleftherios; Maris, Thomas; Papadaki, Efrosini; Boursianis, Themistoklis; Kalaitzakis, Giorgos; Savaki, Helen E
2017-05-15
We used fMRI to assess the human brain areas activated for execution, observation and 1st person motor imagery of a visually guided tracing task with the index finger. Voxel-level conjunction analysis revealed several cortical areas activated in common across all three motor conditions, namely, the upper limb representation of the primary motor and somatosensory cortices, the dorsal and ventral premotor, the superior and inferior parietal cortices as well as the posterior part of the superior and middle temporal gyrus including the temporo-parietal junction (TPj) and the extrastriate body area (EBA). Functional connectivity analyses corroborated the notion that a common sensory-motor fronto-parieto-temporal cortical network is engaged for execution, observation, and imagination of the very same action. Taken together, these findings are consistent with the more parsimonious account of motor cognition provided by the mental simulation theory rather than the recently revised mirror neuron view. Action imagination and observation were each associated with several additional functional connections, which may serve the distinction between overt action and its covert counterparts, and the attribution of action to the correct agent. For example, the central position of the right middle and inferior frontal gyrus in functional connectivity during motor imagery may reflect the suppression of movements during mere imagination of action, and may contribute to the distinction between 'imagined' and 'real' action. Also, the central role of the right EBA in observation, assessed by functional connectivity analysis, may be related to the attribution of action to the 'external agent' as opposed to the 'self'. Copyright © 2017 Elsevier Inc. All rights reserved.
Action generation and action perception in imitation: an instance of the ideomotor principle.
Wohlschläger, Andreas; Gattis, Merideth; Bekkering, Harold
2003-01-01
We review a series of behavioural experiments on imitation in children and adults that test the predictions of a new theory of imitation. Most of the recent theories of imitation assume a direct visual-to-motor mapping between perceived and imitated movements. Based on our findings of systematic errors in imitation, the new theory of goal-directed imitation (GOADI) instead assumes that imitation is guided by cognitively specified goals. According to GOADI, the imitator does not imitate the observed movement as a whole, but rather decomposes it into its separate aspects. These aspects are hierarchically ordered, and the highest aspect becomes the imitator's main goal. Other aspects become sub-goals. In accordance with the ideomotor principle, the main goal activates the motor programme that is most strongly associated with the achievement of that goal. When executed, this motor programme sometimes matches, and sometimes does not, the model's movement. However, the main goal extracted from the model movement is almost always imitated correctly. PMID:12689376
Artificial Lighting for Modern Schools, A Guide for Administrative Use.
ERIC Educational Resources Information Center
Reida, George W.; And Others
The development of good visual environment and economically feasible lighting installations in schools is discussed in this guide. Eighty percent of all school learning is gained through the eyes, as estimated by the U.S. Office of Education. Good school lighting is comfortable, glare-free and adequate for the visual task. Eye strain and unnecessary…
The Role of Clarity and Blur in Guiding Visual Attention in Photographs
ERIC Educational Resources Information Center
Enns, James T.; MacDonald, Sarah C.
2013-01-01
Visual artists and photographers believe that a viewer's gaze can be guided by selective use of image clarity and blur, but there is little systematic research. In this study, participants performed several eye-tracking tasks with the same naturalistic photographs, including recognition memory for the entire photo, as well as recognition memory…
Self-Study and Evaluation Guide/1979 Edition. Section D-16: Other Service Program.
ERIC Educational Resources Information Center
National Accreditation Council for Agencies Serving the Blind and Visually Handicapped, New York, NY.
The self evaluation guide is explained to be designed for accreditation of services to blind and visually handicapped students in service programs for which the NAC (National Accreditation Council for Agencies Serving the Blind and Visually Handicapped) does not have specific program standards (such as radio reading services and library services).…
ERIC Educational Resources Information Center
Byun, Tara McAllister; Hitchcock, Elaine R.; Ferron, John
2017-01-01
Purpose: Single-case experimental designs are widely used to study interventions for communication disorders. Traditionally, single-case experiments follow a response-guided approach, where design decisions during the study are based on participants' observed patterns of behavior. However, this approach has been criticized for its high rate of…
Wisconsin School for the Visually Handicapped. A Curriculum Guide for Students. Bulletin No. 7393.
ERIC Educational Resources Information Center
Wisconsin State Dept. of Public Instruction, Madison. Div. for Handicapped Children and Pupil Services.
The curriculum guide sets forth the course of study at the Wisconsin School for the Visually Handicapped. An initial section presents the school's philosophy regarding the need for specialty skills to be incorporated into regular academic instruction. The content of the primary and elementary programs (kindergarten through grade 6) is reviewed in…
K9 Buddies: A Program of Guide Dogs for the Blind
ERIC Educational Resources Information Center
Ritter, Joanne
2007-01-01
Today, exceptional dogs that have been specially bred and socialized are paired with children who are blind or visually impaired. These dogs, called "K9 Buddies," are from Guide Dogs for the Blind, a national nonprofit organization with a mission to offer skilled mobility dogs and training free-of-charge to adults with visual impairments…
Reference Guide for Indoor Air Quality in Schools
IAQ Tools for Schools Action Kit - IAQ Reference Guide. This guidance is designed to present practical and often low-cost actions you can take to identify and address existing or potential air quality problems.
Energy and Environment Guide to Action - Executive Summary
Summarizes the key messages and purpose of the Energy and Environment Guide to Action, which describes the latest best practices and opportunities that states are using to invest in energy efficiency, renewable energy, and CHP.
Let's 'play' with molecular pharmacology.
Choudhury, Supriyo; Pradhan, Richeek; Sengupta, Gairik; Das, Manisha; Chatterjee, Manojit; Roy, Ranendra Kumar; Chatterjee, Suparna
2015-01-01
Understanding concepts of molecular mechanisms of drug action involves sequential visualization of physiological processes and drug effects, a task that can be difficult at an undergraduate level. Role-play is a teaching-learning methodology whereby active participation of students, as well as clear visualization of the phenomenon, is used to convey complex physiological concepts. However, its use in teaching drug action, a process that demands understanding of a second level of complexity over the physiological process, has not been investigated. We hypothesized that role-play can be an effective and well-accepted method for teaching molecular pharmacology. In an observational study, students were guided to perform a role-play on a selected topic involving drug activity. Students' gain in knowledge was assessed by comparing validated pre- and post-test questionnaires as well as by class average normalized gain. The acceptance of role-play among undergraduate medical students was evaluated by Likert scale analysis and thematic analysis of their open-ended written responses. Significant improvement in knowledge (P < 0.001) was noted from pre- to post-test knowledge scores, while a high gain in class average normalized score was evident. In Likert scale analysis, most students (93%) expressed that role-play was an acceptable way of teaching. In a thematic analysis, themes of both strengths and weaknesses of the session emerged. Role-play can be effectively utilized while teaching selected topics of molecular pharmacology in undergraduate medical curricula.
Action starring narratives and events: Structure and inference in visual narrative comprehension
Cohn, Neil; Wittenberg, Eva
2015-01-01
Studies of discourse have long placed focus on the inference generated by information that is not overtly expressed, and theories of visual narrative comprehension similarly focused on the inference generated between juxtaposed panels. Within the visual language of comics, star-shaped “flashes” commonly signify impacts, but can be enlarged to the size of a whole panel that can omit all other representational information. These “action star” panels depict a narrative culmination (a “Peak”), but have content which readers must infer, thereby posing a challenge to theories of inference generation in visual narratives that focus only on the semantic changes between juxtaposed images. This paper shows that action stars demand more inference than depicted events, and that they are more coherent in narrative sequences than scrambled sequences (Experiment 1). In addition, action stars play a felicitous narrative role in the sequence (Experiment 2). Together, these results suggest that visual narratives use conventionalized depictions that demand the generation of inferences while retaining narrative coherence of a visual sequence. PMID:26709362
General visual robot controller networks via artificial evolution
NASA Astrophysics Data System (ADS)
Cliff, David; Harvey, Inman; Husbands, Philip
1993-08-01
We discuss recent results from our ongoing research concerning the application of artificial evolution techniques (i.e., an extended form of genetic algorithm) to the problem of developing 'neural' network controllers for visually guided robots. The robot is a small autonomous vehicle with extremely low-resolution vision, employing visual sensors which could readily be constructed from discrete analog components. In addition to visual sensing, the robot is equipped with a small number of mechanical tactile sensors. Activity from the sensors is fed to a recurrent dynamical artificial 'neural' network, which acts as the robot controller, providing signals to motors governing the robot's motion. Prior to presentation of new results, this paper summarizes our rationale and past work, which has demonstrated that visually guided control networks can arise without any explicit specification that visual processing should be employed: the evolutionary process opportunistically makes use of visual information if it is available.
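The evolutionary approach described here amounts to a genetic algorithm searching over the parameters of a small recurrent controller. The sketch below is a toy stand-in for the authors' far richer setup: the network topology is fixed rather than evolvable, and the fitness function is a dummy placeholder (their controllers are evaluated on actual or simulated robot behaviour).

```python
import numpy as np

rng = np.random.default_rng(0)
N_SENSORS, N_HIDDEN, N_MOTORS = 8, 6, 2
N_IN = (N_SENSORS + N_HIDDEN) * N_HIDDEN          # input + recurrent weights
N_WEIGHTS = N_IN + N_HIDDEN * N_MOTORS

def controller_step(weights, state, sensors):
    """One update of a recurrent controller; returns (motor output, new hidden state)."""
    w_in = weights[:N_IN].reshape(N_HIDDEN, N_SENSORS + N_HIDDEN)
    w_out = weights[N_IN:].reshape(N_MOTORS, N_HIDDEN)
    state = np.tanh(w_in @ np.concatenate([sensors, state]))
    return np.tanh(w_out @ state), state

def fitness(weights, steps=100):
    """Dummy evaluation: mean forward drive given random low-resolution visual input."""
    state, total = np.zeros(N_HIDDEN), 0.0
    for _ in range(steps):
        motors, state = controller_step(weights, state, rng.random(N_SENSORS))
        total += motors.mean()                     # placeholder for distance travelled
    return total / steps

pop = rng.normal(size=(30, N_WEIGHTS))             # population of weight genotypes
for generation in range(50):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]                        # truncation selection
    pop = parents[rng.integers(0, 10, size=30)]                    # clone parents
    pop = pop + rng.normal(scale=0.1, size=pop.shape)              # Gaussian mutation

print("best fitness after evolution:", round(float(max(fitness(ind) for ind in pop)), 3))
```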
Energy and Environment Guide to Action - Chapter 1: Introduction and Background
Introduces the Energy and Environment Guide to Action which documents best practices for designing and implementing state policies and the benefits of energy efficiency, renewable energy, and combined heat and power policies and programs.
Invariant recognition drives neural representations of action sequences
Poggio, Tomaso
2017-01-01
Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying what representations of action sequences are constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences. PMID:29253864
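The model class tested here is the spatiotemporal convolutional network operating on short video clips. Below is a minimal PyTorch sketch of that class, intended only as an illustration of the general architecture and not the authors' specific models; the clip dimensions, layer sizes, and number of action classes are assumptions.

```python
import torch
import torch.nn as nn

class TinySpatiotemporalCNN(nn.Module):
    """Small 3-D CNN: video clip (batch, channels, frames, height, width) -> class logits."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),     # pool over space, keep time
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),                  # global spatiotemporal pooling
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, clip):
        return self.classifier(self.features(clip).flatten(1))

model = TinySpatiotemporalCNN(n_classes=10)
clip = torch.randn(2, 3, 16, 64, 64)     # batch of two 16-frame RGB clips
print(model(clip).shape)                  # torch.Size([2, 10])
```

In the study's framing, it is the intermediate representations of networks like this, after training for invariant action discrimination, that are compared against neural recordings.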
Kuntz, Jessica R; Karl, Jenni M; Doan, Jon B; Whishaw, Ian Q
2018-04-01
Reach-to-grasp movements feature the integration of a reach directed by the extrinsic (location) features of a target and a grasp directed by the intrinsic (size, shape) features of a target. The action-perception theory suggests that integration and scaling of a reach-to-grasp movement, including its trajectory and the concurrent digit shaping, are features that depend upon online action pathways of the dorsal visuomotor stream. Scaling is much less accurate for a pantomime reach-to-grasp movement, a pretend reach with the target object absent. Thus, the action-perception theory proposes that pantomime movement is mediated by perceptual pathways of the ventral visuomotor stream. A distinguishing visual feature of a real reach-to-grasp movement is gaze anchoring, in which a participant visually fixates the target throughout the reach and disengages, often by blinking or looking away/averting the head, at about the time that the target is grasped. The present study examined whether gaze anchoring is associated with pantomime reaching. The eye and hand movements of participants were recorded as they reached for a ball of one of three sizes, located on a pedestal at arm's length, or pantomimed the same reach with the ball and pedestal absent. The kinematic measures for real reach-to-grasp movements were coupled to the location and size of the target, whereas the kinematic measures for pantomime reach-to-grasp, although grossly reflecting target features, were significantly altered. Gaze anchoring was also tightly coupled to the target for real reach-to-grasp movements, but for pantomime reach-to-grasp there was no systematic focus for gaze, whether in relation to the virtual target, the previous location of the target, or the participant's reaching hand. The presence of gaze anchoring during real reach-to-grasp and its absence in pantomime reach-to-grasp supports the action-perception theory that real, but not pantomime, reaches are online visuomotor actions; the findings are discussed in relation to the neural control of real and pantomime reach-to-grasp movements.
Park, George D; Reed, Catherine L
2015-02-01
Researchers acknowledge the interplay between action and attention, but typically treat action either as a response to successful attentional selection or by correlating performance on separate action and attention tasks. We investigated how concurrent action combined with spatial monitoring affects the distribution of attention across the visual field. We embedded a functional field of view (FFOV) paradigm, with concurrent central object recognition and peripheral target localization tasks, in a simulated driving environment. Peripheral targets varied across 20-60 deg eccentricity at 11 radial spokes. Three conditions assessed the effects of visual complexity and concurrent action on the size and shape of the FFOV: (1) with no background, (2) with a driving background, and (3) with a driving background and vehicle steering. The addition of visual complexity slowed task performance and reduced the FFOV size but did not change the baseline shape. In contrast, the addition of steering produced not only shrinkage of the FFOV, but also changes in the FFOV shape. Nonuniform performance decrements occurred in proximal regions used for the central task and for steering, independent of interference from context elements. Multifocal attention models should consider the role of action and account for nonhomogeneities in the distribution of attention. © 2015 SAGE Publications.
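In this paradigm, peripheral target locations are defined by eccentricity in degrees of visual angle along radial spokes and must be converted to screen coordinates. A minimal sketch of that geometry is given below; the viewing distance and pixel density are hypothetical values, not parameters reported in the abstract.

```python
import numpy as np

VIEW_DIST_CM = 60.0     # assumed viewing distance
PX_PER_CM = 38.0        # assumed pixel density of the display

def target_positions(eccentricities_deg=(20, 30, 40, 50, 60), n_spokes=11):
    """Screen (x, y) offsets in pixels for targets placed on radial spokes.

    Uses a flat-screen tangent mapping; large eccentricities imply a wide-field display.
    """
    spoke_angles = np.linspace(0.0, 2.0 * np.pi, n_spokes, endpoint=False)
    positions = []
    for ecc in eccentricities_deg:
        r_px = VIEW_DIST_CM * np.tan(np.deg2rad(ecc)) * PX_PER_CM
        positions.extend((r_px * np.cos(a), r_px * np.sin(a)) for a in spoke_angles)
    return np.array(positions)

print(target_positions().shape)    # (55, 2): 5 eccentricities x 11 spokes
```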
Two different streams form the dorsal visual system: anatomy and functions.
Rizzolatti, Giacomo; Matelli, Massimo
2003-11-01
There are two radically different views on the functional role of the dorsal visual stream. One considers it as a system involved in space perception. The other is of a system that codes visual information for action organization. On the basis of new anatomical data and a reconsideration of previous functional and clinical data, we propose that the dorsal stream and its recipient parietal areas form two distinct functional systems: the dorso-dorsal stream (d-d stream) and the ventro-dorsal stream (v-d stream). The d-d stream is formed by area V6 (main d-d extrastriate visual node) and areas V6A and MIP of the superior parietal lobule. Its major functional role is the control of actions "on line". Its damage leads to optic ataxia. The v-d stream is formed by area MT (main v-d extrastriate visual node) and by the visual areas of the inferior parietal lobule. As the d-d stream, v-d stream is responsible for action organization. It, however, also plays a crucial role in space perception and action understanding. The putative mechanisms linking action and perception in the v-d stream is discussed.
Changes in search rate but not in the dynamics of exogenous attention in action videogame players.
Hubert-Wallander, Bjorn; Green, C Shawn; Sugarman, Michael; Bavelier, Daphne
2011-11-01
Many previous studies have shown that the speed of processing in attentionally demanding tasks seems enhanced following habitual action videogame play. However, using one of the diagnostic tasks for efficiency of attentional processing, a visual search task, Castel and collaborators (Castel, Pratt, & Drummond, Acta Psychologica 119:217-230, 2005) reported no difference in visual search rates, instead proposing that action gaming may change response execution time rather than the efficiency of visual selective attention per se. Here we used two hard visual search tasks, one measuring reaction time and the other accuracy, to test whether visual search rate may be changed by action videogame play. We found greater search rates in the gamer group than in the nongamer controls, consistent with increased efficiency in visual selective attention. We then asked how general the change in attentional throughput noted so far in gamers might be by testing whether exogenous attentional cues would lead to a disproportional enhancement in throughput in gamers as compared to nongamers. Interestingly, exogenous cues were found to enhance throughput equivalently between gamers and nongamers, suggesting that not all mechanisms known to enhance throughput are similarly enhanced in action videogamers.
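The "search rate" at issue here is the slope of the reaction-time-by-set-size function, in milliseconds per item. A minimal sketch of how that slope is typically estimated follows, using made-up values rather than data from the study.

```python
import numpy as np

# Hypothetical mean correct RTs (ms) at each display set size
set_sizes = np.array([4, 8, 16, 32])
rt_gamers = np.array([620, 700, 860, 1180])
rt_controls = np.array([640, 760, 990, 1450])

def search_rate(set_sizes, rts):
    """Slope (ms/item) and intercept (ms) of the RT x set-size function."""
    slope, intercept = np.polyfit(set_sizes, rts, deg=1)
    return slope, intercept

for label, rts in [("gamers", rt_gamers), ("controls", rt_controls)]:
    slope, intercept = search_rate(set_sizes, rts)
    print(f"{label}: {slope:.1f} ms/item, intercept {intercept:.0f} ms")
```

A shallower slope corresponds to a faster search rate, which is the direction of the group difference reported here.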
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birkel, Garrett; Garcia Martin, Hector; Morrell, William
"Arrowland" is a web-based software application primarily for mapping, integrating and visualizing a variety of metabolism data of living organisms, including but not limited to metabolomics, proteomics, transcriptomics and fluxomics. This software application makes multi-omics data analysis intuitive and interactive. It improves data sharing and communication by enabling users to visualize their omics data using a web browser (on a PC or mobile device). It increases user's productivity by simplifying multi-omics data analysis using well developed maps as a guide. Users using this tool can gain insights into their data sets that would be difficult or even impossible to teasemore » out by looking at raw number, or using their currently existing toolchains to generate static single-use maps. Arrowland helps users save time by visualizing relative changes in different conditions or over time, and helps users to produce more significant insights faster. Preexisting maps decrease the learning curve for beginners in the omics field. Sets of multi-omics data are presented in the browser, as a two-dimensional flowchart resembling a map, with varying levels of detail information, based on the scaling of the map. Users can pan and zoom to explore different maps, compare maps, upload their own research data sets onto desired maps, alter map appearance in ways that facilitate interpretation, visualization and analysis of the given data, and export data, reports and actionable items to help the user initiative.« less
Sakata, H; Taira, M; Kusunoki, M; Murata, A; Tanaka, Y
1997-08-01
Recent neurophysiological studies in alert monkeys have revealed that the parietal association cortex plays a crucial role in depth perception and visually guided hand movement. The following five classes of parietal neurons covering various aspects of these functions have been identified: (1) depth-selective visual-fixation (VF) neurons of the inferior parietal lobule (IPL), representing egocentric distance; (2) depth-movement sensitive (DMS) neurons of V5A and the ventral intraparietal (VIP) area representing direction of linear movement in 3-D space; (3) depth-rotation-sensitive (RS) neurons of V5A and the posterior parietal (PP) area representing direction of rotary movement in space; (4) visually responsive manipulation-related neurons (visual-dominant or visual-and-motor type) of the anterior intraparietal (AIP) area, representing 3-D shape or orientation (or both) of objects for manipulation; and (5) axis-orientation-selective (AOS) and surface-orientation-selective (SOS) neurons in the caudal intraparietal sulcus (cIPS) sensitive to binocular disparity and representing the 3-D orientation of the longitudinal axes and flat surfaces, respectively. Some AOS and SOS neurons are selective in both orientation and shape. Thus the dorsal visual pathway is divided into at least two subsystems, V5A, PP and VIP areas for motion vision and V6, LIP and cIPS areas for coding position and 3-D features. The cIPS sends the signals of 3-D features of objects to the AIP area, which is reciprocally connected to the ventral premotor (F5) area and plays an essential role in matching hand orientation and shaping with 3-D objects for manipulation.
Hoffmann, Susanne; Vega-Zuniga, Tomas; Greiter, Wolfgang; Krabichler, Quirin; Bley, Alexandra; Matthes, Mariana; Zimmer, Christiane; Firzlaff, Uwe; Luksch, Harald
2016-11-01
The midbrain superior colliculus (SC) commonly features a retinotopic representation of visual space in its superficial layers, which is congruent with maps formed by multisensory neurons and motor neurons in its deep layers. Information flow between layers is suggested to enable the SC to mediate goal-directed orienting movements. While most mammals strongly rely on vision for orienting, some species such as echolocating bats have developed alternative strategies, which raises the question how sensory maps are organized in these animals. We probed the visual system of the echolocating bat Phyllostomus discolor and found that binocular high acuity vision is frontally oriented and thus aligned with the biosonar system, whereas monocular visual fields cover a large area of peripheral space. For the first time in echolocating bats, we could show that in contrast with other mammals, visual processing is restricted to the superficial layers of the SC. The topographic representation of visual space, however, followed the general mammalian pattern. In addition, we found a clear topographic representation of sound azimuth in the deeper collicular layers, which was congruent with the superficial visual space map and with a previously documented map of orienting movements. Especially for bats navigating at high speed in densely structured environments, it is vitally important to transfer and coordinate spatial information between sensors and motor systems. Here, we demonstrate first evidence for the existence of congruent maps of sensory space in the bat SC that might serve to generate a unified representation of the environment to guide motor actions. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Schomaker, Judith; Walper, Daniel; Wittmann, Bianca C; Einhäuser, Wolfgang
2017-04-01
In addition to low-level stimulus characteristics and current goals, our previous experience with stimuli can also guide attentional deployment. It remains unclear, however, whether such effects act independently or whether they interact in guiding attention. In the current study, we presented natural scenes including everyday objects that differed in affective-motivational impact. In the first, free-viewing, experiment, we presented visually matched triads of scenes in which one critical object was replaced; the replaced objects varied mainly in motivational value, but also in valence and arousal, as confirmed by ratings from a large set of observers. Treating motivation as a categorical factor, we found that it affected gaze. A linear-effect model showed that arousal, valence, and motivation predicted fixations above and beyond visual characteristics such as object size, eccentricity, or visual salience. In a second experiment, we investigated whether the effects of emotion and motivation could be modulated by visual salience. In a medium-salience condition, we presented the same unmodified scenes as in the first experiment. In a high-salience condition, we retained the saturation of the critical object and decreased the saturation of the background, and in a low-salience condition, we desaturated the critical object while retaining the original saturation of the background. We found that highly salient objects guided gaze, but we still found additional, additive effects of arousal, valence and motivation, confirming that higher-level factors can also guide attention, as measured by fixations towards objects in natural scenes. Copyright © 2017 Elsevier Ltd. All rights reserved.
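The "above and beyond" claim corresponds to a regression in which affective-motivational ratings predict a fixation measure after visual covariates are controlled for. A minimal sketch with simulated data is shown below; the study's exact model specification, predictor coding, and any random-effects structure are not given in the abstract, so this uses plain least squares with hypothetical continuous predictors.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # simulated critical objects

# Standardized object-level predictors and simulated fixation counts
salience, obj_size, ecc = rng.normal(size=(3, n))
arousal, valence, motivation = rng.normal(size=(3, n))
fixations = (0.5 * salience - 0.3 * ecc + 0.25 * motivation
             + 0.15 * arousal + 0.1 * valence + rng.normal(scale=0.5, size=n))

X = np.column_stack([np.ones(n), salience, obj_size, ecc, arousal, valence, motivation])
beta, *_ = np.linalg.lstsq(X, fixations, rcond=None)

names = ["intercept", "salience", "size", "eccentricity",
         "arousal", "valence", "motivation"]
for name, b in zip(names, beta):
    print(f"{name:>12}: {b:+.2f}")
```

Non-zero coefficients for arousal, valence, and motivation after controlling for the visual covariates are what the abstract's claim amounts to.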
Ogourtsova, Tatiana; Archambault, Philippe; Lamontagne, Anouk
2015-12-01
Unilateral spatial neglect (USN), a highly prevalent post-stroke impairment, refers to one's inability to orient or respond to stimuli located in the contralesional visual hemispace. Unilateral spatial neglect has been shown to strongly affect motor performance in functional activities, including movements of the non-affected upper extremity (UE). To date, our understanding of the effects of USN on goal-directed UE movements is limited, and comparisons of performance between individuals post-stroke with and without USN are required. The aim was to determine how the presence of USN, in comparison to its absence, impacts different types of goal-directed movements of the non-affected UE in individuals with stroke. The present review consisted of a comprehensive literature search, an assessment of the quality of the selected studies and a qualitative data analysis. A total of 20 studies of moderate to high quality were selected. USN-specific impairments were found in tasks that required perceptual, memory-guided or delayed actions, and fewer impairments were found in tasks that required an immediate action to a predefined target. The results indicate that USN contributes to deficits in action execution with the non-affected UE in tasks that place greater perceptual demands.
Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid.
Kidd, Gerald
2017-10-17
Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired "target" talker while ignoring the speech from unwanted "masker" talkers and other sources of sound. This listening situation forms the classic "cocktail party problem" described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed in this article. This approach, embodied in a prototype "visually guided hearing aid" (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources. The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based "spatial filter" operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing especially in conditions high in "informational masking." The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in "energetic masking." Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations. Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic beamforming implemented by the VGHA, especially for nearby sources in less reverberant sound fields. Moreover, guiding the beam using eye gaze can be an effective means of sound source enhancement for listening conditions where the target source changes frequently over time as often occurs during turn-taking in a conversation. http://cred.pubs.asha.org/article.aspx?articleid=2601621.
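The core signal-processing element of the VGHA is an acoustic beamformer whose look direction is set by the listener's eye gaze. Below is a minimal delay-and-sum sketch for a linear microphone array; the VGHA's actual array geometry and beamforming algorithm are not described in this abstract, so the microphone spacing, sampling rate, and steering convention here are assumptions illustrating only the generic technique.

```python
import numpy as np

FS = 16000          # sampling rate (Hz), assumed
C = 343.0           # speed of sound (m/s)
N_MICS = 8
SPACING = 0.02      # 2 cm inter-microphone spacing, assumed
mic_x = (np.arange(N_MICS) - (N_MICS - 1) / 2) * SPACING

def delay_and_sum(signals, steer_deg):
    """Steer a linear array toward steer_deg (0 = broadside) via frequency-domain delays.

    signals: array of shape (N_MICS, n_samples), one row per microphone.
    """
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / FS)
    delays = mic_x * np.sin(np.deg2rad(steer_deg)) / C               # seconds, per mic
    spectra = np.fft.rfft(signals, axis=1)
    phase = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])    # time-shift each channel
    aligned = np.fft.irfft(spectra * phase, n=n, axis=1)
    return aligned.mean(axis=0)                                      # sum (average) channels

# Hypothetical use: gaze angle from an eye tracker sets the look direction
gaze_angle_deg = 25.0
mic_signals = np.random.randn(N_MICS, FS)    # stand-in for one second of array audio
print(delay_and_sum(mic_signals, gaze_angle_deg).shape)   # (16000,)
```

Coupling the steering angle to measured gaze, as the prototype does, simply means updating steer_deg from the eye tracker as the listener looks at a new talker.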
Tandonnet, Christophe; Garry, Michael I; Summers, Jeffery J
2013-07-01
Making a decision may rely on accumulating evidence in favor of one alternative until a threshold is reached. Sequential-sampling models differ in how evidence is accumulated and in how accumulation is linked to action implementation. Here, we tested a model's prediction of early action implementation specific to potential actions. We assessed the dynamics of action implementation in go/no-go and between-hand choice tasks by transcranial magnetic stimulation of the motor cortex (single- or paired-pulse TMS; 3-ms interstimulus interval). Prior to implementation of the selected action, the amplitude of the motor evoked potential first increased regardless of the visual stimulus, but only for the hand potentially involved in the to-be-produced action. These findings suggest that visual stimuli can trigger an early motor activation specific to potential actions, consistent with race-like models with continuous transmission between decision making and action implementation. Copyright © 2013 Society for Psychophysiological Research.
ERIC Educational Resources Information Center
Gaver, Wayne
Presented is an industrial arts curriculum guide for woodworking which was developed out of a 3-year program designed to meet the unmet vocational education needs of visually impaired students enrolled in junior high, secondary, and community colleges in a five-county region of California, and to provide inservice training to regular vocational…
Enhancing visual search abilities of people with intellectual disabilities.
Li-Tsang, Cecilia W P; Wong, Jackson K K
2009-01-01
This study aimed to evaluate the effects of cueing in a visual search paradigm for people with and without intellectual disabilities (ID). A total of 36 subjects (18 persons with ID and 18 persons with normal intelligence) were recruited using a convenience sampling method. A series of experiments was conducted to compare guided cue strategies that added either motion contrast or an additional cue to a basic search task. Repeated measures ANOVA and post hoc multiple comparison tests were used to compare each cue strategy. Results showed that the use of guided strategies was able to capture focal attention in an automatic manner in the ID group (Pillai's Trace=5.99, p<0.0001). Both guided cue and guided motion search tasks demonstrated functionally similar effects, confirming the non-specific character of salience. These findings suggest that the visual search efficiency of people with ID improved greatly when the target was made salient through cueing as the complexity of the display increased (i.e., as set size increased). This study could have important implications for the design of the visual search format of computerized programs developed to help people with ID learn new tasks.
Laurent, Vincent; Balleine, Bernard W
2015-04-20
The capacity to extract causal knowledge from the environment allows us to predict future events and to use those predictions to decide on a course of action. Although evidence of such causal reasoning has long been described, recent evidence suggests that using predictive knowledge to guide decision-making in this way is predicated on reasoning about causes in two quite distinct ways: choosing an action can be based on the interaction between predictive information and the consequences of that action, or, alternatively, actions can be selected based on the consequences that they do not produce. The latter counterfactual reasoning is highly adaptive because it allows us to use information about both present and absent events to guide decision-making. Nevertheless, although there is now evidence to suggest that animals other than humans, including rats and birds, can engage in causal reasoning of one kind or another, there is currently no evidence that they use counterfactual reasoning to guide choice. To assess this question, we gave rats the opportunity to learn new action-outcome relationships, after which we probed the structure of this learning by presenting excitatory and inhibitory cues predicting that the specific outcomes of their actions would either occur or would not occur. Whereas the excitors biased choice toward the action delivering the predicted outcome, the inhibitory cues selectively elevated actions predicting the absence of the inhibited outcome, suggesting that rats encoded the counterfactual action-outcome mappings and were able to use them to guide choice. Copyright © 2015 Elsevier Ltd. All rights reserved.
Jastorff, Jan; Clavagnier, Simon; Gergely, György; Orban, Guy A
2011-02-01
Performing goal-directed actions toward an object in accordance with contextual constraints, such as the presence or absence of an obstacle, has been widely used as a paradigm for assessing the capacity of infants or nonhuman primates to evaluate the rationality of others' actions. Here, we have used this paradigm in a functional magnetic resonance imaging experiment to visualize the cortical regions involved in the assessment of action rationality while controlling for visual differences in the displays and directly correlating magnetic resonance activity with rationality ratings. Bilateral middle temporal gyrus (MTG) regions, anterior to extrastriate body area and the human middle temporal complex, were involved in the visual evaluation of action rationality. These MTG regions are embedded in the superior temporal sulcus regions processing the kinematics of observed actions. Our results suggest that rationality is assessed initially by purely visual computations, combining the kinematics of the action with the physical constraints of the environmental context. The MTG region seems to be sensitive to the contingent relationship between a goal-directed biological action and its relevant environmental constraints, showing increased activity when the expected pattern of rational goal attainment is violated.
Neuronal pathway finding: from neurons to initial neural networks.
Roscigno, Cecelia I
2004-10-01
Neuronal pathway finding is crucial for structured cellular organization and the development of neural circuits within the nervous system. Neuronal pathway finding within the visual system has been extensively studied and therefore is used as a model to review existing knowledge regarding this developmental process. General principles of neuronal pathway finding apply throughout the nervous system. Comprehension of these concepts guides neuroscience nurses in gaining an understanding of this developmental course, the implications of different anomalies, and the theoretical basis and nursing implications of some provocative new therapies being proposed to treat neurodegenerative diseases and neurologic injuries. These therapies have limitations in light of current ethical considerations, developmental constraints, and delivery modes, and in light of what is known about the development of neuronal pathways.
2017-01-01
The pulvinar complex is interconnected extensively with brain regions involved in spatial processing and eye movement control. Recent inactivation studies have shown that the dorsal pulvinar (dPul) plays a role in saccade target selection; however, it remains unknown whether it exerts effects on visual processing or at planning/execution stages. We used electrical microstimulation of the dPul while monkeys performed saccade tasks toward instructed and freely chosen targets. Timing of stimulation was varied, starting before, at, or after onset of target(s). Stimulation affected saccade properties and target selection in a time-dependent manner. Stimulation starting before but overlapping with target onset shortened saccadic reaction times (RTs) for ipsiversive (to the stimulation site) target locations, whereas stimulation starting at and after target onset caused systematic delays for both ipsiversive and contraversive locations. Similarly, stimulation starting before the onset of bilateral targets increased ipsiversive target choices, whereas stimulation after target onset increased contraversive choices. Properties of dPul neurons and stimulation effects were consistent with an overall contraversive drive, with varying outcomes contingent upon behavioral demands. RT and choice effects were largely congruent in the visually-guided task, but stimulation during memory-guided saccades, while influencing RTs and errors, did not affect choice behavior. Together, these results show that the dPul plays a primary role in action planning as opposed to visual processing, that it exerts its strongest influence on spatial choices when decision and action are temporally close, and that this choice effect can be dissociated from motor effects on saccade initiation and execution. SIGNIFICANCE STATEMENT Despite a recent surge of interest, the core function of the pulvinar, the largest thalamic complex in primates, remains elusive. This understanding is crucial given the central role of the pulvinar in current theories of integrative brain functions supporting cognition and goal-directed behaviors, but electrophysiological and causal interference studies of dorsal pulvinar (dPul) are rare. Building on our previous studies that pharmacologically suppressed dPul activity for several hours, here we used transient electrical microstimulation at different periods while monkeys performed instructed and choice eye movement tasks, to determine time-specific contributions of pulvinar to saccade generation and decision making. We show that stimulation effects depend on timing and behavioral state and that effects on choices can be dissociated from motor effects. PMID:28119401
Appelbaum, L Gregory; Cain, Matthew S; Darling, Elise F; Mitroff, Stephen R
2013-08-01
Action video game playing has been experimentally linked to a number of perceptual and cognitive improvements. These benefits are captured through a wide range of psychometric tasks and have led to the proposition that action video game experience may promote the ability to extract statistical evidence from sensory stimuli. Such an advantage could arise from a number of possible mechanisms: improvements in visual sensitivity, enhancements in the capacity or duration for which information is retained in visual memory, or higher-level strategic use of information for decision making. The present study measured the capacity and time course of visual sensory memory using a partial report performance task as a means to distinguish between these three possible mechanisms. Sensitivity measures and parameter estimates that describe sensory memory capacity and the rate of memory decay were compared between individuals who reported high levels and low levels of action video game experience. Our results revealed a uniform increase in partial report accuracy at all stimulus-to-cue delays for action video game players but no difference in the rate or time course of the memory decay. The present findings suggest that action video game playing may be related to enhancements in the initial sensitivity to visual stimuli, but not to a greater retention of information in iconic memory buffers.
Proulx, Michael J.; Gwinnutt, James; Dell’Erba, Sara; Levy-Tzedek, Shelly; de Sousa, Alexandra A.; Brown, David J.
2015-01-01
Vision is the dominant sense for perception-for-action in humans and other higher primates. Advances in sight restoration now utilize the other intact senses to provide information that is normally sensed visually through sensory substitution to replace missing visual information. Sensory substitution devices translate visual information from a sensor, such as a camera or ultrasound device, into a format that the auditory or tactile systems can detect and process, so the visually impaired can see through hearing or touch. Online control of action is essential for many daily tasks such as pointing, grasping and navigating, and adapting to a sensory substitution device successfully requires extensive learning. Here we review the research on sensory substitution for vision restoration in the context of providing the means of online control for action in the blind or blindfolded. It appears that the use of sensory substitution devices utilizes the neural visual system; this suggests the hypothesis that sensory substitution draws on the same underlying mechanisms as unimpaired visual control of action. Here we review the current state of the art for sensory substitution approaches to object recognition, localization, and navigation, and the potential these approaches have for revealing a metamodal behavioral and neural basis for the online control of action. PMID:26599473
ERIC Educational Resources Information Center
Sandler, Joanne
This community action guide was developed to implement the strategies for the advancement of women developed at the United Nations world conference in Nairobi that ended the Decade for Women in 1985. The guide is intended to: (1) increase understanding and awareness of the existence of the Nairobi Forward-Looking Strategies for the Advancement of…
ERIC Educational Resources Information Center
Ekstrom, Anna; Lindwall, Oskar; Saljo, Roger
2009-01-01
This article concerns a central issue in education as an institutional activity: instructions and their role in guiding student activities and understanding. In the study, we investigate the tensions between specifics and generalities in the joint production of guided action. This issue is explored in the context of handicraft education--or more…
ERIC Educational Resources Information Center
Solution Tree, 2010
2010-01-01
This action guide is intended to assist in the reading of and reflection upon "Learning by Doing: A Handbook for Professional Learning Communities at Work, Second Edition" by Richard DuFour, Rebecca DuFour, Richard Eaker, and Thomas Many. The guide can be used by an individual, a small group, or an entire faculty to identify key points,…
A Closer Look at Visual Manuals.
ERIC Educational Resources Information Center
van der Meij, Hans
1996-01-01
Examines the visual manual genre, discussing main forms and functions of step-by-step and guided tour manuals in detail. Examines whether a visual manual helps computer users complete tasks faster and more accurately than a non-visual manual. Finds no effects on accuracy but 35% faster task execution with visual manuals. Concludes there is no…
Teaching Students with Visual Impairments. Programming for Students with Special Needs. No. 5.
ERIC Educational Resources Information Center
Alberta Dept. of Education, Edmonton. Special Education Branch.
This resource guide offers suggestions and resources to help provide successful school experiences for students who are blind or visually impaired. Individual sections address: (1) the nature of visual impairment, the specific needs and expectations of students with visual impairment, and the educational implications of visual impairment; (2)…
Kruskal, Jonathan B; Reedy, Allen; Pascal, Laurie; Rosen, Max P; Boiselle, Phillip M
2012-01-01
Many hospital radiology departments are adopting "lean" methods developed in automobile manufacturing to improve operational efficiency, eliminate waste, and optimize the value of their services. The lean approach, which emphasizes process analysis, has particular relevance to radiology departments, which depend on a smooth flow of patients and uninterrupted equipment function for efficient operation. However, the application of lean methods to isolated problems is not likely to improve overall efficiency or to produce a sustained improvement. Instead, the authors recommend a gradual but continuous and comprehensive "lean transformation" of work philosophy and workplace culture. Fundamental principles that must consistently be put into action to achieve such a transformation include equal involvement of and equal respect for all staff members, elimination of waste, standardization of work processes, improvement of flow in all processes, use of visual cues to communicate and inform, and use of specific tools to perform targeted data collection and analysis and to implement and guide change. Many categories of lean tools are available to facilitate these tasks: value stream mapping for visualizing the current state of a process and identifying activities that add no value; root cause analysis for determining the fundamental cause of a problem; team charters for planning, guiding, and communicating about change in a specific process; management dashboards for monitoring real-time developments; and a balanced scorecard for strategic oversight and planning in the areas of finance, customer service, internal operations, and staff development. © RSNA, 2012.
Korneeva, E V; Tiunova, A A; Aleksandrov, L I; Golubeva, T B; Anokhin, K V
2014-01-01
The present study analyzed expression of the transcription factors c-Fos and ZENK in the telencephalic auditory centers (field L, caudomedial nidopallium, and caudomedial mesopallium) of 9-day-old pied flycatcher nestlings (Ficedula hypoleuca), centers involved in acoustically guided defense behavior. A species-typical alarm call was presented to the young in three groups: (1) an intact group (sighted control); (2) nestlings visually deprived for a short time just before the experiment (unsighted control); and (3) nestlings visually deprived immediately after hatching (experimental deprivation). Induction of c-Fos as well as ZENK in nestlings from the experimental deprivation group was decreased in both hemispheres compared with the intact group. In the unsighted control group, only a decrease of c-Fos induction was observed, exclusively in the right hemisphere. These findings suggest that limitation of visual input changes the population of neurons involved in acoustically guided behavior, the effect being dependent on the duration of deprivation.
A novel computational model to probe visual search deficits during motor performance
Singh, Tarkeshwar; Fridriksson, Julius; Perry, Christopher M.; Tryon, Sarah C.; Ross, Angela; Fritz, Stacy
2016-01-01
Successful execution of many motor skills relies on well-organized visual search (voluntary eye movements that actively scan the environment for task-relevant information). Although impairments of visual search that result from brain injuries are linked to diminished motor performance, the neural processes that guide visual search within this context remain largely unknown. The first objective of this study was to examine how visual search in healthy adults and stroke survivors is used to guide hand movements during the Trail Making Test (TMT), a neuropsychological task that is a strong predictor of visuomotor and cognitive deficits. Our second objective was to develop a novel computational model to investigate combinatorial interactions between three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing). We predicted that stroke survivors would exhibit deficits in integrating the three underlying processes, resulting in deteriorated overall task performance. We found that normal TMT performance is associated with patterns of visual search that primarily rely on spatial planning and/or working memory (but not peripheral visual processing). Our computational model suggested that abnormal TMT performance following stroke is associated with impairments of visual search that are characterized by deficits integrating spatial planning and working memory. This innovative methodology provides a novel framework for studying how the neural processes underlying visual search interact combinatorially to guide motor performance. NEW & NOTEWORTHY Visual search has traditionally been studied in cognitive and perceptual paradigms, but little is known about how it contributes to visuomotor performance. We have developed a novel computational model to examine how three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing) contribute to visual search during a visuomotor task. We show that deficits integrating spatial planning and working memory underlie abnormal performance in stroke survivors with frontoparietal damage. PMID:27733596
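The three-process decomposition above lends itself to a toy illustration (not the authors' computational model): score each candidate target with weighted terms for spatial planning, working memory, and peripheral visibility, and fixate the highest-scoring one. The weights and functional forms below are assumptions.

```python
import numpy as np

def next_fixation(candidates, gaze, visited, w_plan=1.0, w_mem=1.0, w_periph=1.0):
    """Pick the next fixation target from a toy combination of three processes.

    candidates: (n, 2) array of target coordinates.
    gaze: (2,) current gaze position.
    visited: set of candidate indices already fixated.
    """
    dists = np.linalg.norm(candidates - gaze, axis=1)
    plan = -w_plan * dists                               # spatial planning: prefer nearby targets
    memory = np.array([-w_mem if i in visited else 0.0   # working memory: discourage revisits
                       for i in range(len(candidates))])
    periph = w_periph / (1.0 + dists)                    # peripheral processing: visibility falls with eccentricity
    return int(np.argmax(plan + memory + periph))

# Example: four targets, gaze at the origin, target 0 already visited.
targets = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 1.0], [0.5, 0.5]])
print(next_fixation(targets, np.array([0.0, 0.0]), visited={0}))
```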
Using Visual Imagery in the Classroom.
ERIC Educational Resources Information Center
Grabow, Beverly
1981-01-01
The use of visual imagery, visualization, and guided and unguided fantasy has potential as a teaching tool for use with learning disabled children. Visualization utilized in a gamelike atmosphere can help the student learn new concepts, can positively affect social behaviors, and can help with emotional control. (SB)
Infants' prospective control during object manipulation in an uncertain environment.
Gottwald, Janna M; Gredebäck, Gustaf
2015-08-01
This study investigates how infants use visual and sensorimotor information to prospectively control their actions. We gave 14-month-olds two objects of different weight and observed how high they were lifted, using a Qualisys Motion Capture System. In one condition, the two objects were visually distinct (different color condition); in another, they were visually identical (same color condition). Lifting amplitudes of the first movement unit were analyzed in order to assess prospective control. Results demonstrate that infants lifted a light object higher than a heavy object, especially when vision could be used to assess weight (different color condition). When confronted with two visually identical objects of different weight (same color condition), infants showed a different lifting pattern than what could be observed in the different color condition, expressed by a significant interaction effect between object weight and color condition on lifting amplitude. These results indicate that (a) visual information about object weight can be used to prospectively control lifting actions and that (b) infants are able to prospectively control their lifting actions even without visual information about object weight. We argue that infants, in the absence of reliable visual information about object weight, heighten their dependence on non-visual information (tactile, sensorimotor memory) in order to estimate weight and pre-adjust their lifting actions in a prospective manner.
ERIC Educational Resources Information Center
Cangemi, Sam
This guide describes and illustrates 50 perceptual games for preschool children which may be constructed by teachers. Inexpensive, easily obtained game materials are suggested. The use of tactile and visual perceptual games gives children opportunities to make choices and discriminations, and provides reading readiness experiences. Games depicted…
Attention to body-parts varies with visual preference and verb-effector associations.
Boyer, Ty W; Maouene, Josita; Sethuraman, Nitya
2017-05-01
Theories of embodied conceptual meaning suggest fundamental relations between others' actions, language, and our own actions and visual attention processes. Prior studies have found that when people view an image of a neutral body in a scene they first look toward, in order, the head, torso, hands, and legs. Other studies show associations between action verbs and the body-effectors used in performing the action (e.g., "jump" with feet/legs; "talk" with face/head). In the present experiment, the visual attention of participants was recorded with a remote eye-tracking system while they viewed an image of an actor pantomiming an action and heard a concrete action verb. Participants manually responded whether or not the action image was a good example of the verb they heard. The eye-tracking results confirmed that participants looked at the head most, followed by the hands, and the feet least of all; however, visual attention to each of the body-parts also varied as a function of the effector associated with the spoken verb on image/verb congruent trials, particularly for verbs associated with the legs. Overall, these results suggest that language influences some perceptual processes; however, hearing auditory verbs did not alter the previously reported fundamental hierarchical sequence of directed attention, and fixations on specific body-effectors may not be essential for verb comprehension as peripheral visual cues may be sufficient to perform the task.
Creative Visualization Activities.
ERIC Educational Resources Information Center
Fugitt, Eva D.
1986-01-01
Presents a series of classroom exercises and activities that stimulate children's creativity through the use of visualization. Discusses procedures for guided imagery and offers some examples of "trips" to imaginary places. Proposes visualization as a warm-up exercise before art lessons. (DR)
Primary Visual Cortex as a Saliency Map: A Parameter-Free Prediction and Its Test by Behavioral Data
Zhaoping, Li; Zhe, Li
2015-01-01
It has been hypothesized that neural activities in the primary visual cortex (V1) represent a saliency map of the visual field to exogenously guide attention. This hypothesis has so far provided only qualitative predictions and their confirmations. We report this hypothesis’ first quantitative prediction, derived without free parameters, and its confirmation by human behavioral data. The hypothesis provides a direct link between V1 neural responses to a visual location and the saliency of that location to guide attention exogenously. In a visual input containing many bars, one of them saliently different from all the other bars which are identical to each other, saliency at the singleton’s location can be measured by the shortness of the reaction time in a visual search for singletons. The hypothesis predicts quantitatively the whole distribution of the reaction times to find a singleton unique in color, orientation, and motion direction from the reaction times to find other types of singletons. The prediction matches human reaction time data. A requirement for this successful prediction is a data-motivated assumption that V1 lacks neurons tuned simultaneously to color, orientation, and motion direction of visual inputs. Since evidence suggests that extrastriate cortices do have such neurons, we discuss the possibility that the extrastriate cortices play no role in guiding exogenous attention so that they can be devoted to other functions like visual decoding and endogenous attention. PMID:26441341
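The flavor of the parameter-free prediction can be conveyed with a simple race-model sketch, under the assumption that the reaction time to the triple-feature singleton behaves like the minimum of independently sampled single-feature reaction times; the paper's exact derivation may differ, and the RT samples below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def predicted_triple_rt(rt_color, rt_orient, rt_motion, n_samples=100_000):
    """Monte-Carlo prediction of the triple-feature singleton RT distribution
    as a race (minimum) among independently resampled single-feature RTs."""
    c = rng.choice(rt_color, n_samples)
    o = rng.choice(rt_orient, n_samples)
    m = rng.choice(rt_motion, n_samples)
    return np.minimum(np.minimum(c, o), m)

# Hypothetical single-feature RT samples (in seconds); real data would come
# from the singleton search experiment.
rt_c = rng.gamma(5.0, 0.12, 500) + 0.2
rt_o = rng.gamma(6.0, 0.10, 500) + 0.2
rt_m = rng.gamma(4.0, 0.15, 500) + 0.2
pred = predicted_triple_rt(rt_c, rt_o, rt_m)
print(f"median predicted triple-singleton RT: {np.median(pred):.3f} s")
```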
Acceptance of Dog Guides and Daily Stress Levels of Dog Guide Users and Nonusers
ERIC Educational Resources Information Center
Matsunaka, Kumiko; Koda, Naoko
2008-01-01
The degree of acceptance of dog guides at public facilities, which is required by law in Japan, was investigated, and evidence of rejection was found. Japanese people with visual impairments who used dog guides reported higher daily stress levels than did those who did not use dog guides. (Contains 3 tables and 1 figure.)
ERIC Educational Resources Information Center
National Accreditation Council for Agencies Serving the Blind and Visually Handicapped, New York, NY.
This self-study and evaluation guide on orientation and mobility services (dog guide program emphasis) is one of 28 guides designed for organizations undertaking a self-study as part of the process for accreditation from the National Accreditation Council (NAC) for agencies serving the blind and visually handicapped. Provided are lists of…
Secondary adaptation of memory-guided saccades
Srimal, Riju; Curtis, Clayton E.
2011-01-01
Adaptation of saccade gains in response to errors keeps vision and action co-registered in the absence of awareness or effort. Timing is key, as the visual error must be available shortly after the saccade is generated or adaptation does not occur. Here, we tested the hypothesis that when feedback is delayed, learning still occurs, but does so through small secondary corrective saccades. Using a memory-guided saccade task, we gave feedback about the accuracy of saccades that was falsely displaced by a consistent amount, but only after long delays. Despite the delayed feedback, over time subjects improved in accuracy toward the false feedback. They did so not by adjusting their primary saccades, but via directed corrective saccades made before feedback was given. We propose that saccade learning may be driven by different types of feedback teaching signals. One teaching signal relies upon a tight temporal relation with the saccade and contributes to obligatory learning independent of awareness. When this signal is ineffective due to delayed error feedback, a second compensatory teaching signal enables flexible adjustments to the spatial goal of saccades and helps maintain sensorimotor accuracy. PMID:20803135
Young children's recall and reconstruction of audio and audiovisual narratives.
Gibbons, J; Anderson, D R; Smith, R; Field, D E; Fischer, C
1986-08-01
It has been claimed that the visual component of audiovisual media dominates young children's cognitive processing. This experiment examines the effects of input modality while controlling the complexity of the visual and auditory content and while varying the comprehension task (recall vs. reconstruction). 4- and 7-year-olds were presented brief stories through either audio or audiovisual media. The audio version consisted of narrated character actions and character utterances. The narrated actions were matched to the utterances on the basis of length and propositional complexity. The audiovisual version depicted the actions visually by means of stop animation instead of by auditory narrative statements. The character utterances were the same in both versions. Audiovisual input produced superior performance on explicit information in the 4-year-olds and produced more inferences at both ages. Because performance on utterances was superior in the audiovisual condition as compared to the audio condition, there was no evidence that visual input inhibits processing of auditory information. Actions were more likely to be produced by the younger children than utterances, regardless of input medium, indicating that prior findings of visual dominance may have been due to the salience of narrative action. Reconstruction, as compared to recall, produced superior depiction of actions at both ages as well as more constrained relevant inferences and narrative conventions.
Clothing Construction: An Instructional Package with Adaptations for Visually Impaired Individuals.
ERIC Educational Resources Information Center
Crawford, Glinda B.; And Others
Developed for the home economics teacher of mainstreamed visually impaired students, this guide provides clothing instruction lesson plans for the junior high level. First, teacher guidelines are given, including characteristics of the visually impaired, orienting such students to the classroom, orienting class members to the visually impaired,…
Functional Dissociation between Perception and Action Is Evident Early in Life
ERIC Educational Resources Information Center
Hadad, Bat-Sheva; Avidan, Galia; Ganel, Tzvi
2012-01-01
The functional distinction between vision for perception and vision for action is well documented in the mature visual system. Ganel and colleagues recently provided direct evidence for this dissociation, showing that while visual processing for perception follows Weber's fundamental law of psychophysics, action violates this law. We tracked the…
Posture-based processing in visual short-term memory for actions.
Vicary, Staci A; Stevens, Catherine J
2014-01-01
Visual perception of human action involves both form and motion processing, which may rely on partially dissociable neural networks. If form and motion are dissociable during visual perception, then they may also be dissociable during their retention in visual short-term memory (VSTM). To elicit form-plus-motion and form-only processing of dance-like actions, individual action frames can be presented in the correct or incorrect order. The former appears coherent and should elicit action perception, engaging both form and motion pathways, whereas the latter appears incoherent and should elicit posture perception, engaging form pathways alone. It was hypothesized that, if form and motion are dissociable in VSTM, then recognition of static body posture should be better after viewing incoherent than after viewing coherent actions. However, as VSTM is capacity limited, posture-based encoding of actions may be ineffective with increased number of items or frames. Using a behavioural change detection task, recognition of a single test posture was significantly more likely after studying incoherent than after studying coherent stimuli. However, this effect only occurred for spans of two (but not three) items and for stimuli with five (but not nine) frames. As in perception, posture and motion are dissociable in VSTM.
Neural Integration in Body Perception.
Ramsey, Richard
2018-06-19
The perception of other people is instrumental in guiding social interactions. For example, the appearance of the human body cues a wide range of inferences regarding sex, age, health, and personality, as well as emotional state and intentions, which influence social behavior. To date, most neuroscience research on body perception has aimed to characterize the functional contribution of segregated patches of cortex in the ventral visual stream. In light of the growing prominence of network architectures in neuroscience, the current article reviews neuroimaging studies that measure functional integration between different brain regions during body perception. The review demonstrates that body perception is not restricted to processing in the ventral visual stream but instead reflects a functional alliance between the ventral visual stream and extended neural systems associated with action perception, executive functions, and theory of mind. Overall, these findings demonstrate how body percepts are constructed through interactions in distributed brain networks and underscore that functional segregation and integration should be considered together when formulating neurocognitive theories of body perception. Insight from such an updated model of body perception generalizes to inform the organizational structure of social perception and cognition more generally and also informs disorders of body image, such as anorexia nervosa, which may rely on atypical integration of body-related information.
Action-outcome learning and prediction shape the window of simultaneity of audiovisual outcomes.
Desantis, Andrea; Haggard, Patrick
2016-08-01
To form a coherent representation of the objects around us, the brain must group the different sensory features composing these objects. Here, we investigated whether actions contribute in this grouping process. In particular, we assessed whether action-outcome learning and prediction contribute to audiovisual temporal binding. Participants were presented with two audiovisual pairs: one pair was triggered by a left action, and the other by a right action. In a later test phase, the audio and visual components of these pairs were presented at different onset times. Participants judged whether they were simultaneous or not. To assess the role of action-outcome prediction on audiovisual simultaneity, each action triggered either the same audiovisual pair as in the learning phase ('predicted' pair), or the pair that had previously been associated with the other action ('unpredicted' pair). We found the time window within which auditory and visual events appeared simultaneous increased for predicted compared to unpredicted pairs. However, no change in audiovisual simultaneity was observed when audiovisual pairs followed visual cues, rather than voluntary actions. This suggests that only action-outcome learning promotes temporal grouping of audio and visual effects. In a second experiment we observed that changes in audiovisual simultaneity do not only depend on our ability to predict what outcomes our actions generate, but also on learning the delay between the action and the multisensory outcome. When participants learned that the delay between action and audiovisual pair was variable, the window of audiovisual simultaneity for predicted pairs increased, relative to a fixed action-outcome pair delay. This suggests that participants learn action-based predictions of audiovisual outcome, and adapt their temporal perception of outcome events based on such predictions. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Simple Smartphone-Based Guiding System for Visually Impaired People
Lin, Bor-Shing; Lee, Cheng-Che; Chiang, Pei-Ying
2017-01-01
Visually impaired people are often unaware of dangers in front of them, even in familiar environments. Furthermore, in unfamiliar environments, such people require guidance to reduce the risk of colliding with obstacles. This study proposes a simple smartphone-based guiding system for solving the navigation problems for visually impaired people and achieving obstacle avoidance to enable visually impaired people to travel smoothly from a beginning point to a destination with greater awareness of their surroundings. In this study, a computer image recognition system and smartphone application were integrated to form a simple assisted guiding system. Two operating modes, online mode and offline mode, can be chosen depending on network availability. When the system begins to operate, the smartphone captures the scene in front of the user and sends the captured images to the backend server to be processed. The backend server uses the faster region convolutional neural network algorithm or the you only look once algorithm to recognize multiple obstacles in every image, and it subsequently sends the results back to the smartphone. The results of obstacle recognition in this study reached 60%, which is sufficient for assisting visually impaired people in realizing the types and locations of obstacles around them. PMID:28608811
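The capture-upload-detect-return loop described above can be sketched as a minimal backend service. Flask, the /detect endpoint, the JSON schema, and the run_detector placeholder are assumptions for illustration; the study's actual server wraps a Faster R-CNN or YOLO model.

```python
# Minimal server-side sketch of the guiding system's backend (assumed design).
from flask import Flask, request, jsonify

app = Flask(__name__)

def run_detector(image_bytes):
    # Placeholder: a real implementation would decode the image sent by the
    # smartphone and run a trained detector (e.g., Faster R-CNN or YOLO),
    # returning (label, confidence, [x1, y1, x2, y2]) tuples.
    return [("chair", 0.87, [120, 40, 300, 220])]

@app.route("/detect", methods=["POST"])
def detect():
    detections = run_detector(request.data)  # raw image bytes from the phone
    return jsonify([
        {"label": lbl, "confidence": conf, "box": box}
        for lbl, conf, box in detections
    ])

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

In this assumed design, the smartphone would POST each captured frame to the endpoint and then announce or vibrate according to the returned labels and box positions.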
Bodily action penetrates affective perception
Rigutti, Sara; Gerbino, Walter
2016-01-01
Fantoni & Gerbino (2014) showed that subtle postural shifts associated with reaching can have a strong hedonic impact and affect how actors experience facial expressions of emotion. Using a novel Motor Action Mood Induction Procedure (MAMIP), they found consistent congruency effects in participants who performed a facial emotion identification task after a sequence of visually-guided reaches: a face perceived as neutral in a baseline condition appeared slightly happy after comfortable actions and slightly angry after uncomfortable actions. However, skeptics about the penetrability of perception (Zeimbekis & Raftopoulos, 2015) would consider such evidence insufficient to demonstrate that observer’s internal states induced by action comfort/discomfort affect perception in a top-down fashion. The action-modulated mood might have produced a back-end memory effect capable of affecting post-perceptual and decision processing, but not front-end perception. Here, we present evidence that performing a facial emotion detection (not identification) task after MAMIP exhibits systematic mood-congruent sensitivity changes, rather than response bias changes attributable to cognitive set shifts; i.e., we show that observer’s internal states induced by bodily action can modulate affective perception. The detection threshold for happiness was lower after fifty comfortable than uncomfortable reaches; while the detection threshold for anger was lower after fifty uncomfortable than comfortable reaches. Action valence induced an overall sensitivity improvement in detecting subtle variations of congruent facial expressions (happiness after positive comfortable actions, anger after negative uncomfortable actions), in the absence of significant response bias shifts. Notably, both comfortable and uncomfortable reaches impact sensitivity in an approximately symmetric way relative to a baseline inaction condition. All of these constitute compelling evidence of a genuine top-down effect on perception: specifically, facial expressions of emotion are penetrable by action-induced mood. Affective priming by action valence is a candidate mechanism for the influence of observer’s internal states on properties experienced as phenomenally objective and yet loaded with meaning. PMID:26893964
The Visual Geophysical Exploration Environment: A Multi-dimensional Scientific Visualization
NASA Astrophysics Data System (ADS)
Pandya, R. E.; Domenico, B.; Murray, D.; Marlino, M. R.
2003-12-01
The Visual Geophysical Exploration Environment (VGEE) is an online learning environment designed to help undergraduate students understand fundamental Earth system science concepts. The guiding principle of the VGEE is the importance of hands-on interaction with scientific visualization and data. The VGEE consists of four elements: 1) an online, inquiry-based curriculum for guiding student exploration; 2) a suite of El Nino-related data sets adapted for student use; 3) a learner-centered interface to a scientific visualization tool; and 4) a set of concept models (interactive tools that help students understand fundamental scientific concepts). There are two key innovations featured in this interactive poster session. One is the integration of concept models and the visualization tool. Concept models are simple, interactive, Java-based illustrations of fundamental physical principles. We developed eight concept models and integrated them into the visualization tool to enable students to probe data. The ability to probe data using a concept model addresses the common problem of transfer: the difficulty students have in applying theoretical knowledge to everyday phenomena. The other innovation is a visualization environment and data that are discoverable in digital libraries, and installed, configured, and used for investigations over the web. By collaborating with the Integrated Data Viewer developers, we were able to embed a web-launchable visualization tool and access to distributed data sets into the online curricula. The Thematic Real-time Environmental Data Distributed Services (THREDDS) project is working to provide catalogs of datasets that can be used in new VGEE curricula under development. By cataloging these curricula in the Digital Library for Earth System Education (DLESE), learners and educators can discover the data and visualization tool within a framework that guides their use.
Effort-based cost-benefit valuation and the human brain
Croxson, Paula L; Walton, Mark E; O'Reilly, Jill X; Behrens, Timothy EJ; Rushworth, Matthew FS
2010-01-01
In both the wild and the laboratory, animals' preferences for one course of action over another reflect not just reward expectations but also the cost in terms of effort that must be invested in pursuing the course of action. The ventral striatum and dorsal anterior cingulate cortex (ACCd) are implicated in the making of cost-benefit decisions in the rat but there is little information about how effort costs are processed and influence calculations of expected net value in other mammals including the human. We carried out a functional magnetic resonance imaging (fMRI) study to determine whether and where activity in the human brain was available to guide effort-based cost-benefit valuation. Subjects were scanned while they performed a series of effortful actions to obtain secondary reinforcers. At the beginning of each trial, subjects were presented with one of eight different visual cues which they had learned indicated how much effort the course of action would entail and how much reward could be expected at its completion. Cue-locked activity in the ventral striatum and midbrain reflected the net value of the course of action, signaling the expected amount of reward discounted by the amount of effort to be invested. Activity in ACCd also reflected the interaction of both expected reward and effort costs. Posterior orbitofrontal and insular activity, however, only reflected the expected reward magnitude. The ventral striatum and anterior cingulate cortex may be the substrate of effort-based cost-benefit valuation in primates as well as in rats. PMID:19357278
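The notion of "reward discounted by effort" can be written down in one line; the hyperbolic form and discount rate below are illustrative assumptions, not the valuation model fitted in the study.

```python
def net_value(expected_reward, effort, k=0.5):
    """Toy effort-discounted value: reward divided by (1 + k * effort).

    The hyperbolic form and k are assumptions; a linear cost
    (reward - k * effort) is an equally common choice.
    """
    return expected_reward / (1.0 + k * effort)

# Hypothetical cues crossing reward magnitude with low/high effort.
for reward in (1, 2, 3, 4):
    for effort in (1, 5):
        print(f"reward={reward}, effort={effort}: net value={net_value(reward, effort):.2f}")
```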
Fee, Michale S.
2012-01-01
In its simplest formulation, reinforcement learning is based on the idea that if an action taken in a particular context is followed by a favorable outcome, then, in the same context, the tendency to produce that action should be strengthened, or reinforced. While reinforcement learning forms the basis of many current theories of basal ganglia (BG) function, these models do not incorporate distinct computational roles for signals that convey context, and those that convey what action an animal takes. Recent experiments in the songbird suggest that vocal-related BG circuitry receives two functionally distinct excitatory inputs. One input is from a cortical region that carries context information about the current “time” in the motor sequence. The other is an efference copy of motor commands from a separate cortical brain region that generates vocal variability during learning. Based on these findings, I propose here a general model of vertebrate BG function that combines context information with a distinct motor efference copy signal. The signals are integrated by a learning rule in which efference copy inputs gate the potentiation of context inputs (but not efference copy inputs) onto medium spiny neurons in response to a rewarded action. The hypothesis is described in terms of a circuit that implements the learning of visually guided saccades. The model makes testable predictions about the anatomical and functional properties of hypothesized context and efference copy inputs to the striatum from both thalamic and cortical sources. PMID:22754501
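The proposed rule, in which the efference copy gates potentiation of context synapses onto medium spiny neurons when the action is rewarded, amounts to a three-factor update. The sketch below is a schematic rendering under simplifying assumptions (binary signals, a scalar reward), not the paper's formal model.

```python
import numpy as np

def update_context_weights(w_context, context, efference_copy, reward, lr=0.1):
    """Three-factor update: context synapses onto a medium spiny neuron are
    potentiated only when the efference copy is active AND the action was
    rewarded; efference copy synapses themselves are left unchanged."""
    return w_context + lr * reward * efference_copy * context

# One trial: context units 0 and 2 active, the efference copy gate is on,
# and the action was rewarded.
w = np.zeros(4)
context = np.array([1.0, 0.0, 1.0, 0.0])
w = update_context_weights(w, context, efference_copy=1.0, reward=1.0)
print(w)  # [0.1 0.  0.1 0. ]
```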
Hongzhang, Hong; Xiaojuan, Qin; Shengwei, Zhang; Feixiang, Xiang; Yujie, Xu; Haibing, Xiao; Gallina, Kazobinka; Wen, Ju; Fuqing, Zeng; Xiaoping, Zhang; Mingyue, Ding; Huageng, Liang; Xuming, Zhang
2018-05-17
To evaluate the effect of real-time three-dimensional (3D) ultrasonography (US) in guiding percutaneous nephrostomy (PCN). A hydronephrosis model was devised in which the ureters of 16 beagles were obstructed. The beagles were divided equally into groups 1 and 2. In group 1, PCN was performed under real-time 3D US guidance, while in group 2 PCN was guided by two-dimensional (2D) US. Visualization of the needle tract, puncture duration, and number of puncture attempts were recorded for the two groups. In group 1, the score for visualization of the needle tract, puncture duration, and number of puncture attempts were 3, 7.3 ± 3.1 s, and one attempt, respectively. In group 2, the respective results were 1.4 ± 0.5, 21.4 ± 5.8 s, and 2.1 ± 0.6 attempts. Visualization of the needle tract in group 1 was superior to that in group 2, and both puncture duration and number of puncture attempts were lower in group 1 than in group 2. Real-time 3D US-guided PCN is superior to 2D US-guided PCN in terms of visualization of the needle tract and the targeted pelvicalyceal system, leading to quicker puncture. Real-time 3D US-guided puncture of the kidney holds great promise for clinical implementation in PCN. © 2018 The Authors BJU International © 2018 BJU International Published by John Wiley & Sons Ltd.
Do You "See" What I "See"? Differentiation of Visual Action Words
ERIC Educational Resources Information Center
Dickinson, Joël; Cirelli, Laura; Szeligo, Frank
2014-01-01
Dickinson and Szeligo ("Can J Exp Psychol" 62(4):211--222, 2008) found that processing time for simple visual stimuli was affected by the visual action participants had been instructed to perform on these stimuli (e.g., see, distinguish). It was concluded that these effects reflected the differences in the durations of these various…
Toward a self-organizing pre-symbolic neural model representing sensorimotor primitives
Zhong, Junpei; Cangelosi, Angelo; Wermter, Stefan
2014-01-01
The acquisition of symbolic and linguistic representations of sensorimotor behavior is a cognitive process performed by an agent when it is executing and/or observing its own and others' actions. According to Piaget's theory of cognitive development, these representations develop during the sensorimotor stage and the pre-operational stage. We propose a model that relates the conceptualization of the higher-level information from visual stimuli to the development of ventral/dorsal visual streams. This model employs a neural network architecture incorporating a predictive sensory module based on an RNNPB (Recurrent Neural Network with Parametric Biases) and a horizontal product model. We exemplify this model through a robot passively observing an object to learn its features and movements. During the learning process of observing sensorimotor primitives, i.e., observing a set of trajectories of arm movements and its oriented object features, the pre-symbolic representation is self-organized in the parametric units. These representational units act as bifurcation parameters, guiding the robot to recognize and predict various learned sensorimotor primitives. The pre-symbolic representation also accounts for the learning of sensorimotor primitives in a latent learning context. PMID:24550798
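A minimal rendering of the parametric-bias idea (omitting the horizontal product module and the predictive training loop) is a recurrent cell whose update receives a small per-sequence bias vector; after training, that vector self-organizes into a compact code for each sensorimotor primitive. Dimensions and initialization below are illustrative assumptions.

```python
import numpy as np

class RNNPBCell:
    """Toy recurrent cell with parametric bias (PB) units appended to the input."""

    def __init__(self, n_in, n_hidden, n_pb, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_in + n_pb))
        self.W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
        self.b = np.zeros(n_hidden)

    def step(self, x, h, pb):
        """One recurrent step; pb is the per-primitive parametric bias vector."""
        pre = self.W_in @ np.concatenate([x, pb]) + self.W_rec @ h + self.b
        return np.tanh(pre)

# Rolling the cell over a hypothetical observed trajectory with one PB code.
cell = RNNPBCell(n_in=3, n_hidden=16, n_pb=2)
pb = np.array([0.3, -0.7])   # in the full model this code is learned per primitive
h = np.zeros(16)
for x_t in np.random.default_rng(1).normal(size=(10, 3)):
    h = cell.step(x_t, h, pb)
print(h.shape)  # (16,)
```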
Enhanced Lesion Visualization in Image-Guided Noninvasive Surgery With Ultrasound Phased Arrays
2001-10-25
Yao, Hui; Phukpattaranont, Pornchai; Ebbini, Emad S. (Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455)
We describe dual-mode ultrasound phased…
Supervised guiding long-short term memory for image caption generation based on object classes
NASA Astrophysics Data System (ADS)
Wang, Jian; Cao, Zhiguo; Xiao, Yang; Qi, Xinyuan
2018-03-01
Existing models of image caption generation suffer from attenuation of visual semantic information and from errors in the guidance information. To address these problems, we propose a supervised guiding Long Short Term Memory model based on object classes, named S-gLSTM for short. It uses the object detection results from R-FCN as supervisory information with high confidence, and updates the guidance word set by judging whether the last output matches the supervisory information. S-gLSTM learns how to extract the currently relevant information from the image's visual semantic information based on the guidance word set. This information is fed into the S-gLSTM at each iteration as guidance, to steer caption generation. To acquire the text-related visual semantic information, the S-gLSTM fine-tunes the weights of the network through back-propagation of the guiding loss. Complementing the guidance information at each iteration solves the problem of visual semantic information attenuation in the traditional LSTM model. In addition, the supervised guidance information in our model reduces the impact of mismatched words on caption generation. We test our model on the MSCOCO2014 dataset and obtain better performance than state-of-the-art models.
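The core idea of re-injecting detector-derived guidance at every decoding step can be sketched as below; the gating form, shapes, and the way the guidance vector is concatenated are assumptions for illustration and do not reproduce the paper's S-gLSTM or its R-FCN supervision.

```python
import numpy as np

# Schematic of a guided LSTM step: a guidance vector g (e.g., an embedding of
# object classes reported by a detector) is concatenated with the word
# embedding at every step, so visual semantic information is re-injected
# instead of decaying over the caption. Shapes and names are illustrative.

rng = np.random.default_rng(1)
d_w, d_g, d_h = 32, 16, 64                     # word, guidance, hidden sizes
W = rng.normal(0, 0.1, (4 * d_h, d_w + d_g + d_h))  # all four gates computed jointly
b = np.zeros(4 * d_h)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def guided_lstm_step(w_t, g_t, h, c):
    z = np.concatenate([w_t, g_t, h])          # guidance enters every step
    gates = W @ z + b
    i, f, o, u = np.split(gates, 4)
    i, f, o, u = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(u)
    c = f * c + i * u                          # standard LSTM cell update
    h = o * np.tanh(c)
    return h, c

h, c = np.zeros(d_h), np.zeros(d_h)
w_t = rng.normal(size=d_w)                     # current word embedding (stand-in)
g_t = rng.normal(size=d_g)                     # guidance from detected classes (stand-in)
h, c = guided_lstm_step(w_t, g_t, h, c)
print(h.shape)                                 # (64,)
```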
Buchanan, John J
2016-01-01
The primary goal of this chapter is to merge the visual perception perspective of observational learning and the coordination dynamics theory of pattern formation in perception and action. Emphasis is placed on identifying movement features that constrain and inform action-perception and action-production processes. Two sources of visual information are examined: relative motion direction and relative phase. The visual perception perspective states that the topological features of relative motion between limbs and joints remain invariant across an actor's motion and therefore are available for pickup by an observer. Relative phase has been put forth as an informational variable that links perception to action within the coordination dynamics theory. A primary assumption of the coordination dynamics approach is that environmental information is meaningful only in terms of the behavior it modifies. Across a series of single-limb tasks and bimanual tasks it is shown that the relative motion and relative phase between limbs and joints are picked up through visual processes and support observational learning of motor skills. Moreover, internal estimations of motor skill proficiency and competency are linked to the informational content found in relative motion and relative phase. Thus, the chapter links action to perception and vice versa and also links cognitive evaluations to the coordination dynamics that support action-perception and action-production processes.
Amano, Kaoru; Kimura, Toshitaka; Nishida, Shin'ya; Takeda, Tsunehiro; Gomi, Hiroaki
2009-02-01
The human brain uses visual motion inputs not only for generating a subjective sensation of motion but also for directly guiding involuntary actions. For instance, during arm reaching, a large-field visual motion is quickly and involuntarily transformed into a manual response in the direction of visual motion (the manual following response, MFR). Previous attempts to correlate motion-evoked cortical activities, revealed by brain imaging techniques, with conscious motion perception have met with only partial success. In contrast, here we show a surprising degree of similarity between the MFR and the population neural activity measured by magnetoencephalography (MEG). We measured the MFR and MEG responses induced by the same motion onset of a large-field sinusoidal drifting grating while varying the spatiotemporal frequency of the grating. The initial transient phase of these two responses had very similar spatiotemporal tunings. Specifically, both the MEG and MFR amplitudes increased as the spatial frequency was decreased to, at most, 0.05 c/deg, or as the temporal frequency was increased to, at least, 10 Hz. We also found quantitative agreement in peak latency (approximately 100-150 ms) and correlated changes in latency between MEG and MFR as the spatiotemporal frequency was varied. In comparison with these two responses, conscious visual motion detection is known to be most sensitive (i.e., to have the lowest detection threshold) at higher spatial frequencies and to have longer and more variable response latencies. Our results suggest a close relationship between the properties of involuntary motor responses and motion-evoked cortical activity as reflected by the MEG.
Abnormal functional connectivity density in children with anisometropic amblyopia at resting-state.
Wang, Tianyue; Li, Qian; Guo, Mingxia; Peng, Yanmin; Li, Qingji; Qin, Wen; Yu, Chunshui
2014-05-14
Amblyopia is a developmental disorder resulting from anomalous binocular visual input in early life. Task-based neuroimaging studies have widely investigated cortical functional impairments in amblyopia, but changes in spontaneous neuronal functional activity in amblyopia remain largely unknown. In the present study, functional connectivity density (FCD) mapping, an ultrafast data-driven method based on fMRI, was applied for the first time to investigate changes in cortical functional connectivity in amblyopia during the resting state. We quantified and compared both short- and long-range FCD in the brains of both children with anisometropic amblyopia (AAC) and normally sighted children (NSC). In contrast to the NSC, the AAC showed significantly decreased short-range FCD in the inferior temporal/fusiform gyri, parieto-occipital and rostrolateral prefrontal cortices, as well as decreased long-range FCD in the premotor cortex, dorsal inferior parietal lobule, frontal-insular and dorsal prefrontal cortices. Furthermore, most regions with reduced long-range FCD in the AAC showed decreased functional connectivity with occipital and posterior parietal cortices in the AAC. The results suggest that chronically poor visual input in amblyopia not only impairs the brain's short-range functional connections in visual pathways and in the frontal cortex, which is important for cognitive control, but also affects long-range functional connections among the visual areas, posterior parietal and frontal cortices that subserve visuomotor and visually guided actions, visuospatial attention modulation and the integration of salient information. This study provides evidence for abnormal spontaneous brain activity in amblyopia. Copyright © 2014 Elsevier B.V. All rights reserved.
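As a rough illustration of the FCD idea, the sketch below counts, for each voxel, how many other voxels exceed a correlation threshold, splitting the counts into short- and long-range by distance; the threshold, radius, and synthetic data are illustrative assumptions rather than the study's parameters.

```python
import numpy as np

# Simplified functional connectivity density (FCD): for each voxel, count the
# voxels whose resting-state time courses correlate above a threshold.
# Connections within a radius count as short-range, the rest as long-range.

rng = np.random.default_rng(2)
n_vox, n_t = 500, 200
ts = rng.normal(size=(n_vox, n_t))             # voxel time series (stand-in data)
coords = rng.uniform(0, 100, size=(n_vox, 3))  # voxel coordinates in mm

r = np.corrcoef(ts)                            # voxel-by-voxel correlation matrix
np.fill_diagonal(r, 0.0)
connected = r > 0.25                           # correlation threshold (illustrative)

dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
local = dist <= 14.0                           # short-range neighbourhood radius in mm

short_fcd = (connected & local).sum(axis=1)    # per-voxel short-range FCD
long_fcd = (connected & ~local).sum(axis=1)    # per-voxel long-range FCD
print(short_fcd[:5], long_fcd[:5])
```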
Groundwater: A Community Action Guide.
ERIC Educational Resources Information Center
Boyd, Susan, Ed.; And Others
Designed to be a guide for community action, this booklet examines issues and trends related to groundwater contamination. Basic concepts about groundwater and information about problems affecting it are covered under the categories of (1) what is groundwater? (2) availability and depletion; (3) quality and contamination; (4) public health…
Self-organizing neural integration of pose-motion features for human action recognition
Parisi, German I.; Weber, Cornelius; Wermter, Stefan
2015-01-01
The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its remarkable ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. During training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results for a public benchmark of domestic daily actions. PMID:26106323
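The growth rule of a GWR network can be condensed as follows; the thresholds, learning rates, and habituation scheme shown here are illustrative simplifications of the standard GWR algorithm, not the authors' exact configuration.

```python
import numpy as np

# Condensed Growing When Required (GWR) update: if the best-matching node
# represents the input poorly (low activation) and has already been trained
# often (low habituation), insert a new node between the input and that node;
# otherwise move the winner toward the input. Parameters are illustrative.

rng = np.random.default_rng(3)
dim = 4
nodes = [rng.normal(size=dim), rng.normal(size=dim)]   # initial prototypes
habituation = [1.0, 1.0]                               # 1 = novel, decays with use
A_T, H_T = 0.85, 0.3                                   # activation / habituation thresholds
eps_w, tau = 0.1, 0.05                                 # learning rate, habituation decay

def gwr_step(x):
    d = [np.linalg.norm(x - w) for w in nodes]
    b = int(np.argmin(d))                              # best-matching unit
    activation = np.exp(-d[b])
    if activation < A_T and habituation[b] < H_T:
        nodes.append((x + nodes[b]) / 2.0)             # grow the network
        habituation.append(1.0)
    else:
        nodes[b] += eps_w * habituation[b] * (x - nodes[b])   # adapt the winner
        habituation[b] = max(0.0, habituation[b] - tau)

for _ in range(200):
    gwr_step(rng.normal(size=dim))
print("nodes after training:", len(nodes))
```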
THE CLEAN ENERGY-ENVIRONMENT GUIDE TO ACTION ...
The Guide to Action identifies and describes sixteen clean energy policies and strategies that are delivering economic and environmental results for states. For each policy, the Guide describes: Objectives and benefits of the policy; Examples of states that have implemented the policy; Responsibilities of key players at the state level, including typical roles of the main stakeholders; Opportunities to coordinate implementation with other federal and state policies, partnerships and technical assistance resources; Best practices for policy design, implementation, and evaluation, including state examples; Action steps for states to take when adopting or modifying their clean energy policies, based on existing state experiences; Resources for additional information on individual state policies, legislative and regulatory language, and analytical tools and methods. States participating in the Clean Energy-Environment State Partnership Program will use the Guide to Action to: Develop their own Clean Energy-Environment Action Plan that is appropriate to their state; Identify the roles and responsibilities of key decision-makers, such as environmental regulators, state legislatures, public utility commissioners, and state energy offices; Access and apply technical assistance resources, models, and tools available for state-specific analyses and program implementation; and Learn from each other as they develop their own clean energy programs and policies.
ERIC Educational Resources Information Center
Laakso, Mikko-Jussi; Myller, Niko; Korhonen, Ari
2009-01-01
In this paper, two emerging learning and teaching methods have been studied: collaboration in concert with algorithm visualization. When visualizations have been employed in collaborative learning, collaboration introduces new challenges for the visualization tools. In addition, new theories are needed to guide the development and research of the…
Bullying 101: The Club Crew's Guide to Bullying Prevention
ERIC Educational Resources Information Center
PACER Center, 2013
2013-01-01
"Bullying 101" is the Club Crew's Guide to Bullying Prevention. A visually-friendly, age-appropriate, 16-page colorful guide for students to read or for parents to use when talking with children, this guide describes and explains what bullying is and is not, the roles of other students, and tips on what each student can do to prevent…
Taking Action: An Educator's Guide to Involving Students in Environmental Projects
ERIC Educational Resources Information Center
Council for Environmental Education, 2012
2012-01-01
Developed in cooperation with the World Wildlife Fund, "Taking Action" inspires ideas and provides models for conducting effective environmental projects--projects that dynamically engage students from start to finish. From adopting species to protecting habitats to saving energy and creating publications, this guide will help educators plan,…
ERIC Educational Resources Information Center
Snyder, Sarah A.
This teacher's guide presents teaching suggestions and presentation materials about citizen action in the global environment. Focusing on the nongovernmental organizations (NGOs), the lessons describe the roles NGOs play in positively influencing future trends in social development, natural resources management, and environmental quality. NGOs are…
Action Guide for Emergency Management at Institutions of Higher Education
ERIC Educational Resources Information Center
Office of Safe and Drug-Free Schools, US Department of Education, 2010
2010-01-01
This "Action Guide for Emergency Management at Higher Education Institutions" has been developed to give higher education institutions a useful resource in the field of emergency management. It is intended for community colleges, four-year colleges and universities, graduate schools, and research institutions associated with higher education…
Hazardous Waste and You. A Teacher's Guide.
ERIC Educational Resources Information Center
Ontario Waste Management Corp., Toronto.
This teaching guide provides an interactive introduction to hazardous waste, with particular emphasis on personal responsibility and action. Nine lessons engage advanced grade 10 and grade 11-12 science students in group discussions and actions that help them develop awareness of hazardous waste, understanding of the hazardous waste situation in…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-15
... Decommissioning of Nuclear Power Reactors AGENCY: Nuclear Regulatory Commission. ACTION: Draft regulatory guide... draft regulatory guide (DG) DG-1271 ``Decommissioning of Nuclear Power Reactors.'' This guide describes... Regulatory Guide 1.184, ``Decommissioning of Nuclear Power Reactors,'' dated July 2000. This proposed...
A Neural Basis of Facial Action Recognition in Humans
Srinivasan, Ramprakash; Golomb, Julie D.
2016-01-01
By combining different facial muscle actions, called action units, humans can produce an extraordinarily large number of facial expressions. Computational models and studies in cognitive science and social psychology have long hypothesized that the brain needs to visually interpret these action units to understand other people's actions and intentions. Surprisingly, no studies have identified the neural basis of the visual recognition of these action units. Here, using functional magnetic resonance imaging and an innovative machine learning analysis approach, we identify a consistent and differential coding of action units in the brain. Crucially, in a brain region thought to be responsible for the processing of changeable aspects of the face, multivoxel pattern analysis could decode the presence of specific action units in an image. This coding was found to be consistent across people, facilitating the estimation of the perceived action units on participants not used to train the multivoxel decoder. Furthermore, this coding of action units was identified when participants attended to the emotion category of the facial expression, suggesting an interaction between the visual analysis of action units and emotion categorization as predicted by the computational models mentioned above. These results provide the first evidence for a representation of action units in the brain and suggest a mechanism for the analysis of large numbers of facial actions and a loss of this capacity in psychopathologies. SIGNIFICANCE STATEMENT Computational models and studies in cognitive and social psychology propound that visual recognition of facial expressions requires an intermediate step to identify visible facial changes caused by the movement of specific facial muscles. Because facial expressions are indeed created by moving one's facial muscles, it is logical to assume that our visual system solves this inverse problem. Here, using an innovative machine learning method and neuroimaging data, we identify for the first time a brain region responsible for the recognition of actions associated with specific facial muscles. Furthermore, this representation is preserved across subjects. Our machine learning analysis does not require mapping the data to a standard brain and may serve as an alternative to hyperalignment. PMID:27098688
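Cross-subject multivoxel pattern decoding of this kind can be sketched with a leave-one-subject-out linear classifier, as below; the data are synthetic and logistic regression stands in for whatever decoder the study actually used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Sketch of cross-subject multivoxel pattern decoding: train a linear
# classifier to detect whether a given action unit is present from ROI voxel
# patterns, testing on a held-out subject in each fold. Data are synthetic.

rng = np.random.default_rng(4)
n_subj, n_trials, n_vox = 10, 40, 120
X = rng.normal(size=(n_subj * n_trials, n_vox))        # voxel patterns per trial
y = rng.integers(0, 2, size=n_subj * n_trials)         # action unit present / absent
groups = np.repeat(np.arange(n_subj), n_trials)        # subject labels

X[y == 1, :10] += 0.5                                   # weak synthetic signal in 10 voxels

scores = cross_val_score(LogisticRegression(max_iter=2000), X, y,
                         groups=groups, cv=LeaveOneGroupOut())
print("mean cross-subject accuracy:", scores.mean())
```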
When the Penny Drops: Reframing Under Stress and Ambiguity
2012-01-01
…phenomenon of "penny dropping," i.e., replacing one conceptual frame that informs understanding and guides action by another, by investigating how officers… action which proved to be inaccurate to a more valid Frame B. Consistent with this objective, the three concepts that drove the inductive analysis…
ERIC Educational Resources Information Center
Stevens, J.A.
2005-01-01
Four experiments were completed to characterize the utilization of visual imagery and motor imagery during the mental representation of human action. In Experiment 1, movement time functions for a motor imagery human locomotion task conformed to a speed-accuracy trade-off similar to Fitts' Law, whereas those for a visual imagery object motion task…
ERIC Educational Resources Information Center
Humphreys, Glyn W.; Wulff, Melanie; Yoon, Eun Young; Riddoch, M. Jane
2010-01-01
Two experiments are reported that use patients with visual extinction to examine how visual attention is influenced by action information in images. In Experiment 1 patients saw images of objects that were either correctly or incorrectly colocated for action, with the objects held by hands that were congruent or incongruent with those used…
Perceptual training yields rapid improvements in visually impaired youth.
Nyquist, Jeffrey B; Lappin, Joseph S; Zhang, Ruyuan; Tadin, Duje
2016-11-30
Visual function demands coordinated responses to information over a wide field of view, involving both central and peripheral vision. Visually impaired individuals often seem to underutilize peripheral vision, even in absence of obvious peripheral deficits. Motivated by perceptual training studies with typically sighted adults, we examined the effectiveness of perceptual training in improving peripheral perception of visually impaired youth. Here, we evaluated the effectiveness of three training regimens: (1) an action video game, (2) a psychophysical task that combined attentional tracking with a spatially and temporally unpredictable motion discrimination task, and (3) a control video game. Training with both the action video game and modified attentional tracking yielded improvements in visual performance. Training effects were generally larger in the far periphery and appear to be stable 12 months after training. These results indicate that peripheral perception might be under-utilized by visually impaired youth and that this underutilization can be improved with only ~8 hours of perceptual training. Moreover, the similarity of improvements following attentional tracking and action video-game training suggest that well-documented effects of action video-game training might be due to the sustained deployment of attention to multiple dynamic targets while concurrently requiring rapid attending and perception of unpredictable events.
NASA Astrophysics Data System (ADS)
Rieder, Christian; Schwier, Michael; Weihusen, Andreas; Zidowitz, Stephan; Peitgen, Heinz-Otto
2009-02-01
Image-guided radiofrequency ablation (RFA) is becoming a standard minimally invasive procedure for tumor treatment in the clinical routine. The visualization of pathological tissue and potential risk structures such as vessels or important organs gives essential support in image-guided pre-interventional RFA planning. In this work our aim is to present novel visualization techniques for interactive RFA planning that support the physician with spatial information about pathological structures as well as with finding trajectories that do not harm vitally important tissue. Furthermore, we illustrate three-dimensional applicator models from different manufacturers combined with the corresponding ablation areas in homogeneous tissue, as specified by the manufacturers, to improve estimation of the extent of cell destruction caused by ablation. The visualization techniques are embedded in a workflow-oriented application designed for use in the clinical routine. To allow high-quality volume rendering we integrated a visualization method using the fuzzy c-means algorithm. This method automatically defines a transfer function for volume visualization of vessels without the need for a segmentation mask. However, insufficient visualization of the displayed vessels caused by low data quality can be improved using local vessel segmentation in the vicinity of the lesion. We also provide an interactive segmentation technique for liver tumors, for volumetric measurement and for the visualization of pathological tissue combined with anatomical structures. In order to support coagulation estimation with respect to the heat-sink effect of cooling blood flow, which decreases thermal ablation, a numerical simulation of the heat distribution is provided.
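The use of fuzzy c-means to derive a vessel transfer function can be illustrated with a toy one-dimensional example, shown below; the cluster count, fuzziness exponent, and synthetic intensity data are assumptions and do not reproduce the paper's method.

```python
import numpy as np

# Toy use of fuzzy c-means on image intensities to build an opacity transfer
# function for volume rendering: the membership of each intensity in the
# brightest cluster (assumed here to be contrast-enhanced vessels) is used
# directly as opacity. Cluster count, fuzziness, and data are illustrative.

rng = np.random.default_rng(5)
intensities = np.concatenate([rng.normal(60, 10, 5000),    # soft tissue (stand-in)
                              rng.normal(180, 15, 500)])   # vessels (stand-in)

def fuzzy_cmeans_1d(x, k=2, m=2.0, iters=50):
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)                   # fuzzy memberships
        centers = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
    return centers, u

centers, u = fuzzy_cmeans_1d(intensities)
vessel_cluster = int(np.argmax(centers))
opacity = u[:, vessel_cluster]                              # per-voxel opacity in [0, 1]
print(np.round(centers, 1), round(float(opacity.max()), 3))
```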
Beyond the cockpit: The visual world as a flight instrument
NASA Technical Reports Server (NTRS)
Johnson, W. W.; Kaiser, M. K.; Foyle, D. C.
1992-01-01
The use of cockpit instruments to guide flight control is not always an option (e.g., low level rotorcraft flight). Under such circumstances the pilot must use out-the-window information for control and navigation. Thus it is important to determine the basis of visually guided flight for several reasons: (1) to guide the design and construction of the visual displays used in training simulators; (2) to allow modeling of visibility restrictions brought about by weather, cockpit constraints, or distortions introduced by sensor systems; and (3) to aid in the development of displays that augment the cockpit window scene and are compatible with the pilot's visual extraction of information from the visual scene. The authors are actively pursuing these questions. We have on-going studies using both low-cost, lower fidelity flight simulators, and state-of-the-art helicopter simulation research facilities. Research results will be presented on: (1) the important visual scene information used in altitude and speed control; (2) the utility of monocular, stereo, and hyperstereo cues for the control of flight; (3) perceptual effects due to the differences between normal unaided daylight vision, and that made available by various night vision devices (e.g., light intensifying goggles and infra-red sensor displays); and (4) the utility of advanced contact displays in which instrument information is made part of the visual scene, as on a 'scene linked' head-up display (e.g., displaying altimeter information on a virtual billboard located on the ground).
Eye movements reveal epistemic curiosity in human observers.
Baranes, Adrien; Oudeyer, Pierre-Yves; Gottlieb, Jacqueline
2015-12-01
Saccadic (rapid) eye movements are a primary means by which humans and non-human primates sample visual information. However, while saccadic decisions are intensively investigated in instrumental contexts where saccades guide subsequent actions, it is largely unknown how they may be influenced by curiosity - the intrinsic desire to learn. While saccades are sensitive to visual novelty and visual surprise, no study has examined their relation to epistemic curiosity - interest in symbolic, semantic information. To investigate this question, we tracked the eye movements of human observers while they read trivia questions and, after a brief delay, were visually given the answer. We show that higher curiosity was associated with earlier anticipatory orienting of gaze toward the answer location without changes in other metrics of saccades or fixations, and that these influences were distinct from those produced by variations in confidence and surprise. Across subjects, the enhancement of anticipatory gaze was correlated with measures of trait curiosity from personality questionnaires. Finally, a machine learning algorithm could predict curiosity in a cross-subject manner, relying primarily on statistical features of the gaze position before the answer onset and independently of covariations in confidence or surprise, suggesting potential practical applications for educational technologies, recommender systems and research in cognitive sciences. With this article, we provide full access to the annotated database allowing readers to reproduce the results. Epistemic curiosity produces specific effects on oculomotor anticipation that can be used to read out curiosity states. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
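A simple way to picture the gaze-based readout is to summarize each trial's pre-answer gaze trace with a few anticipation features, as in the sketch below; the features, region definition, and synthetic data are illustrative assumptions, not the study's published model.

```python
import numpy as np

# Sketch of reading out curiosity from pre-answer gaze: summarize each trial by
# (a) the fraction of the anticipation window spent inside the answer region and
# (b) the latency of the first entry into it, then compare these features between
# high- and low-curiosity trials. Labels and data are synthetic.

rng = np.random.default_rng(6)
n_trials, n_samples = 100, 300                 # 300 gaze samples per anticipation window
gaze_x = rng.normal(0, 1, (n_trials, n_samples))
curiosity = rng.integers(0, 2, n_trials)       # 1 = high curiosity (synthetic label)
gaze_x[curiosity == 1, 150:] += 2.0            # earlier drift toward answer location (x > 1.5)

in_region = gaze_x > 1.5                       # gaze inside the answer region
frac_in_region = in_region.mean(axis=1)
first_entry = np.where(in_region.any(axis=1),
                       in_region.argmax(axis=1), n_samples)   # sample index of first entry

print("fraction in answer region, high vs low curiosity:",
      frac_in_region[curiosity == 1].mean(), frac_in_region[curiosity == 0].mean())
print("median first-entry latency (samples), high vs low:",
      np.median(first_entry[curiosity == 1]), np.median(first_entry[curiosity == 0]))
```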
Grouping and Segregation of Sensory Events by Actions in Temporal Audio-Visual Recalibration
Ikumi, Nara; Soto-Faraco, Salvador
2017-01-01
Perception in multi-sensory environments involves both grouping and segregation of events across sensory modalities. Temporal coincidence between events is considered a strong cue to resolve multisensory perception. However, differences in physical transmission and neural processing times amongst modalities complicate this picture. This is illustrated by cross-modal recalibration, whereby adaptation to audio-visual asynchrony produces shifts in perceived simultaneity. Here, we examined whether voluntary actions might serve as a temporal anchor to cross-modal recalibration in time. Participants were tested on an audio-visual simultaneity judgment task after an adaptation phase where they had to synchronize voluntary actions with audio-visual pairs presented at a fixed asynchrony (vision leading or vision lagging). Our analysis focused on the magnitude of cross-modal recalibration to the adapted audio-visual asynchrony as a function of the nature of the actions during adaptation, putatively fostering cross-modal grouping or segregation. We found larger temporal adjustments when actions promoted grouping of sensory events than when they promoted segregation. However, a control experiment suggested that additional factors, such as attention to planning/execution of actions, could have an impact on recalibration effects. Contrary to the view that cross-modal temporal organization is mainly driven by external factors related to the stimulus or environment, our findings add supporting evidence for the idea that perceptual adjustments strongly depend on the observer's inner states induced by motor and cognitive demands. PMID:28154529
Kopiske, Karl K; Bruno, Nicola; Hesse, Constanze; Schenk, Thomas; Franz, Volker H
2016-06-01
It has often been suggested that visual illusions affect perception but not actions such as grasping, as predicted by the "two-visual-systems" hypothesis of Milner and Goodale (1995, The Visual Brain in Action, Oxford University Press). However, at least for the Ebbinghaus illusion, relevant studies seem to reveal a consistent illusion effect on grasping (Franz & Gegenfurtner, 2008. Grasping visual illusions: consistent data and no dissociation. Cognitive Neuropsychology). Two interpretations are possible: either grasping is not immune to illusions (arguing against dissociable processing mechanisms for vision-for-perception and vision-for-action), or some other factors modulate grasping in ways that mimic a vision-for-perception effect in actions. It has been suggested that one such factor may be obstacle avoidance (Haffenden, Schiff, & Goodale, 2001. The dissociation between perception and action in the Ebbinghaus illusion: nonillusory effects of pictorial cues on grasp. Current Biology, 11, 177-181). In four different labs (total N = 144), we conducted an exact replication of previous studies suggesting obstacle avoidance mechanisms, implementing conditions that tested grasping as well as multiple perceptual tasks. This replication was supplemented by additional conditions to obtain more conclusive results. Our results confirm that grasping is affected by the Ebbinghaus illusion and demonstrate that this effect cannot be explained by obstacle avoidance. Copyright © 2016 Elsevier Ltd. All rights reserved.
Simulators for training in ultrasound guided procedures.
Farjad Sultan, Syed; Shorten, George; Iohom, Gabrielle
2013-06-01
The four major categories of skill sets associated with proficiency in ultrasound guided regional anaesthesia are 1) understanding device operations, 2) image optimization, 3) image interpretation and 4) visualization of needle insertion and injection of the local anesthetic solution. Of these, visualization of needle insertion and injection of local anaesthetic solution can be practiced using simulators and phantoms. This survey of existing simulators summarizes advantages and disadvantages of each. Current deficits pertain to the validation process.
Wavefront-Guided Scleral Lens Prosthetic Device for Keratoconus
Sabesan, Ramkumar; Johns, Lynette; Tomashevskaya, Olga; Jacobs, Deborah S.; Rosenthal, Perry; Yoon, Geunyoung
2016-01-01
Purpose: To investigate the feasibility of correcting ocular higher order aberrations (HOA) in keratoconus (KC) using wavefront-guided optics in a scleral lens prosthetic device (SLPD). Methods: Six advanced keratoconus patients (11 eyes) were fitted with an SLPD with conventional spherical optics. A custom-made Shack-Hartmann wavefront sensor was used to measure aberrations through a dilated pupil while wearing the SLPD. The position of the SLPD, i.e., horizontal and vertical decentration relative to the pupil and rotation, was measured and incorporated into the design of the wavefront-guided optics for the customized SLPD. A submicron-precision lathe created the designed irregular profile on the front surface of the device. The residual aberrations of the same eyes wearing the SLPD with wavefront-guided optics were subsequently measured. Visual performance with a natural mesopic pupil was compared between SLPDs having conventional spherical and wavefront-guided optics by measuring best-corrected high-contrast visual acuity and contrast sensitivity. Results: Root-mean-square (RMS) of HOA in the 11 eyes wearing the conventional SLPD with spherical optics was 1.17±0.57 μm for a 6 mm pupil. HOA were effectively corrected by the customized SLPD with wavefront-guided optics, and RMS was reduced 3.1 times on average to 0.37±0.19 μm for the same pupil. This correction resulted in a significant improvement of 1.9 lines in mean visual acuity (p<0.05). Contrast sensitivity was also significantly improved, by a factor of 2.4, 1.8 and 1.4 on average for 4, 8 and 12 cycles/degree, respectively (p<0.05 for all frequencies). Although the residual aberration was comparable to that of normal eyes, the average visual acuity in logMAR with the customized SLPD was 0.21, substantially worse than normal acuity. Conclusions: The customized SLPD with wavefront-guided optics corrected the HOA of advanced KC patients to normal levels and improved their vision significantly. PMID:23478630
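The reported RMS values follow directly from the root-sum-of-squares of the higher-order Zernike coefficients; the brief sketch below uses illustrative coefficient sets chosen only to mirror the approximately threefold reduction described above, not the study's measured data.

```python
import numpy as np

# Wavefront error RMS from Zernike coefficients: with coefficients expressed in
# a normalized (Noll/ANSI) convention, the RMS over the included higher-order
# modes is the root-sum-of-squares of those coefficients. Values are illustrative.

def rms(coeffs_um):
    return float(np.sqrt(np.sum(np.square(coeffs_um))))

hoa_conventional = np.array([0.80, 0.62, 0.45, 0.32, 0.20])  # µm, spherical optics
hoa_customized   = np.array([0.25, 0.20, 0.15, 0.10, 0.05])  # µm, wavefront-guided

print("RMS conventional: %.2f µm" % rms(hoa_conventional))   # ~1.17 µm
print("RMS customized:   %.2f µm" % rms(hoa_customized))     # ~0.37 µm
```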
Real-world visual search is dominated by top-down guidance.
Chen, Xin; Zelinsky, Gregory J
2006-11-01
How do bottom-up and top-down guidance signals combine to guide search behavior? Observers searched for a target either with or without a preview (top-down manipulation) or a color singleton (bottom-up manipulation) among the display objects. With a preview, reaction times were faster and more initial eye movements were guided to the target; the singleton failed to attract initial saccades under these conditions. Only in the absence of a preview did subjects preferentially fixate the color singleton. We conclude that the search for realistic objects is guided primarily by top-down control. Implications for saliency map models of visual search are discussed.
Shankar, S; Ellard, C
2000-02-01
Past research has indicated that many species use the time-to-collision variable, but little is known about its neural underpinnings in rodents. In a set of three experiments we set out to replicate and extend the findings of Sun et al. (Sun H-J, Carey DP, Goodale MA. Exp Brain Res 1992;91:171-175) in a visually guided task in Mongolian gerbils, and then investigated the effects of lesions to different cortical areas. We trained Mongolian gerbils to run in the dark toward a target on a computer screen. In some trials the target changed in size as the animal ran toward it in such a way as to produce 'virtual targets' if the animals were using time-to-collision or contact information. In experiment 1 we confirmed that gerbils use time-to-contact information to modulate their speed of running toward a target. In experiment 2 we established that visual cortex lesions attenuate the ability of lesioned animals to use information from the visual target to guide their run, while animals with frontal cortex lesions are not as severely affected. In experiment 3 we found that small radio-frequency lesions of either area V1 or the lateral extrastriate regions of the visual cortex also affected the use of information from the target to modulate locomotion.
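The time-to-collision variable referred to here is classically estimated from the optical expansion of the target, tau ≈ θ / (dθ/dt); the sketch below shows how changing the on-screen target size during the approach (the 'virtual target' manipulation) perturbs that optical estimate. All numbers are illustrative.

```python
import numpy as np

# Optical time-to-collision (tau): for a target of physical size S at distance D
# approached at speed v, the visual angle is theta = 2*arctan(S / (2*D)) and
# tau ~= theta / (d(theta)/dt). Shrinking the on-screen target during the run
# (a "virtual target") shifts this optical estimate even though the physical
# distance is unchanged.

def visual_angle(size_m, dist_m):
    return 2.0 * np.arctan(size_m / (2.0 * dist_m))

v, dt = 1.0, 0.01                               # approach speed (m/s), time step (s)
dist = np.arange(2.0, 0.5, -v * dt)             # distances along the run (m)

for label, size in [("constant target", np.full_like(dist, 0.10)),
                    ("shrinking target", np.linspace(0.10, 0.05, dist.size))]:
    theta = visual_angle(size, dist)
    dtheta = np.gradient(theta, dt)             # rate of optical expansion
    tau = theta / dtheta                        # optical time-to-collision estimate
    true_ttc = dist / v
    i = int(np.argmin(np.abs(dist - 1.0)))      # sample nearest to 1 m from the target
    print("%s: optical tau at 1 m = %.2f s (true TTC %.2f s)" % (label, tau[i], true_ttc[i]))
```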
Mechanical Drawing/Drafting Curriculum Guide.
ERIC Educational Resources Information Center
Gregory, Margaret R.; Benson, Robert T.
This curriculum guide consists of materials for teaching a course in mechanical drawing and drafting. Addressed in the individual units of the guide are the following topics: the nature and scope of drawing and drafting, visualization and spatial relationships, drafting tools and materials, linework, freehand lettering, geometric construction,…
Aviation & Space Education: A Teacher's Resource Guide.
ERIC Educational Resources Information Center
Texas State Dept. of Aviation, Austin.
This resource guide contains information on curriculum guides, resources for teachers, computer software and computer related programs, audio/visual presentations, model aircraft and demonstration aids, training seminars and career education, and an aerospace bibliography for primary grades. Each entry includes all or some of the following items:…
75 FR 3760 - Draft Regulatory Guide: Issuance, Availability
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-22
... NUCLEAR REGULATORY COMMISSION [NRC-2010-0018] Draft Regulatory Guide: Issuance, Availability AGENCY: Nuclear Regulatory Commission. ACTION: Notice of Issuance and Availability of Draft Regulatory...) is issuing for public comment a draft guide in the agency's ``Regulatory Guide'' series. This series...
75 FR 20645 - Draft Regulatory Guide: Issuance, Availability
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-20
... NUCLEAR REGULATORY COMMISSION [NRC-2010-0158] Draft Regulatory Guide: Issuance, Availability AGENCY: Nuclear Regulatory Commission. ACTION: Notice of Issuance and Availability of Draft Regulatory... draft guide in the agency's ``Regulatory Guide'' series. This series was developed to describe and make...
76 FR 189 - Final Regulatory Guide: Issuance, Availability
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-03
.../reading-rm/doc-collections/ . In addition, regulatory guides are available for inspection at the NRC's... NUCLEAR REGULATORY COMMISSION [NRC-2010-0265] Final Regulatory Guide: Issuance, Availability AGENCY: Nuclear Regulatory Commission. ACTION: Notice of Issuance and Availability of Regulatory Guide 3...
De Vito, David; Fenske, Mark J
2017-05-01
Potentially distracting or otherwise-inappropriate stimuli, thoughts, or actions often must be inhibited to prevent interference with goal-directed behaviour. Growing evidence suggests that the impact of inhibition is not limited to reduced neurocognitive processing, but also includes negative affective consequences for any associated stimuli. The link between inhibition and aversive response has primarily been studied using tasks involving attentional- or response-related inhibition of external sensory stimuli. Here we show that affective devaluation also occurs when inhibition is applied to fully-encoded stimulus representations in memory. We first replicated prior findings of increased forgetting of words whose memories were suppressed in a Think/No-think procedure (Experiment 1). Incorporating a stimulus-evaluation task within this procedure revealed that suppressing memories of words (Experiment 2) and visual objects (Experiment 3) also results in their affective devaluation. Given the critical role of memory for guiding thoughts and actions, these results suggest that the affective consequences of inhibition may occur across a far broader range of situations than previously understood. Copyright © 2017 Elsevier B.V. All rights reserved.
The experience of time in habitual teenage marijuana smokers.
Dörr, Anneliese; Espinoza, Adriana; Acevedo, Jorge
2014-01-01
This qualitative study examines the experience of time in young people who smoke marijuana heavily, given the high rate of smoking during adolescence, a delicate stage for planning the future. Our objective is to see how the relationship between the past and future plans is manifested in their biographies, through goals and actions, in light of their ability to anticipate themselves. Our guiding principle is the capacity to "anticipate oneself" proposed by Sutter, a phenomenological psychiatrist. The information was obtained from the analysis of autobiographies of young persons using the hermeneutical phenomenological method developed by Lindseth, based on Ricoeur. The results reveal that in the biographies the past temporal dimension is characterized by sparse descriptions; the present is where the participants dwell most, describing their tastes and how they see themselves, yet showing little clarity about their interests; and the future is marked by an absence of reference points, giving the impression of no progression from the past and of little awareness that future possibilities, or the lack thereof, depend heavily on present actions.
Direct Evidence for the Economy of Action: Glucose and the Perception of Geographical Slant
Schnall, Simone; Zadra, Jonathan R.; Proffitt, Dennis R.
2012-01-01
When locomoting in a physically challenging environment, the body draws upon available energy reserves to accommodate increased metabolic demand. Ingested glucose supplements the body's energy resources, whereas non-caloric sweetener does not. Two experiments demonstrate that participants who had consumed a glucose-containing drink perceived a hill's slant to be less steep than did participants who had consumed a drink containing non-caloric sweetener. The glucose manipulation influenced participants' explicit awareness of hill slant but, as predicted, it did not affect a visually guided action of orienting a tilting palmboard to be parallel to the hill. Measured individual differences in factors related to bioenergetic state such as fatigue, sleep quality, fitness, mood, and stress also affected perception such that lower energetic states were associated with steeper perceptions of hill slant. This research shows that the perception of the environment's spatial layout is influenced by the energetic resources available for locomotion within it. Our findings are consistent with the view that spatial perceptions are influenced by bioenergetic factors. PMID:20514996
ERIC Educational Resources Information Center
Hunnius, Sabine; Bekkering, Harold
2010-01-01
This study examined the developing object knowledge of infants through their visual anticipation of action targets during action observation. Infants (6, 8, 12, 14, and 16 months) and adults watched short movies of a person using 3 different everyday objects. Participants were presented with objects being brought either to a correct or to an…
High contrast sensitivity for visually guided flight control in bumblebees.
Chakravarthi, Aravin; Kelber, Almut; Baird, Emily; Dacke, Marie
2017-12-01
Many insects rely on vision to find food, to return to their nest and to carefully control their flight between these two locations. The amount of information available to support these tasks is, in part, dictated by the spatial resolution and contrast sensitivity of their visual systems. Here, we investigate the absolute limits of these visual properties for visually guided position and speed control in Bombus terrestris. Our results indicate that the limit of spatial vision in the translational motion detection system of B. terrestris lies at 0.21 cycles deg^-1 with a peak contrast sensitivity of at least 33. In light of earlier findings, these results indicate that bumblebees have higher contrast sensitivity in the motion detection system underlying position control than in their object discrimination system. This suggests that bumblebees, and most likely also other insects, have different visual thresholds depending on the behavioral context.
Barth, Rolf F; Kellough, David A; Allenby, Patricia; Blower, Luke E; Hammond, Scott H; Allenby, Greg M; Buja, L Maximilian
Determination of the degree of stenosis of atherosclerotic coronary arteries is an important part of postmortem examination of the heart, but, unfortunately, estimates of the degree of luminal narrowing can be imprecise and tend to be approximations. Visual guides can be useful to assess this, but earlier attempts to develop such guides did not employ digital technology. Using this approach, we have developed two computer-generated morphometric guides to estimate the degree of luminal narrowing of atherosclerotic coronary arteries. The first is based on symmetric or eccentric circular or crescentic narrowing of the vessel lumen and the second on either slit-like or irregularly shaped narrowing of the vessel lumens. Using the Aperio ScanScope XT at a magnification of 20×, we created digital whole-slide images of 20 representative microscopic cross sections of the left anterior descending (LAD) coronary artery, stained with either hematoxylin and eosin (H&E) or Movat's pentachrome stain. These cross sections illustrated a variety of luminal profiles and degrees of stenosis. Three representative types of images were selected and a visual guide was constructed with Adobe Photoshop CS5. Using the "Scale" and "Measurement" tools, we created a series of representations of stenosis with luminal cross sections depicting 20%, 40%, 60%, 70%, 80%, and 90% occlusion of the LAD branch. Four pathologists independently reviewed and scored the degree of atherosclerotic luminal narrowing based on our visual guides. In addition, digital technology was employed to determine the degree of narrowing by measuring the cross-sectional area of the 20 microscopic sections of the vessels, first assuming no narrowing and then comparing this to the percent of narrowing determined by precise measurement. Two of the observers were very experienced general autopsy pathologists, one was a first-year pathology resident on his first rotation on the autopsy service, and the fourth observer was a highly experienced cardiovascular pathologist. Interobserver reliability was assessed by determination of the intraclass correlation coefficient. The degrees of agreement for the two H&E- and Movat-stained sections of the LADs from each of 10 decedents were 0.874 and 0.899, respectively, indicating strong interobserver agreement. On average, the mean visual scores were ~8% lower than the morphometric assessment (52.7 vs. 60.2). The visual guides that we have generated for scoring atherosclerotic luminal narrowing of coronary arteries should be helpful for a broad group of pathologists, from beginning pathology residents to experienced cardiovascular pathologists. Copyright © 2017 Elsevier Inc. All rights reserved.
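The underlying morphometry reduces to a simple area ratio, percent stenosis = 100 × (1 − residual lumen area / expected lumen area); the sketch below works through illustrative numbers and compares them with hypothetical visual scores, mirroring the kind of visual-versus-measured comparison described above (the values are not the study's data).

```python
import numpy as np

# Percent luminal stenosis from digital morphometry: the residual lumen area is
# compared with the expected (original) lumen area, and
# stenosis = 100 * (1 - residual / expected). All values are illustrative.

def percent_stenosis(residual_area_mm2, expected_area_mm2):
    return 100.0 * (1.0 - residual_area_mm2 / expected_area_mm2)

expected = np.array([7.1, 6.4, 8.0, 5.5])            # mm^2, expected lumen (hypothetical)
residual = np.array([2.6, 3.3, 1.7, 2.4])            # mm^2, residual lumen (hypothetical)
measured = percent_stenosis(residual, expected)

visual_scores = np.array([60.0, 40.0, 70.0, 50.0])   # a reader's visual estimates, in %
print("measured stenosis (%):", np.round(measured, 1))
print("mean visual underestimate: %.1f percentage points"
      % float(np.mean(measured - visual_scores)))
```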
Review of fluorescence guided surgery visualization and overlay techniques
Elliott, Jonathan T.; Dsouza, Alisha V.; Davis, Scott C.; Olson, Jonathan D.; Paulsen, Keith D.; Roberts, David W.; Pogue, Brian W.
2015-01-01
In fluorescence guided surgery, data visualization represents a critical step between signal capture and display needed for clinical decisions informed by that signal. The diversity of methods for displaying surgical images are reviewed, and a particular focus is placed on electronically detected and visualized signals, as required for near-infrared or low concentration tracers. Factors driving the choices such as human perception, the need for rapid decision making in a surgical environment, and biases induced by display choices are outlined. Five practical suggestions are outlined for optimal display orientation, color map, transparency/alpha function, dynamic range compression, and color perception check. PMID:26504628
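One common way to combine the factors listed above (color map, transparency, dynamic range compression) is per-pixel alpha blending of a compressed, pseudo-colored fluorescence channel over the white-light image; the sketch below is a generic illustration with arbitrary parameters, not a prescription from the review.

```python
import numpy as np

# Sketch of a fluorescence overlay: compress the dynamic range of the raw
# fluorescence signal, map it through a simple color map, and alpha-blend it
# onto the grayscale white-light image so strong signal is opaque and weak
# signal stays transparent. Color map and compression are illustrative choices.

rng = np.random.default_rng(7)
h, w = 64, 64
white_light = rng.uniform(0.2, 0.9, (h, w))             # grayscale background image
fluor = rng.gamma(shape=1.5, scale=50.0, size=(h, w))   # raw fluorescence counts (stand-in)

compressed = np.log1p(fluor) / np.log1p(fluor.max())    # dynamic range compression to [0, 1]
alpha = np.clip(compressed, 0.0, 1.0)[..., None]        # transparency follows signal strength

green = np.stack([np.zeros_like(compressed), compressed,
                  np.zeros_like(compressed)], axis=-1)  # simple green pseudo-color
background = np.repeat(white_light[..., None], 3, axis=-1)

overlay = (1.0 - alpha) * background + alpha * green    # per-pixel alpha blend
print(overlay.shape, round(float(overlay.max()), 3))
```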
NASA Technical Reports Server (NTRS)
Hess, Bernhard J M.; Angelaki, Dora E.
2003-01-01
Rotational disturbances of the head about an off-vertical yaw axis induce a complex vestibuloocular reflex pattern that reflects the brain's estimate of head angular velocity as well as its estimate of instantaneous head orientation (at a reduced scale) in space coordinates. We show that semicircular canal and otolith inputs modulate torsional and, to a certain extent, also vertical ocular orientation of visually guided saccades and smooth-pursuit eye movements in a similar manner as during off-vertical axis rotations in complete darkness. It is suggested that this graviceptive control of eye orientation facilitates rapid visual spatial orientation during motion.
A guide to the visual analysis and communication of biomolecular structural data.
Johnson, Graham T; Hertig, Samuel
2014-10-01
Biologists regularly face an increasingly difficult task - to effectively communicate bigger and more complex structural data using an ever-expanding suite of visualization tools. Whether presenting results to peers or educating an outreach audience, a scientist can achieve maximal impact with minimal production time by systematically identifying an audience's needs, planning solutions from a variety of visual communication techniques and then applying the most appropriate software tools. A guide to available resources that range from software tools to professional illustrators can help researchers to generate better figures and presentations tailored to any audience's needs, and enable artistically inclined scientists to create captivating outreach imagery.
Lossnitzer, Dirk; Seitz, Sebastian A; Krautz, Birgit; Schnackenburg, Bernhard; André, Florian; Korosoglou, Grigorios; Katus, Hugo A; Steen, Henning
2015-07-26
To investigate whether magnetic resonance (MR)-guided biopsy can improve the performance and safety of endomyocardial biopsy procedures. A novel MR-compatible bioptome was evaluated in a series of in-vitro experiments in a 1.5T magnetic resonance imaging (MRI) system. The bioptome was inserted into explanted porcine and bovine hearts under real-time MR guidance employing a steady state free precession sequence. The artifact produced by the metal element at the tip and the signal voids caused by the bioptome were visually tracked for navigation and allowed its constant and precise localization. Cardiac structural elements and the target regions for the biopsy were clearly visible. Our method allowed significantly better spatial visualization of the bioptome's tip compared to conventional X-ray guidance. The specific device design of the bioptome avoided inducible currents and therefore subsequent heating. The novel MR-compatible bioptome provided superior cardiovascular magnetic resonance soft-tissue visualization for MR-guided myocardial biopsies. Not least, the use of MRI guidance for endomyocardial biopsies completely avoided radiation exposure for both patients and interventionalists. MRI-guided endomyocardial biopsies provide better navigation than conventional X-ray guidance and could therefore improve the specificity and reproducibility of cardiac biopsies in future studies.
Gallivan, Jason P.; Johnsrude, Ingrid S.; Randall Flanagan, J.
2016-01-01
Object-manipulation tasks (e.g., drinking from a cup) typically involve sequencing together a series of distinct motor acts (e.g., reaching toward, grasping, lifting, and transporting the cup) in order to accomplish some overarching goal (e.g., quenching thirst). Although several studies in humans have investigated the neural mechanisms supporting the planning of visually guided movements directed toward objects (such as reaching or pointing), only a handful have examined how manipulatory sequences of actions—those that occur after an object has been grasped—are planned and represented in the brain. Here, using event-related functional MRI and pattern decoding methods, we investigated the neural basis of real-object manipulation using a delayed-movement task in which participants first prepared and then executed different object-directed action sequences that varied either in their complexity or final spatial goals. Consistent with previous reports of preparatory brain activity in non-human primates, we found that activity patterns in several frontoparietal areas reliably predicted entire action sequences in advance of movement. Notably, we found that similar sequence-related information could also be decoded from pre-movement signals in object- and body-selective occipitotemporal cortex (OTC). These findings suggest that both frontoparietal and occipitotemporal circuits are engaged in transforming object-related information into complex, goal-directed movements. PMID:25576538
Hernik, Mikolaj; Fearon, Pasco; Csibra, Gergely
2014-04-22
Animal actions are almost universally constrained by the bilateral body-plan. For example, the direction of travel tends to be constrained by the orientation of the animal's anteroposterior axis. Hence, an animal's behaviour can reliably guide the identification of its front and back, and its orientation can reliably guide action prediction. We examine the hypothesis that the evolutionarily ancient relation between anteroposterior body-structure and behaviour guides our cognitive processing of agents and their actions. In a series of studies, we demonstrate that, after limited exposure, human infants as young as six months of age spontaneously encode a novel agent as having a certain axial direction with respect to its actions and rely on it when anticipating the agent's further behaviour. We found that such encoding is restricted to objects exhibiting cues of agency and does not depend on generalization from features of familiar animals. Our research offers a new tool for investigating the perception of animate agency and supports the proposal that the underlying cognitive mechanisms have been shaped by basic biological adaptations in humans.
Kimura, Takeshi; Shiomi, Hiroki; Kuribayashi, Sachio; Isshiki, Takaaki; Kanazawa, Susumu; Ito, Hiroshi; Ikeda, Shunya; Forrest, Ben; Zarins, Christopher K; Hlatky, Mark A; Norgaard, Bjarne L
2015-01-01
Percutaneous coronary intervention (PCI) based on fractional flow reserve (FFRcath) measurement during invasive coronary angiography (CAG) results in improved patient outcome and reduced healthcare costs. FFR can now be computed non-invasively from standard coronary CT angiography (cCTA) scans (FFRCT). The purpose of this study is to determine the potential impact of non-invasive FFRCT on costs and clinical outcomes of patients with suspected coronary artery disease in Japan. Clinical data from 254 patients in the HeartFlowNXT trial, costs of goods and services in Japan, and clinical outcome data from the literature were used to estimate the costs and outcomes of 4 clinical pathways: (1) CAG-visual guided PCI, (2) CAG-FFRcath guided PCI, (3) cCTA followed by CAG-visual guided PCI, (4) cCTA-FFRCT guided PCI. The CAG-visual strategy demonstrated the highest projected cost ($10,360) and highest projected 1-year death/myocardial infarction rate (2.4 %). An assumed price for FFRCT of US $2,000 produced equivalent clinical outcomes (death/MI rate: 1.9 %) and healthcare costs ($7,222) for the cCTA-FFRCT strategy and the CAG-FFRcath guided PCI strategy. Use of the cCTA-FFRCT strategy to select patients for PCI would result in 32 % lower costs and 19 % fewer cardiac events at 1 year compared to the most commonly used CAG-visual strategy. Use of cCTA-FFRCT to select patients for CAG and PCI may reduce costs and improve clinical outcome in patients with suspected coronary artery disease in Japan.
Conservation in a World of Six Billion: A Grassroots Action Guide.
ERIC Educational Resources Information Center
Hren, Benedict J.
This grassroots action guide features a conservation initiative working to bring the impacts of human population growth, economic development, and natural resource consumption into balance with the limits of nature for the benefit of current and future generations. Contents include information sheets entitled "Six Billion People and Growing,""The…
Mentoring Graduate Students through the Action Research Journey Using Guiding Principles
ERIC Educational Resources Information Center
Spencer, Joi A.; Molina, Sarina Chugani
2018-01-01
Our department has adopted action research (AR) projects as the culminating task for our master's degree candidates. This article presents our work on mentoring graduate students towards the completion of their final AR research projects and details the deliberate structures put in place to guide them through the AR process. These structures…
Forum Guide to Taking Action with Education Data. NFES 2013-801
ERIC Educational Resources Information Center
National Forum on Education Statistics, 2012
2012-01-01
Education data are growing in quantity, quality, and value. When appropriately used to guide action, data can be a powerful tool for improving school operations, teaching, and learning. Education stakeholders who possess the knowledge, skills, and abilities to appropriately access, analyze, and interpret data will be able to use data to take…
Ohio Vocational Consumer/Homemaking Curriculum Guide. Practical Action.
ERIC Educational Resources Information Center
Ohio State Univ., Columbus. Instructional Materials Lab.
This curriculum guide helps students learn the technical skills of the occupation of homemaking. It also uses the process model of practical reasoning to assist men and women in taking action regarding the perennial problems that face individuals and families living in the world society. The first section provides the philosophy, aim, student…
75 FR 66769 - Draft Compliance Policy Guide Sec. 690.800 Salmonella
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-29
...] Draft Compliance Policy Guide Sec. 690.800 Salmonella in Animal Feed; Availability; Extension of Comment... that are adulterated due to the presence of Salmonella. The Agency is taking this action in response to... action against animal feed or feed ingredients that are adulterated due to the presence of Salmonella...
Pennsylvania Youth in Action: 4-H Community Development. Adult Leader's Guide.
ERIC Educational Resources Information Center
Pennsylvania State Univ., University Park. Dept. of Agricultural and Extension Education.
Designed to assist leaders in their roles as catalysts, advisors, and resource persons for the Pennsylvania Youth in Action 4-H Community Development program, the guide provides complementary educational, craft, and recreation suggestions to enhance student workbooks for three community development activity units. The first section focuses on the…
Introduction to the MCS. Visual Media Learning Guide.
ERIC Educational Resources Information Center
Spokane Falls Community Coll., WA.
This student learning guide is designed to introduce graphic arts students to the MCS (Modular Composition System) compugraphic typesetting system. Addressed in the individual units of the competency-based guide are the following tasks: programming the compugraphic typesetting system, creating a new file and editing a file, operating a…
Graphic Design Career Guide 2. Revised Edition.
ERIC Educational Resources Information Center
Craig, James
The graphic design field is diverse and includes many areas of specialization. This guide introduces students to career opportunities in graphic design. The guide is organized in four parts. "Part One: Careers in Graphic Design" identifies and discusses the various segments of the graphic design industry, including: Advertising, Audio-Visual, Book…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wybranski, Christian, E-mail: Christian.Wybranski@uk-koeln.de; Pech, Maciej; Lux, Anke
Objective: To assess the feasibility of a hybrid approach employing MRI-guided bile duct (BD) puncture for subsequent fluoroscopy-guided biliary interventions in patients with non-dilated (≤3 mm) or dilated BD (≥3 mm) but unfavorable conditions for ultrasonography (US)-guided BD puncture. Methods: A total of 23 hybrid interventions were performed in 21 patients. Visualization of BD and puncture needles (PN) in the interventional MR images was rated on a 5-point Likert scale by two radiologists. Technical success, planning time, BD puncture time and positioning adjustments of the PN, as well as technical success of the biliary intervention and complication rate, were recorded. Results: Visualization even of third-order non-dilated BD and PN was rated excellent by both radiologists with good to excellent interrater agreement. MRI-guided BD puncture was successful in all cases. Planning and BD puncture times were 1:36 ± 2:13 (0:16–11:07) min and 3:58 ± 2:35 (1:11–9:32) min. Positioning adjustments of the PN were necessary in two patients. Repeated capsular puncture was not necessary in any case. All biliary interventions were completed successfully without major complications. Conclusion: A hybrid approach which employs MRI-guided BD puncture for subsequent fluoroscopy-guided biliary intervention is feasible in clinical routine and yields high technical success in patients with non-dilated BD and/or unfavorable conditions for US-guided puncture. Excellent visualization of BD and PN in near-real-time interventional MRI allows successful cannulation of the BD.
Fadlallah, Ali; Dirani, Ali; Chelala, Elias; Antonios, Rafic; Cherfan, George; Jarade, Elias
2014-10-01
To evaluate the safety and clinical outcome of combined non-topography-guided photorefractive keratectomy (PRK) and corneal collagen cross-linking (CXL) for the treatment of mild refractive errors in patients with early stage keratoconus. A retrospective, nonrandomized study of patients with early stage keratoconus (stage 1 or 2) who underwent simultaneous non-topography-guided PRK and CXL. All patients had at least 2 years of follow-up. Data were collected preoperatively and postoperatively at the 6-month, 1-year, and 2-year follow-up visit after combined non-topography-guided PRK and CXL. Seventy-nine patients (140 eyes) were included in the study. Combined non-topography-guided PRK and CXL induced a significant improvement in both visual acuity and refraction. Uncorrected distance visual acuity significantly improved from 0.39 ± 0.22 logMAR before combined non-topography-guided PRK and CXL to 0.12 ± 0.14 logMAR at the last follow-up visit (P <.001) and corrected distance visual acuity remained stable (0.035 ± 0.062 logMAR preoperatively vs 0.036 ± 0.058 logMAR postoperatively, P =.79). The mean spherical equivalent decreased from -1.78 ± 1.43 to -0.42 ± 0.60 diopters (D) (P <.001), and the mean cylinder decreased from 1.47 ± 1.10 to 0.83 ± 0.55 D (P <.001). At the last follow-up visit, mean flat keratometry was 43.30 ± 1.75 D vs 45.62 ± 1.72 D preoperatively (P = .03) and mean steep keratometry was 44.39 ± 3.14 D vs 46.53 ± 2.13 D preoperatively (P = .02). Mean central corneal thickness decreased from 501.74 ± 13.11 to 475.93 ± 12.25 µm following combined non-topography-guided PRK and CXL (P < .001). No intraoperative complications occurred. Four eyes developed mild haze that responded well to a short course of topical steroids. No eye developed infectious keratitis. Combined non-topography-guided PRK and CXL is an effective and safe option for correcting mild refractive error and improving visual acuity in patients with early stable keratoconus. Copyright 2014, SLACK Incorporated.
Impaired visually guided weight-shifting ability in children with cerebral palsy.
Ballaz, Laurent; Robert, Maxime; Parent, Audrey; Prince, François; Lemay, Martin
2014-09-01
The ability to control voluntary weight shifting is crucial in many functional tasks. To our knowledge, weight-shifting ability in response to a visual stimulus has never been evaluated in children with cerebral palsy (CP). The aim of the study was (1) to propose a new method to assess visually guided medio-lateral (M/L) weight-shifting ability and (2) to compare weight-shifting ability in children with CP and typically developing (TD) children. Ten children with spastic diplegic CP (Gross Motor Function Classification System level I and II; age 7-12 years) and 10 TD age-matched children were tested. Participants played the skiing game on the Wii Fit game console. Center of pressure (COP) displacements, trunk and lower-limb movements were recorded during the last virtual slalom. Maximal isometric lower limb strength and postural control during quiet standing were also assessed. Lower-limb muscle strength was reduced in children with CP compared to TD children and postural control during quiet standing was impaired in children with CP. As expected, the skiing game mainly resulted in M/L COP displacements. Children with CP showed lower M/L COP range and velocity as compared to TD children but larger trunk movements. Trunk and lower extremity movements were less in phase in children with CP compared to TD children. Commercially available active video games can be used to assess visually guided weight-shifting ability. Children with spastic diplegic CP showed impaired visually guided weight shifting, which can be explained by non-optimal coordination of postural movement and reduced muscular strength. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, Ho-Hsing; Wu, Jay; Chuang, Keh-Shih; Kuo, Hsiang-Chi
2007-07-01
Intensity-modulated radiation therapy (IMRT) utilizes nonuniform beam profiles to deliver precise radiation doses to a tumor while minimizing radiation exposure to surrounding normal tissues. However, the problem of intrafraction organ motion distorts the dose distribution and leads to significant dosimetric errors. In this research, we applied an aperture adaptive technique with a visual guiding system to tackle the problem of respiratory motion. A homemade computer program showing a cyclic moving pattern was projected onto the ceiling to visually help patients adjust their respiratory patterns. Once the respiratory motion becomes regular, the leaf sequence can be synchronized with the target motion. An oscillator was employed to simulate the patient's breathing pattern. Two simple fields and one IMRT field were measured to verify the accuracy. Preliminary results showed that after appropriate training, the amplitude and duration of the volunteer's breathing could be well controlled by the visual guiding system. The sharp dose gradient at the edge of the radiation fields was successfully restored. The maximum dosimetric error in the IMRT field was significantly decreased from 63% to 3%. We conclude that the aperture adaptive technique with the visual guiding system can be an inexpensive and feasible alternative without compromising delivery efficiency in clinical practice.
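The abstract above does not specify how the leaf sequence was synchronized to the regularized breathing trace, so the following is only an illustrative sketch: it estimates the dominant breathing period of a roughly periodic respiratory signal and flags samples falling inside a chosen amplitude window during which a pre-computed aperture sequence could, in principle, be played back. All signals, rates, and thresholds are hypothetical.

```python
import numpy as np

def breathing_period(signal, fs):
    """Estimate the dominant breathing period (s) of a roughly periodic trace
    from the peak of its amplitude spectrum (DC bin excluded)."""
    x = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return 1.0 / freqs[1:][np.argmax(spectrum[1:])]

def delivery_window(signal, low=0.0, high=0.2):
    """Flag samples whose normalized amplitude lies inside [low, high],
    e.g. near end-exhale, where a synchronized leaf sequence would be valid."""
    s = (signal - signal.min()) / (signal.max() - signal.min())
    return (s >= low) & (s <= high)

# Hypothetical well-regularized breathing trace: 4-s period, 25-Hz sampling
fs = 25.0
t = np.arange(0.0, 60.0, 1.0 / fs)
resp = np.sin(2.0 * np.pi * t / 4.0) + 0.05 * np.random.default_rng(2).normal(size=t.size)
print(f"estimated period: {breathing_period(resp, fs):.2f} s")
print(f"fraction of time inside the delivery window: {delivery_window(resp).mean():.2f}")
```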
Eye movements, visual search and scene memory, in an immersive virtual environment.
Kit, Dmitry; Katz, Leor; Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary
2014-01-01
Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, however, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency.
Scherman Rydhög, Jonas; Riisgaard de Blanck, Steen; Josipovic, Mirjana; Irming Jølck, Rasmus; Larsen, Klaus Richter; Clementsen, Paul; Lars Andersen, Thomas; Poulsen, Per Rugaard; Fredberg Persson, Gitte; Munck Af Rosenschold, Per
2017-04-01
The purpose of this study was to estimate the uncertainty in voluntary deep-inspiration breath-hold (DIBH) radiotherapy for locally advanced non-small cell lung cancer (NSCLC) patients. Perpendicular fluoroscopic movies were acquired in free breathing (FB) and DIBH during a course of visually guided DIBH radiotherapy of nine patients with NSCLC. Patients had liquid markers injected in mediastinal lymph nodes and primary tumours. Excursion, systematic and random errors, and inter-breath-hold position uncertainty were investigated using an image-based tracking algorithm. A mean reduction of 2-6 mm in marker excursion in DIBH versus FB was seen in the anterior-posterior (AP), left-right (LR) and cranio-caudal (CC) directions. Lymph node motion during DIBH originated from cardiac motion. The systematic (standard deviation (SD) of all the mean marker positions) and random errors (root-mean-square of the intra-breath-hold SD) during DIBH were 0.5 and 0.3 mm (AP), 0.5 and 0.3 mm (LR), 0.8 and 0.4 mm (CC), respectively. The mean inter-breath-hold shifts were -0.3 mm (AP), -0.2 mm (LR), and -0.2 mm (CC). Intra- and inter-breath-hold uncertainty of tumours and lymph nodes were small in visually guided breath-hold radiotherapy of NSCLC. Target motion could be substantially reduced, but not eliminated, using visually guided DIBH. Copyright © 2017 Elsevier B.V. All rights reserved.
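The abstract defines the systematic error as the standard deviation of the per-breath-hold mean marker positions and the random error as the root-mean-square of the intra-breath-hold standard deviations. A minimal sketch of those two summary statistics, computed for a single axis from hypothetical marker traces (one array per breath-hold), could look like this:

```python
import numpy as np

def breath_hold_errors(traces):
    """Summary errors for one axis, given a list of 1-D arrays of marker
    positions (mm), one array per breath-hold.

    systematic error = SD of the per-breath-hold mean positions
    random error     = root-mean-square of the intra-breath-hold SDs
    """
    means = np.array([t.mean() for t in traces])
    sds = np.array([t.std(ddof=1) for t in traces])
    systematic = means.std(ddof=1)
    random_err = np.sqrt(np.mean(sds ** 2))
    return systematic, random_err

# Hypothetical example: 5 breath-holds, 100 samples each (one axis)
rng = np.random.default_rng(0)
traces = [rng.normal(loc=rng.normal(0.0, 0.8), scale=0.4, size=100) for _ in range(5)]
print(breath_hold_errors(traces))
```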
Motion-guided attention promotes adaptive communications during social navigation.
Lemasson, B H; Anderson, J J; Goodwin, R A
2013-03-07
Animals are capable of enhanced decision making through cooperation, whereby accurate decisions can occur quickly through decentralized consensus. These interactions often depend upon reliable social cues, which can result in highly coordinated activities in uncertain environments. Yet information within a crowd may be lost in translation, generating confusion and enhancing individual risk. As quantitative data detailing animal social interactions accumulate, the mechanisms enabling individuals to rapidly and accurately process competing social cues remain unresolved. Here, we model how motion-guided attention influences the exchange of visual information during social navigation. We also compare the performance of this mechanism to the hypothesis that robust social coordination requires individuals to numerically limit their attention to a set of n-nearest neighbours. While we find that such numerically limited attention does not generate robust social navigation across ecological contexts, several notable qualities arise from selective attention to motion cues. First, individuals can instantly become a local information hub when startled into action, without requiring changes in neighbour attention level. Second, individuals can circumvent speed-accuracy trade-offs by tuning their motion thresholds. In turn, these properties enable groups to collectively dampen or amplify social information. Lastly, the minority required to sway a group's short-term directional decisions can change substantially with social context. Our findings suggest that motion-guided attention is a fundamental and efficient mechanism underlying collaborative decision making during social navigation.
Liu, Baolin; Wu, Guangning; Wang, Zhongning; Ji, Xiang
2011-07-01
In the real world, some of the auditory and visual information received by the human brain is temporally asynchronous. How is such information integrated in cognitive processing in the brain? In this paper, we aimed to study the semantic integration of audio-visual information with different temporal asynchronies in cognitive processing using the ERP (event-related potential) method. Subjects were presented with videos of real world events, in which the auditory and visual information are temporally asynchronous. When the critical action was prior to the sound, sounds incongruous with the preceding critical actions elicited an N400 effect when compared to the congruous condition. This result demonstrates that semantic contextual integration indexed by N400 also applies to cognitive processing of multisensory information. In addition, the N400 effect is early in latency when contrasted with other visually induced N400 studies. It is shown that cross-modal information is facilitated in time when contrasted with visual information in isolation. When the sound was prior to the critical action, a larger late positive wave was observed under the incongruous condition compared to the congruous condition. The P600 might represent a reanalysis process, in which the mismatch between the critical action and the preceding sound was evaluated. It is shown that environmental sound may affect the cognitive processing of a visual event. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Vangeneugden, Joris; Pollick, Frank; Vogels, Rufin
2009-03-01
Neurons in the rostral superior temporal sulcus (STS) are responsive to displays of body movements. We employed a parametric action space to determine how similarities among actions are represented by visual temporal neurons and how form and motion information contributes to their responses. The stimulus space consisted of a stick-plus-point-light figure performing arm actions and their blends. Multidimensional scaling showed that the responses of temporal neurons represented the ordinal similarity between these actions. Further tests distinguished neurons responding equally strongly to static presentations and to actions ("snapshot" neurons), from those responding much less strongly to static presentations, but responding well when motion was present ("motion" neurons). The "motion" neurons were predominantly found in the upper bank/fundus of the STS, and "snapshot" neurons in the lower bank of the STS and inferior temporal convexity. Most "motion" neurons showed strong response modulation during the course of an action, thus responding to action kinematics. "Motion" neurons displayed a greater average selectivity for these simple arm actions than did "snapshot" neurons. We suggest that the "motion" neurons code for visual kinematics, whereas the "snapshot" neurons code for form/posture, and that both can contribute to action recognition, in agreement with computational models of action recognition.
NASA Technical Reports Server (NTRS)
Franke, John M.; Rhodes, David B.; Jones, Stephen B.; Dismond, Harriet R.
1992-01-01
A technique for synchronizing a pulse light source to charge coupled device cameras is presented. The technique permits the use of pulse light sources for continuous as well as stop action flow visualization. The technique has eliminated the need to provide separate lighting systems at facilities requiring continuous and stop action viewing or photography.
76 FR 6085 - Draft Regulatory Guide: Issuance, Availability
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-03
...-2011-0014] RIN 3150-AI49 Draft Regulatory Guide: Issuance, Availability AGENCY: Nuclear Regulatory Commission. ACTION: Notice Availability of Draft Regulatory Guide. SUMMARY: The U.S. Nuclear Regulatory Commission (Commission or NRC) is issuing for public comment Draft Regulatory Guide, DG-5019, ``Reporting and...
Chow, John W; Stokic, Dobrivoje S
2018-03-01
We examined changes in variability, accuracy, frequency composition, and temporal regularity of force signal from vision-guided to memory-guided force-matching tasks in 17 subacute stroke and 17 age-matched healthy subjects. Subjects performed a unilateral isometric knee extension at 10, 30, and 50% of peak torque [maximum voluntary contraction (MVC)] for 10 s (3 trials each). Visual feedback was removed at the 5-s mark in the first two trials (feedback withdrawal), and 30 s after the second trial the subjects were asked to produce the target force without visual feedback (force recall). The coefficient of variation and constant error were used to quantify force variability and accuracy. Force structure was assessed by the median frequency, relative spectral power in the 0-3-Hz band, and sample entropy of the force signal. At 10% MVC, the force signal in subacute stroke subjects became steadier, more broadband, and temporally more irregular after the withdrawal of visual feedback, with progressively larger error at higher contraction levels. Also, the lack of modulation in the spectral frequency at higher force levels with visual feedback persisted in both the withdrawal and recall conditions. In terms of changes from the visual feedback condition, the feedback withdrawal produced a greater difference between the paretic, nonparetic, and control legs than the force recall. The overall results suggest improvements in force variability and structure from vision- to memory-guided force control in subacute stroke despite decreased accuracy. Different sensory-motor memory retrieval mechanisms seem to be involved in the feedback withdrawal and force recall conditions, which deserves further study. NEW & NOTEWORTHY We demonstrate that in the subacute phase of stroke, force signals during a low-level isometric knee extension become steadier, more broadband in spectral power, and more complex after removal of visual feedback. Larger force errors are produced when recalling target forces than immediately after withdrawing visual feedback. Although visual feedback offers better accuracy, it worsens force variability and structure in subacute stroke. The feedback withdrawal and force recall conditions seem to involve different memory retrieval mechanisms.
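For readers unfamiliar with the force-signal measures named above, the sketch below shows common definitions of the coefficient of variation, constant error, relative spectral power in the 0-3-Hz band, and median frequency for a sampled force trace (sample entropy is omitted for brevity). The data, sampling rate, and spectral settings are hypothetical; the study's exact pre-processing is not given in the abstract.

```python
import numpy as np
from scipy.signal import welch

def force_metrics(force, target, fs):
    """Common steadiness, accuracy, and structure metrics for an isometric force trace.
    force: 1-D array of force samples (N); target: target force (N); fs: sampling rate (Hz)."""
    cv = force.std(ddof=1) / force.mean() * 100.0        # coefficient of variation (%)
    constant_error = force.mean() - target               # signed accuracy error (N)
    f, pxx = welch(force - force.mean(), fs=fs, nperseg=min(force.size, 512))
    rel_power_0_3 = pxx[f <= 3.0].sum() / pxx.sum()      # relative power in the 0-3-Hz band
    cumulative = np.cumsum(pxx) / pxx.sum()
    median_freq = f[np.searchsorted(cumulative, 0.5)]    # frequency splitting the power in half
    return cv, constant_error, rel_power_0_3, median_freq

# Hypothetical 10-s trial sampled at 100 Hz around a 50-N target
fs, target = 100, 50.0
t = np.arange(0.0, 10.0, 1.0 / fs)
force = target + 0.8 * np.sin(2.0 * np.pi * 1.2 * t) \
        + np.random.default_rng(1).normal(0.0, 0.5, t.size)
print(force_metrics(force, target, fs))
```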
Hirshfield, Kim M.; Tolkunov, Denis; Zhong, Hua; Ali, Siraj M.; Stein, Mark N.; Murphy, Susan; Vig, Hetal; Vazquez, Alexei; Glod, John; Moss, Rebecca A.; Belyi, Vladimir; Chan, Chang S.; Chen, Suzie; Goodell, Lauri; Foran, David; Yelensky, Roman; Palma, Norma A.; Sun, James X.; Miller, Vincent A.; Stephens, Philip J.; Ross, Jeffrey S.; Kaufman, Howard; Poplin, Elizabeth; Mehnert, Janice; Tan, Antoinette R.; Bertino, Joseph R.; Aisner, Joseph; DiPaola, Robert S.
2016-01-01
Background. The frequency with which targeted tumor sequencing results will lead to implemented change in care is unclear. Prospective assessment of the feasibility and limitations of using genomic sequencing is critically important. Methods. A prospective clinical study was conducted on 100 patients with diverse-histology, rare, or poor-prognosis cancers to evaluate the clinical actionability of a Clinical Laboratory Improvement Amendments (CLIA)-certified, comprehensive genomic profiling assay (FoundationOne), using formalin-fixed, paraffin-embedded tumors. The primary objectives were to assess utility, feasibility, and limitations of genomic sequencing for genomically guided therapy or other clinical purpose in the setting of a multidisciplinary molecular tumor board. Results. Of the tumors from the 92 patients with sufficient tissue, 88 (96%) had at least one genomic alteration (average 3.6, range 0–10). Commonly altered pathways included p53 (46%), RAS/RAF/MAPK (rat sarcoma; rapidly accelerated fibrosarcoma; mitogen-activated protein kinase) (45%), receptor tyrosine kinases/ligand (44%), PI3K/AKT/mTOR (phosphatidylinositol-4,5-bisphosphate 3-kinase; protein kinase B; mammalian target of rapamycin) (35%), transcription factors/regulators (31%), and cell cycle regulators (30%). Many low frequency but potentially actionable alterations were identified in diverse histologies. Use of comprehensive profiling led to implementable clinical action in 35% of tumors with genomic alterations, including genomically guided therapy, diagnostic modification, and trigger for germline genetic testing. Conclusion. Use of targeted next-generation sequencing in the setting of an institutional molecular tumor board led to implementable clinical action in more than one third of patients with rare and poor-prognosis cancers. Major barriers to implementation of genomically guided therapy were clinical status of the patient and drug access. Early and serial sequencing in the clinical course and expanded access to genomically guided early-phase clinical trials and targeted agents may increase actionability. Implications for Practice: Identification of key factors that facilitate use of genomic tumor testing results and implementation of genomically guided therapy may lead to enhanced benefit for patients with rare or difficult to treat cancers. Clinical use of a targeted next-generation sequencing assay in the setting of an institutional molecular tumor board led to implementable clinical action in over one third of patients with rare and poor prognosis cancers. The major barriers to implementation of genomically guided therapy were clinical status of the patient and drug access both on trial and off label. Approaches to increase actionability include early and serial sequencing in the clinical course and expanded access to genomically guided early phase clinical trials and targeted agents. PMID:27566247
Perceptual training yields rapid improvements in visually impaired youth
Nyquist, Jeffrey B.; Lappin, Joseph S.; Zhang, Ruyuan; Tadin, Duje
2016-01-01
Visual function demands coordinated responses to information over a wide field of view, involving both central and peripheral vision. Visually impaired individuals often seem to underutilize peripheral vision, even in the absence of obvious peripheral deficits. Motivated by perceptual training studies with typically sighted adults, we examined the effectiveness of perceptual training in improving peripheral perception of visually impaired youth. Here, we evaluated the effectiveness of three training regimens: (1) an action video game, (2) a psychophysical task that combined attentional tracking with a spatially and temporally unpredictable motion discrimination task, and (3) a control video game. Training with both the action video game and modified attentional tracking yielded improvements in visual performance. Training effects were generally larger in the far periphery and appear to be stable 12 months after training. These results indicate that peripheral perception might be under-utilized by visually impaired youth and that this underutilization can be reduced with only ~8 hours of perceptual training. Moreover, the similarity of improvements following attentional tracking and action video-game training suggests that well-documented effects of action video-game training might be due to the sustained deployment of attention to multiple dynamic targets while concurrently requiring rapid attending and perception of unpredictable events. PMID:27901026
Hilbert, Sebastian; Sommer, Philipp; Gutberlet, Matthias; Gaspar, Thomas; Foldyna, Borek; Piorkowski, Christopher; Weiss, Steffen; Lloyd, Thomas; Schnackenburg, Bernhard; Krueger, Sascha; Fleiter, Christian; Paetsch, Ingo; Jahnke, Cosima; Hindricks, Gerhard; Grothoff, Matthias
2016-04-01
Recently, cardiac magnetic resonance (CMR) imaging has been found feasible for the visualization of the underlying substrate for cardiac arrhythmias as well as for the visualization of cardiac catheters for diagnostic and ablation procedures. Real-time CMR-guided cavotricuspid isthmus ablation was performed in a series of six patients using a combination of active catheter tracking and catheter visualization using real-time MR imaging. Cardiac magnetic resonance utilizing a 1.5 T system was performed in patients under deep propofol sedation. A three-dimensional whole-heart sequence with navigator technique and a fast automated segmentation algorithm was used for online segmentation of all cardiac chambers, which were thereafter displayed on a dedicated image guidance platform. In three out of six patients, complete isthmus block could be achieved in the MR scanner; two of these patients did not need any additional fluoroscopy. In the first patient, technical issues required completion of the procedure in a conventional laboratory; in another two patients, the isthmus was partially blocked by magnetic resonance imaging (MRI)-guided ablation. The mean procedural time for the MR procedure was 109 ± 58 min. Intubation of the coronary sinus (CS) was performed within a mean time of 2.75 ± 2.21 min. Total fluoroscopy time for completion of the isthmus block ranged from 0 to 7.5 min. The combination of active catheter tracking and passive real-time visualization in CMR-guided electrophysiologic (EP) studies using advanced interventional hardware and software was safe and enabled efficient navigation, mapping, and ablation. These cases demonstrate significant progress in the development of MR-guided EP procedures. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.
Action Intentions Modulate Allocation of Visual Attention: Electrophysiological Evidence
Wykowska, Agnieszka; Schubö, Anna
2012-01-01
In line with the Theory of Event Coding (Hommel et al., 2001), action planning has been shown to affect perceptual processing – an effect that has been attributed to a so-called intentional weighting mechanism (Wykowska et al., 2009; Hommel, 2010). This paper investigates the electrophysiological correlates of action-related modulations of selection mechanisms in visual perception. A paradigm combining a visual search task for size and luminance targets with a movement task (grasping or pointing) was introduced, and the EEG was recorded while participants were performing the tasks. The results showed that the behavioral congruency effects, i.e., better performance in congruent (relative to incongruent) action-perception trials, were reflected in a modulation of the P1 component as well as the N2pc (an ERP marker of spatial attention). These results support the argument that action planning modulates early perceptual processing and attention mechanisms. PMID:23060841
Sleep Disturbances among Persons Who Are Visually Impaired: Survey of Dog Guide Users.
ERIC Educational Resources Information Center
Fouladi, Massoud K.; Moseley, Merrick J.; Jones, Helen S.; Tobin, Michael J.
1998-01-01
A survey completed by 1237 adults with severe visual impairments found that 20% described the quality of their sleep as poor or very poor. Exercise was associated with better sleep and depression with poorer sleep. However, visual acuity did not predict sleep quality, casting doubt on the idea that restricted visual input (light) causes sleep…
Visual Literacy for Libraries: A Practical, Standards-Based Guide
ERIC Educational Resources Information Center
Brown, Nicole E.; Bussert, Kaila; Hattwig, Denise; Medaille, Ann
2016-01-01
The importance of images and visual media in today's culture is changing what it means to be literate in the 21st century. Digital technologies have made it possible for almost anyone to create and share visual media. Yet the pervasiveness of images and visual media does not necessarily mean that individuals are able to critically view, use, and…
Prototyping Visual Learning Analytics Guided by an Educational Theory Informed Goal
ERIC Educational Resources Information Center
Hillaire, Garron; Rappolt-Schlichtmann, Gabrielle; Ducharme, Kim
2016-01-01
Prototype work can support the creation of data visualizations throughout the research and development process through paper prototypes with sketching, designed prototypes with graphic design tools, and functional prototypes to explore how the implementation will work. One challenging aspect of data visualization work is coordinating the expertise…
Are Spatial Visualization Abilities Relevant to Virtual Reality?
ERIC Educational Resources Information Center
Chen, Chwen Jen
2006-01-01
This study aims to investigate the effects of virtual reality (VR)-based learning environment on learners of different spatial visualization abilities. The findings of the aptitude-by-treatment interaction study have shown that learners benefit most from the Guided VR mode, irrespective of their spatial visualization abilities. This indicates that…
Exploring Visual Arts and Crafts Careers. A Student Guidebook.
ERIC Educational Resources Information Center
Dubman, Shelia; And Others
One of six student guidebooks in a series of 11 arts and humanities career exploration guides for grade 7-12 teachers, counselors, and students, this student book on exploration of visual arts and crafts careers presents information on specific occupations in seven different career areas: Visual communications, product design, environmental…
Visually Guided Step Descent in Children with Williams Syndrome
ERIC Educational Resources Information Center
Cowie, Dorothy; Braddick, Oliver; Atkinson, Janette
2012-01-01
Individuals with Williams syndrome (WS) have impairments in visuospatial tasks and in manual visuomotor control, consistent with parietal and cerebellar abnormalities. Here we examined whether individuals with WS also have difficulties in visually controlling whole-body movements. We investigated visual control of stepping down at a change of…
Guiding Visual Attention in Decision Making--Verbal Instructions versus Flicker Cueing
ERIC Educational Resources Information Center
Canal-Bruland, Rouwen
2009-01-01
Perceptual-cognitive processes play an important role in open, fast-paced, interceptive sports such as tennis, basketball, and soccer. Visual information processing has been shown to distinguish skilled from less skilled athletes. Research on the perceptual demands of sports performance has raised questions regarding athletes' visual information…
Learning from Chemical Visualizations: Comparing Generation and Selection
ERIC Educational Resources Information Center
Zhang, Zhihui Helen; Linn, Marcia C.
2013-01-01
Dynamic visualizations can make unseen phenomena such as chemical reactions visible but students need guidance to benefit from them. This study explores the value of generating drawings versus selecting among alternatives to guide students to learn chemical reactions from a dynamic visualization of hydrogen combustion as part of an online inquiry…
Action for Advocates of Family Literacy.
ERIC Educational Resources Information Center
National Center for Family Literacy, Louisville, KY.
Focusing on advocacy (as distinct from lobbying) at the federal level of government, the purpose of this guide is to help develop and implement an action plan for family literacy advocacy. Its advice may be adapted to state and local elected officials as well as those in the non-political environment. Sections of the guide address advocating for…
Teen Drinking Prevention Program. Community Action Guide.
ERIC Educational Resources Information Center
Substance Abuse and Mental Health Services Administration (DHHS/PHS), Rockville, MD. Center for Substance Abuse Prevention.
Preventing the use of alcohol and other drugs by young people is a critical issue for all Americans. This action guide is designed to help communities create programs that prevent the tragedies caused by underage drinking. It is intended as a tool that communities can use to create a broad-based public education program in which they can…
Taking on Turnover: An Action Guide for Child Care Center Teachers and Directors.
ERIC Educational Resources Information Center
Whitebook, Marcy; Bellm, Dan
Based on the "Taking On Turnover" training series conducted by the Center for the Child Care Workforce, this action guide for center-based child care teachers and directors is designed to assist in managing and reducing the increasingly serious problem of job turnover in the child care profession. Following several introductory sections,…
Visual perceptual learning by operant conditioning training follows rules of contingency
Kim, Dongho; Seitz, Aaron R; Watanabe, Takeo
2015-01-01
Visual perceptual learning (VPL) can occur as a result of a repetitive stimulus-reward pairing in the absence of any task. This suggests that rules that guide Conditioning, such as stimulus-reward contingency (e.g. that stimulus predicts the likelihood of reward), may also guide the formation of VPL. To address this question, we trained subjects with an operant conditioning task in which there were contingencies between the response to one of three orientations and the presence of reward. Results showed that VPL only occurred for positive contingencies, but not for neutral or negative contingencies. These results suggest that the formation of VPL is influenced by similar rules that guide the process of Conditioning. PMID:26028984
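The contingencies described above are reported only qualitatively, but a common way to quantify a response-reward contingency is Delta-P, the difference between the reward probability given the response and given its absence. The sketch below is a generic illustration of that definition; the probabilities are hypothetical, not the values used in the study.

```python
def delta_p(p_reward_given_response, p_reward_given_no_response):
    """Delta-P contingency: > 0 positive, == 0 neutral, < 0 negative."""
    return p_reward_given_response - p_reward_given_no_response

# Hypothetical reward schedules for three trained orientations
schedules = {"positive": (0.8, 0.2), "neutral": (0.5, 0.5), "negative": (0.2, 0.8)}
for label, (p_r, p_nr) in schedules.items():
    print(label, delta_p(p_r, p_nr))
```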
Murray, Jennifer; Williams, Brian; Hoskins, Gaylor; Skar, Silje; McGhee, John; Treweek, Shaun; Sniehotta, Falko F; Sheikh, Aziz; Brown, Gordon; Hagen, Suzanne; Cameron, Linda; Jones, Claire; Gauld, Dylan
2016-01-01
Visualisation techniques are used in a range of healthcare interventions. However, these frequently lack a coherent rationale or clear theoretical basis. This lack of definition and explicit targeting of the underlying mechanisms may impede the success and evaluation of the intervention. We describe the theoretical development, deployment, and pilot evaluation of a complex visually mediated behavioural intervention. The exemplar intervention focused on increasing physical activity among young people with asthma. We employed an explicit five-stage development model, which was actively supported by a consultative user group. The developmental stages involved establishing the theoretical basis, establishing a narrative structure, visual rendering, checking interpretation, and pilot testing. We conducted in-depth interviews and focus groups during early development and checking, followed by an online experiment for pilot testing. A total of 91 individuals, including young people with asthma, parents, teachers, and health professionals, were involved in development and testing. Our final intervention consisted of two components: (1) an interactive 3D computer animation to create intentions and (2) an action plan and volitional help sheet to promote the translation of intentions to behaviour. Theory was mediated throughout by visual and audio forms. The intervention was regarded as highly acceptable, engaging, and meaningful by all stakeholders. The perceived impact on asthma understanding and intentions was reported positively, with most individuals saying that the 3D computer animation had either clarified a range of issues or made them more real. Our five-stage model underpinned by extensive consultation worked well and is presented as a framework to support explicit decision-making for others developing theory-informed visually mediated interventions. We have demonstrated the ability to develop theory-based visually mediated behavioural interventions. However, attention needs to be paid to the potential ambiguity associated with images and thus the concept of visual literacy among patients. Our revised model may be helpful as a guide to aid development, acceptability, and ultimately effectiveness.
Effects of action video game training on visual working memory.
Blacker, Kara J; Curby, Kim M; Klobusicky, Elizabeth; Chein, Jason M
2014-10-01
The ability to hold visual information in mind over a brief delay is critical for acquiring information and navigating a complex visual world. Despite the ubiquitous nature of visual working memory (VWM) in our everyday lives, this system is fundamentally limited in capacity. Therefore, the potential to improve VWM through training is a growing area of research. An emerging body of literature suggests that extensive experience playing action video games yields a myriad of perceptual and attentional benefits. Several lines of converging work suggest that action video game play may influence VWM as well. The current study utilized a training paradigm to examine whether action video games cause improvements to the quantity and/or the quality of information stored in VWM. The results suggest that VWM capacity, as measured by a change detection task, is increased after action video game training, as compared with training on a control game, and that some improvement to VWM precision occurs with action game training as well. However, these findings do not appear to extend to a complex span measure of VWM, which is often thought to tap into higher-order executive skills. The VWM improvements seen in individuals trained on an action video game cannot be accounted for by differences in motivation or engagement, differential expectations, or baseline differences in demographics as compared with the control group used. In sum, action video game training represents a potentially unique and engaging platform by which this severely capacity-limited VWM system might be enhanced.
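The abstract does not state how change-detection capacity was estimated, but a widely used summary for whole-display change detection is Cowan's K, shown below as a generic illustration with hypothetical trial counts.

```python
def cowans_k(set_size, hits, misses, false_alarms, correct_rejections):
    """Cowan's K for a whole-display change-detection task:
    K = N * (hit rate - false-alarm rate)."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return set_size * (hit_rate - fa_rate)

# Hypothetical counts at set size 6
print(cowans_k(6, hits=70, misses=30, false_alarms=20, correct_rejections=80))  # 3.0
```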
77 FR 45535 - Aldicarb; Proposed Tolerance Actions
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-01
... Aldicarb; Proposed Tolerance Actions AGENCY: Environmental Protection Agency (EPA). ACTION: Proposed rule... Information A. Does this action apply to me? You may be potentially affected by this action if you are an... exhaustive, but rather provides a guide for readers regarding entities likely to be affected by this action...
Zhao, Jialiang
2014-03-01
The action plan for the prevention of avoidable blindness and visual impairment for 2014-2019, endorsed by the 66th World Health Assembly, is an important document for promoting the global prevention of blindness. The plan summarized the experiences and lessons of the global prevention of avoidable blindness and visual impairment from 2009 to 2013, raised the global goal for the prevention of blindness (a reduction in the prevalence of avoidable visual impairment by 25% by 2019 from the 2010 baseline), and set up monitoring indicators for realizing that goal. The document can serve as a roadmap to consolidate joint efforts aimed at working towards universal eye health worldwide. The action plan will have a deep and important impact on the prevention of blindness in China. We should implement the action plan for the prevention of avoidable blindness and visual impairment for 2014-2019 to push forward the sustainable development of blindness prevention in China.
Distributive Education Resource Supplement to the Consumer Education Curriculum Guide for Ohio.
ERIC Educational Resources Information Center
Ohio State Dept. of Education, Columbus. Div. of Vocational Education.
The activities contained in the guide are designed to supplement the distributive education curriculum with information that will prepare the student to become a more informed, skillful employee and help the marketing career oriented student better visualize his customer's buying problems. Four overall objectives are stated. The guide is organized…
Fiscal Officer Training, 1999-2000. Participant's Guide.
ERIC Educational Resources Information Center
Department of Education, Washington, DC.
This guide is intended for use by participants (college fiscal officers, business officers, bursars, loan managers, etc.) in a two-day workshop on Title IV of the reauthorized Higher Education Act. The guide includes copies of the visual displays used in the workshop, space for individual notes, sample forms, sample computer screens, quizzes, and…
Techniques for Daily Living: Curriculum Guides.
ERIC Educational Resources Information Center
Wooldridge, Lillian; And Others
Presented are specific guides concerning techniques for daily living which were developed by the child care staff at the Illinois Braille and Sight Saving School. The guides are designed for cottage parents of the children, who may have both visual and other handicaps, and show what daily living skills are necessary and appropriate for the…
A Visual Arts Guide for Idaho Schools, Grades 7-12.
ERIC Educational Resources Information Center
Idaho State Dept. of Education, Boise.
Approximately 50 art activities for students in junior and senior high school are presented in this curriculum guide. Introductory sections define the roles of school superintendents, principals, art supervisors, and art teachers in supporting art programs, and outline goals and objectives of an art curriculum. The bulk of the guide consists of…
User's Guide for Flight Simulation Data Visualization Workstation
NASA Technical Reports Server (NTRS)
Kaplan, Joseph A.; Chen, Ronnie; Kenney, Patrick S.; Koval, Christopher M.; Hutchinson, Brian K.
1996-01-01
Today's modern flight simulation research produces vast amounts of time sensitive data. The meaning of this data can be difficult to assess while in its raw format. Therefore, a method of breaking the data down and presenting it to the user in a graphical format is necessary. Simulation Graphics (SimGraph) is intended as a data visualization software package that will incorporate simulation data into a variety of animated graphical displays for easy interpretation by the simulation researcher. This document is intended as an end user's guide.
Navigation-guided optic canal decompression for traumatic optic neuropathy: Two case reports.
Bhattacharjee, Kasturi; Serasiya, Samir; Kapoor, Deepika; Bhattacharjee, Harsha
2018-06-01
Two cases of traumatic optic neuropathy presented with profound loss of vision. Both cases had received a course of intravenous corticosteroids elsewhere but did not improve. They underwent navigation-guided optic canal decompression via an external transcaruncular approach, following which both cases showed visual improvement. Postoperative visual evoked potentials and optical coherence tomography of the retinal nerve fibre layer showed improvement. These case reports emphasize the role of stereotactic navigation technology for optic canal decompression in cases of traumatic optic neuropathy.
Analytic Guided-Search Model of Human Performance Accuracy in Target-Localization Search Tasks
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.
2000-01-01
Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
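The analytic expressions and three free parameters of the extended Guided Search model are not reproduced in the abstract. As a generic point of reference only, the sketch below evaluates the standard signal-detection-theory prediction of localization accuracy for M statistically independent locations, where the target location adds a signal of strength d-prime to unit-variance Gaussian noise; this is not the authors' model, just the kind of accuracy computation such models are compared against.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def p_correct_localization(d_prime, n_locations):
    """Probability that the target location yields the maximum response among
    n_locations independent unit-variance Gaussian responses (target mean = d')."""
    integrand = lambda x: norm.pdf(x - d_prime) * norm.cdf(x) ** (n_locations - 1)
    p, _ = quad(integrand, -np.inf, np.inf)
    return p

for d in (0.5, 1.0, 2.0):
    print(d, round(p_correct_localization(d, n_locations=8), 3))
```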
Azizi, Elham; Abel, Larry A; Stainer, Matthew J
2017-02-01
Action game playing has been associated with several improvements in visual attention tasks. However, it is not clear how such changes might influence the way we overtly select information from our visual world (i.e. eye movements). We examined whether action-video-game training changed eye movement behaviour in a series of visual search tasks including conjunctive search (relatively abstracted from natural behaviour), game-related search, and more naturalistic scene search. Forty nongamers were trained in either an action first-person shooter game or a card game (control) for 10 hours. As a further control, we recorded eye movements of 20 experienced action gamers on the same tasks. The results did not show any change in duration of fixations or saccade amplitude either from before to after training or between all nongamers (pretraining) and experienced action gamers. However, we observed a change in search strategy, reflected by a reduction in the vertical distribution of fixations for the game-related search task in the action-game-trained group. This might suggest learning the likely distribution of targets. In other words, game training only taught participants to search game images for targets important to the game, with no indication of transfer to the more natural scene search. Taken together, these results suggest no modification in overt allocation of attention. Either the skills that can be trained with action gaming are not powerful enough to influence information selection through eye movements, or action-game-learned skills are not used when deciding where to move the eyes.
Visual Feedback Dominates the Sense of Agency for Brain-Machine Actions
Evans, Nathan; Gale, Steven; Schurger, Aaron; Blanke, Olaf
2015-01-01
Recent advances in neuroscience and engineering have led to the development of technologies that permit the control of external devices through real-time decoding of brain activity (brain-machine interfaces; BMI). Though the feeling of controlling bodily movements (sense of agency; SOA) has been well studied and a number of well-defined sensorimotor and cognitive mechanisms have been put forth, very little is known about the SOA for BMI-actions. Using an on-line BMI, and verifying that our subjects achieved a reasonable level of control, we sought to describe the SOA for BMI-mediated actions. Our results demonstrate that discrepancies between decoded neural activity and its resultant real-time sensory feedback are associated with a decrease in the SOA, similar to SOA mechanisms proposed for bodily actions. However, if the feedback discrepancy serves to correct a poorly controlled BMI-action, then the SOA can be high and can increase with increasing discrepancy, demonstrating the dominance of visual feedback on the SOA. Taken together, our results suggest that bodily and BMI-actions rely on common mechanisms of sensorimotor integration for agency judgments, but that visual feedback dominates the SOA in the absence of overt bodily movements or proprioceptive feedback, however erroneous the visual feedback may be. PMID:26066840
Simple control-theoretic models of human steering activity in visually guided vehicle control
NASA Technical Reports Server (NTRS)
Hess, Ronald A.
1991-01-01
A simple control theoretic model of human steering or control activity in the lateral-directional control of vehicles such as automobiles and rotorcraft is discussed. The term 'control theoretic' is used to emphasize the fact that the model is derived from a consideration of well-known control system design principles as opposed to psychological theories regarding egomotion, etc. The model is employed to emphasize the 'closed-loop' nature of tasks involving the visually guided control of vehicles upon, or in close proximity to, the earth and to hypothesize how changes in vehicle dynamics can significantly alter the nature of the visual cues which a human might use in such tasks.
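The paper's specific model is not given in the abstract, so the following is only a minimal closed-loop illustration of the idea: a "driver" steers a constant-speed kinematic single-track vehicle using feedback of two visually available cues, lateral offset and heading, and the loop drives the lateral error toward zero. The gains, speed, and wheelbase are hypothetical.

```python
import numpy as np

# Minimal kinematic single-track ("bicycle") vehicle at constant speed.
V, L = 20.0, 2.7          # speed (m/s), wheelbase (m)
dt, T = 0.01, 10.0        # integration step and duration (s)

# Hypothetical driver gains: steer against lateral offset and heading error.
K_y, K_psi = 0.05, 1.2    # rad per m of offset, rad per rad of heading

y, psi = 2.0, 0.0         # initial lateral offset (m) and heading (rad)
log = []
for _ in range(int(T / dt)):
    delta = -(K_y * y + K_psi * psi)      # "visual" feedback steering law
    delta = np.clip(delta, -0.5, 0.5)     # steering limit (rad)
    psi += (V / L) * np.tan(delta) * dt   # heading kinematics
    y += V * np.sin(psi) * dt             # lateral kinematics
    log.append(y)

print(f"lateral offset after {T:.0f} s: {log[-1]:.3f} m")
```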
Hendrix, Philipp; Senger, Sebastian; Griessenauer, Christoph J; Simgen, Andreas; Linsler, Stefan; Oertel, Joachim
2018-01-01
To report a technique for endoscopic cystoventriculostomy guided by preoperative navigated transcranial magnetic stimulation (nTMS) and tractography in a patient with a large speech-eloquent arachnoid cyst. A 74-year-old woman presented with a seizure and subsequent persistent anomic aphasia from a progressive left-sided parietal arachnoid cyst. An endoscopic cystoventriculostomy and endoscope-assisted ventricle catheter placement were performed. Surgery was guided by preoperative nTMS and tractography to avoid eloquent language, motor, and visual pathways. Preoperative nTMS motor and language mapping were used to guide tractography of motor and language white matter tracts. The ideal locations of the entry point and cystoventriculostomy, as well as the trajectory for stent placement, were determined preoperatively with a pseudo-three-dimensional model visualizing eloquent language, motor, and visual cortical and subcortical information. The early postoperative course was uneventful. At her 3-month follow-up visit, her language impairments had completely recovered. Additionally, magnetic resonance imaging demonstrated complete collapse of the arachnoid cyst. The combination of nTMS and tractography supports the identification of a safe trajectory for cystoventriculostomy in eloquent arachnoid cysts. Copyright © 2017 Elsevier Inc. All rights reserved.
Weeks, Margaret R; Li, Jianghong; Lounsbury, David; Green, Helena Danielle; Abbott, Maryann; Berman, Marcie; Rohena, Lucy; Gonzalez, Rosely; Lang, Shawn; Mosher, Heather
2017-12-01
Achieving community-level goals to eliminate the HIV epidemic requires coordinated efforts through community consortia with a common purpose to examine and critique their own HIV testing and treatment (T&T) care system and build effective tools to guide their efforts to improve it. Participatory system dynamics (SD) modeling offers conceptual, methodological, and analytical tools to engage diverse stakeholders in systems conceptualization and visual mapping of dynamics that undermine community-level health outcomes and identify those that can be leveraged for systems improvement. We recruited and engaged a 25-member multi-stakeholder Task Force, whose members provide or utilize HIV-related services, to participate in SD modeling to examine and address problems of their local HIV T&T service system. Findings from the iterative model building sessions indicated Task Force members' increasingly complex understanding of the local HIV care system and demonstrated their improved capacity to visualize and critique multiple models of the HIV T&T service system and identify areas of potential leverage. Findings also showed members' enhanced communication and consensus in seeking deeper systems understanding and options for solutions. We discuss implications of using these visual SD models for subsequent simulation modeling of the T&T system and for other community applications to improve system effectiveness. © Society for Community Research and Action 2017.
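As a toy illustration of the kind of simulation that typically follows participatory stock-and-flow mapping, the sketch below integrates a three-stock testing-and-treatment cascade (undiagnosed, diagnosed, in care) with simple Euler steps. The structure and rates are invented for illustration and do not represent the Task Force's model.

```python
# Illustrative stocks (people) and annual flow rates (fractions per year).
undiagnosed, diagnosed, in_care = 1000.0, 400.0, 600.0
testing_rate, linkage_rate, dropout_rate = 0.30, 0.60, 0.10

dt, years = 0.1, 10
for _ in range(int(years / dt)):
    tested = testing_rate * undiagnosed * dt   # undiagnosed -> diagnosed
    linked = linkage_rate * diagnosed * dt     # diagnosed -> in care
    dropped = dropout_rate * in_care * dt      # in care -> diagnosed (falls out of care)
    undiagnosed += -tested
    diagnosed += tested - linked + dropped
    in_care += linked - dropped

print(f"after {years} y: undiagnosed={undiagnosed:.0f}, "
      f"diagnosed={diagnosed:.0f}, in care={in_care:.0f}")
```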
Effect of visual and tactile feedback on kinematic synergies in the grasping hand.
Patel, Vrajeshri; Burns, Martin; Vinjamuri, Ramana
2016-08-01
The human hand uses a combination of feedforward and feedback mechanisms to accomplish high degree of freedom in grasp control efficiently. In this study, we used a synergy-based control model to determine the effect of sensory feedback on kinematic synergies in the grasping hand. Ten subjects performed two types of grasps: one that included feedback (real) and one without feedback (memory-guided), at two different speeds (rapid and natural). Kinematic synergies were extracted from rapid real and rapid memory-guided grasps using principal component analysis. Synergies extracted from memory-guided grasps revealed greater preservation of natural inter-finger relationships than those found in corresponding synergies extracted from real grasps. Reconstruction of natural real and natural memory-guided grasps was used to test performance and generalizability of synergies. A temporal analysis of reconstruction patterns revealed the differing contribution of individual synergies in real grasps versus memory-guided grasps. Finally, the results showed that memory-guided synergies could not reconstruct real grasps as accurately as real synergies could reconstruct memory-guided grasps. These results demonstrate how visual and tactile feedback affects a closed-loop synergy-based motor control system.
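As a generic sketch of the PCA step described above (extracting kinematic synergies from a samples-by-joints matrix of hand joint angles and reconstructing grasps from a reduced set of synergies), with hypothetical low-rank data standing in for recorded grasps:

```python
import numpy as np

def extract_synergies(joint_angles, n_synergies):
    """PCA on a (samples x joints) matrix of hand joint angles.
    Returns the mean posture, the synergy basis (n_synergies x joints),
    and the fraction of variance each synergy accounts for."""
    mean = joint_angles.mean(axis=0)
    X = joint_angles - mean
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    var_explained = S ** 2 / np.sum(S ** 2)
    return mean, Vt[:n_synergies], var_explained[:n_synergies]

def reconstruct(joint_angles, mean, synergies):
    """Project grasps onto the synergy basis and reconstruct them."""
    scores = (joint_angles - mean) @ synergies.T
    return scores @ synergies + mean

# Hypothetical data: 200 grasp samples x 15 joint angles with low-rank structure
rng = np.random.default_rng(3)
latent = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 15))
angles = latent + 0.1 * rng.normal(size=(200, 15))

mean, syn, var = extract_synergies(angles, n_synergies=3)
recon = reconstruct(angles, mean, syn)
print("variance accounted for:", var.round(3),
      "RMSE:", np.sqrt(np.mean((angles - recon) ** 2)).round(3))
```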
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-15
... Receiving Packages of Radioactive Material AGENCY: Nuclear Regulatory Commission. ACTION: Notice of... Guide (RG) 7.3, ``Procedures for Picking Up and Receiving Packages of Radioactive Material.'' The guide..., ``Administrative Guide for Verifying Compliance with Packaging Requirements for Shipment and Receipt of Radioactive...
75 FR 52996 - Draft Regulatory Guide: Issuance, Availability
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-30
... NUCLEAR REGULATORY COMMISSION [NRC-2010-0287] Draft Regulatory Guide: Issuance, Availability AGENCY: Nuclear Regulatory Commission. ACTION: Notice of issuance and availability of Draft Regulatory Guide, DG-8035, ``Administrative Practices in Radiation Surveys and Monitoring.'' FOR FURTHER...
76 FR 38212 - Notice of Issuance of Regulatory Guide
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-29
... NUCLEAR REGULATORY COMMISSION [NRC-2010-0275] Notice of Issuance of Regulatory Guide AGENCY: Nuclear Regulatory Commission. ACTION: Notice of Issuance and Availability of Revision 1 of Regulatory Guide (RG) 1.179, ``Standard Format...
Drivers’ Visual Behavior-Guided RRT Motion Planner for Autonomous On-Road Driving
Du, Mingbo; Mei, Tao; Liang, Huawei; Chen, Jiajia; Huang, Rulin; Zhao, Pan
2016-01-01
This paper describes a real-time motion planner based on the drivers’ visual behavior-guided rapidly exploring random tree (RRT) approach, which is applicable to on-road driving of autonomous vehicles. The primary novelty is in the use of the guidance of drivers’ visual search behavior in the framework of RRT motion planner. RRT is an incremental sampling-based method that is widely used to solve the robotic motion planning problems. However, RRT is often unreliable in a number of practical applications such as autonomous vehicles used for on-road driving because of the unnatural trajectory, useless sampling, and slow exploration. To address these problems, we present an interesting RRT algorithm that introduces an effective guided sampling strategy based on the drivers’ visual search behavior on road and a continuous-curvature smooth method based on B-spline. The proposed algorithm is implemented on a real autonomous vehicle and verified against several different traffic scenarios. A large number of the experimental results demonstrate that our algorithm is feasible and efficient for on-road autonomous driving. Furthermore, the comparative test and statistical analyses illustrate that its excellent performance is superior to other previous algorithms. PMID:26784203
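The planner itself is not specified beyond the description above, so the sketch below is only a toy 2-D RRT in which samples are biased toward a corridor around the start-goal line (a crude stand-in for the gaze-guided sampling strategy) and the resulting waypoint path is smoothed with a B-spline. The obstacle layout, corridor width, and step size are all hypothetical.

```python
import numpy as np
from scipy.interpolate import splprep, splev

rng = np.random.default_rng(4)
start, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
obstacles = [(np.array([5.0, 0.5]), 1.2)]            # (center, radius) discs
step, goal_tol, corridor_sd = 0.5, 0.5, 2.0

def collision_free(p):
    return all(np.linalg.norm(p - c) > r for c, r in obstacles)

def guided_sample():
    """Bias samples toward a corridor around the start-goal line,
    a crude stand-in for gaze-guided sampling."""
    along = start + rng.uniform() * (goal - start)
    return along + rng.normal(0.0, corridor_sd, size=2)

nodes, parents = [start], [None]
for _ in range(4000):
    q = goal if rng.uniform() < 0.05 else guided_sample()   # small goal bias
    i = int(np.argmin([np.linalg.norm(q - n) for n in nodes]))
    direction = q - nodes[i]
    new = nodes[i] + step * direction / (np.linalg.norm(direction) + 1e-9)
    if not collision_free(new):
        continue
    nodes.append(new)
    parents.append(i)
    if np.linalg.norm(new - goal) < goal_tol:
        break

# Trace the path back to the start (assumes the goal was reached) and smooth it.
path, j = [], len(nodes) - 1
while j is not None:
    path.append(nodes[j])
    j = parents[j]
path = np.array(path[::-1])
tck, _ = splprep([path[:, 0], path[:, 1]], s=1.0)
smooth = np.array(splev(np.linspace(0.0, 1.0, 100), tck)).T
print(f"{len(path)} waypoints smoothed into {smooth.shape[0]} spline points")
```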
Enhanced visual short-term memory in action video game players.
Blacker, Kara J; Curby, Kim M
2013-08-01
Visual short-term memory (VSTM) is critical for acquiring visual knowledge and shows marked individual variability. Previous work has illustrated a VSTM advantage among action video game players (Boot et al. Acta Psychologica 129:387-398, 2008). A growing body of literature has suggested that action video game playing can bolster visual cognitive abilities in a domain-general manner, including abilities related to visual attention and the speed of processing, providing some potential bases for this VSTM advantage. In the present study, we investigated the VSTM advantage among video game players and assessed whether enhanced processing speed can account for this advantage. Experiment 1, using simple colored stimuli, revealed that action video game players demonstrate a similar VSTM advantage over nongamers, regardless of whether they are given limited or ample time to encode items into memory. Experiment 2, using complex shapes as the stimuli to increase the processing demands of the task, replicated this VSTM advantage, irrespective of encoding duration. These findings are inconsistent with a speed-of-processing account of this advantage. An alternative, attentional account, grounded in the existing literature on the visuo-cognitive consequences of video game play, is discussed.
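For readers unfamiliar with how VSTM capacity is typically quantified in change-detection tasks of this kind, a common (though not necessarily the authors') measure is Cowan's K, K = set size × (hit rate − false-alarm rate). The sketch below uses made-up counts purely for illustration.

```python
# Generic Cowan's K computation for a change-detection task (numbers are invented).
def cowans_k(set_size, hits, misses, false_alarms, correct_rejections):
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return set_size * (hit_rate - fa_rate)

print(cowans_k(set_size=6, hits=40, misses=10, false_alarms=8, correct_rejections=42))
```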
Petruno, Sarah K; Clark, Robert E; Reinagel, Pamela
2013-01-01
The pigmented Long-Evans rat has proven to be an excellent subject for studying visually guided behavior including quantitative visual psychophysics. This observation, together with its experimental accessibility and its close homology to the mouse, has made it an attractive model system in which to dissect the thalamic and cortical circuits underlying visual perception. Given that visually guided behavior in the absence of primary visual cortex has been described in the literature, however, it is an empirical question whether specific visual behaviors will depend on primary visual cortex in the rat. Here we tested the effects of cortical lesions on performance of two-alternative forced-choice visual discriminations by Long-Evans rats. We present data from one highly informative subject that learned several visual tasks and then received a bilateral lesion ablating >90% of primary visual cortex. After the lesion, this subject had a profound and persistent deficit in complex image discrimination, orientation discrimination, and full-field optic flow motion discrimination, compared with both pre-lesion performance and sham-lesion controls. Performance was intact, however, on another visual two-alternative forced-choice task that required approaching a salient visual target. A second highly informative subject learned several visual tasks prior to receiving a lesion ablating >90% of medial extrastriate cortex. This subject showed no impairment on any of the four task categories. Taken together, our data provide evidence that these image, orientation, and motion discrimination tasks require primary visual cortex in the Long-Evans rat, whereas approaching a salient visual target does not.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michimoto, Kenkichi, E-mail: michikoo@jikei.ac.jp; Shimizu, Kanichiro; Kameoka, Yoshihiko
Purpose: To retrospectively evaluate the feasibility of transcatheter arterial embolization (TAE) using a mixture of absolute ethanol and iodized oil to improve localization of endophytic renal masses on unenhanced computed tomography (CT) prior to CT-guided percutaneous cryoablation (PCA). Materials and Methods: Our institutional review board approved this retrospective study. From September 2011 to June 2015, 17 patients (mean age, 66.8 years) with stage T1a endophytic renal masses (mean diameter, 26.5 mm) underwent TAE using a mixture of absolute ethanol and iodized oil to improve visualization of small and endophytic renal masses on unenhanced CT prior to CT-guided PCA. TAE was considered successful when the accumulated iodized oil depicted the whole tumor edge on CT. PCA was considered successful when the iceball covered the entire tumor with over a 5 mm margin. Oncological and renal functional outcomes and complications were also evaluated. Results: TAE was successfully performed in 16 of 17 endophytic tumors. These 16 tumors then underwent CT-guided PCA with distinct visualization of tumor localization and a safe ablation margin. During the mean follow-up period of 15.4 ± 5.1 months, one patient developed local recurrence. Estimated glomerular filtration rate declined by 8% with statistical significance (P = 0.01). There were no significant procedure-related complications. Conclusion: TAE using a mixture of absolute ethanol and iodized oil facilitated localization of endophytic renal masses on unenhanced CT, permitting depiction of the tumor edge as well as a safe margin for ablation during CT-guided PCA, with an acceptable decline in renal function.
Vision-guided ocular growth in a mutant chicken model with diminished visual acuity
Ritchey, Eric R.; Zelinka, Christopher; Tang, Junhua; Liu, Jun; Code, Kimberly A.; Petersen-Jones, Simon; Fischer, Andy J.
2012-01-01
Visual experience is known to guide ocular growth. We tested the hypothesis that vision-guided ocular growth is disrupted in a model system with diminished visual acuity. We examined whether ocular elongation is influenced by form-deprivation (FD) and lens-imposed defocus in the Retinopathy, Globe Enlarged (RGE) chicken. Young RGE chicks have poor visual acuity, without significant retinal pathology, resulting from a mutation in guanine nucleotide-binding protein β3 (GNB3), also known as transducin β3 or Gβ3. The mutation in GNB3 destabilizes the protein and causes a loss of Gβ3 from photoreceptors and ON-bipolar cells (Ritchey et al. 2010). FD increased ocular elongation in RGE eyes in a manner similar to that seen in wild-type (WT) eyes. By comparison, the excessive ocular elongation that results from hyperopic defocus was increased in RGE eyes, whereas myopic defocus failed to significantly decrease ocular elongation. Brief daily periods of unrestricted vision interrupting FD prevented ocular elongation in RGE chicks in a manner similar to that seen in WT chicks. Glucagonergic amacrine cells differentially expressed the immediate early gene Egr1 in response to growth-guiding stimuli in RGE retinas, but the defocus-dependent up-regulation of Egr1 was smaller in RGE retinas than in WT retinas. We conclude that high visual acuity and the retinal signaling mediated by Gβ3 are not required for emmetropization or for the excessive ocular elongation caused by FD and hyperopic defocus. However, the loss of acuity and Gβ3 from RGE retinas causes enhanced responses to hyperopic defocus and diminished responses to myopic defocus. PMID:22824538
Selection-for-action in visual search.
Hannus, Aave; Cornelissen, Frans W; Lindemann, Oliver; Bekkering, Harold
2005-01-01
Grasping an object rather than pointing to it enhances processing of its orientation but not its color. Apparently, visual discrimination is selectively enhanced for a behaviorally relevant feature. In two experiments we investigated the limitations and targets of this bias. Specifically, in Experiment 1 we asked whether the effect is capacity demanding; therefore, we manipulated the set size of the display. The results indicated a clear cognitive processing capacity requirement: the magnitude of the effect decreased for the larger set size. Consequently, in Experiment 2, we investigated whether the enhancement effect occurs only at the level of the behaviorally relevant feature or at a level common to different features. To this end, we manipulated the discriminability of the behaviorally neutral feature (color). Again, the results showed that this manipulation influenced the action enhancement of the behaviorally relevant feature. In particular, the effect of the color manipulation on the action enhancement suggests that the action effect is more likely to bias the competition between different visual features than to enhance the processing of the relevant feature alone. We offer a theoretical account that integrates the action-intention effect within the biased competition model of visual selective attention.
Disaster medicine through Google Glass.
Carenzo, Luca; Barra, Federico Lorenzo; Ingrassia, Pier Luigi; Colombo, Davide; Costa, Alessandro; Della Corte, Francesco
2015-06-01
Nontechnical skills can make a difference in the management of disasters and mass casualty incidents, and any tool that helps providers in action might improve their ability to respond to such events. Google Glass, released by Google as a new personal communication device, could play a role in this field. We recently tested Google Glass during a full-scale exercise to perform visually guided, augmented-reality Simple Triage and Rapid Treatment (START) triage using a custom-made application, and to identify casualties and collect georeferenced notes, photos, and videos to be incorporated into the debriefing. Despite some limitations (battery life and privacy concerns), Glass is a promising technology both for telemedicine applications and for augmented-reality disaster response support: it can increase operators' performance, helping them make better choices in the field and optimize timing, and it represents an excellent option for taking professional education to a higher level.
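For context, the decision logic behind START triage is simple enough to encode directly, which is what makes it attractive for a heads-up display application. The sketch below follows the commonly taught adult START scheme (walking wounded, breathing after airway repositioning, respiratory rate, perfusion, mental status); the function and its inputs are illustrative, not the custom application's actual code.

```python
# Hedged sketch of the START triage decision sequence (illustrative only).
def start_triage(can_walk, breathing, resp_rate, radial_pulse, cap_refill_s, obeys_commands):
    if can_walk:
        return "GREEN (minor)"
    if not breathing:
        return "BLACK (expectant)"   # after a single attempt to reposition the airway
    if resp_rate > 30:
        return "RED (immediate)"
    if (not radial_pulse) or cap_refill_s > 2:
        return "RED (immediate)"
    if not obeys_commands:
        return "RED (immediate)"
    return "YELLOW (delayed)"

print(start_triage(can_walk=False, breathing=True, resp_rate=24,
                   radial_pulse=True, cap_refill_s=1.5, obeys_commands=True))
```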
ERIC Educational Resources Information Center
Thormann, Joan, Ed.; And Others
This guide for school administrators interested in technology integration in the curriculum was developed from discussions at the Technology Integration Seminar held in March, 1991. The guide is divided into five chapters covering various administrative responsibilities and action steps. The first chapter presents an overview and identifies…
Organizing Community-Wide Dialogue for Action and Change: A Step-by-Step Guide.
ERIC Educational Resources Information Center
Campbell, Sarah vL.; Malick, Amy; Landesman, John; Barrett, Molly Holme; Leighninger, Matt; McCoy, Martha L.; Scully, Patrick L.
This document is a step-by-step guide to organizing a study circle program to serve as a vehicle to achieve communitywide dialogue for action and change. Part 1 provides an overview of communitywide study circle programs, with special emphasis on their operation and impact. Part 2 details the following steps in organizing a communitywide study…
Virtual Worlds, Virtual Literacy: An Educational Exploration
ERIC Educational Resources Information Center
Stoerger, Sharon
2008-01-01
Virtual worlds enable students to learn through seeing, knowing, and doing within visually rich and mentally engaging spaces. Rather than reading about events, students become part of the events through the adoption of a pre-set persona. Along with visual feedback that guides the players' activities and the development of visual skills, visual…
Using Visual Literacy to Teach Science Academic Language: Experiences from Three Preservice Teachers
ERIC Educational Resources Information Center
Kelly-Jackson, Charlease; Delacruz, Stacy
2014-01-01
This original pedagogical study captured three preservice teachers' experiences using visual literacy strategies as an approach to teaching English language learners (ELLs) science academic language. The following research questions guided this study: (1) What are the experiences of preservice teachers' use of visual literacy to teach science…
Task Demands Control Acquisition and Storage of Visual Information
ERIC Educational Resources Information Center
Droll, Jason A.; Hayhoe, Mary M.; Triesch, Jochen; Sullivan, Brian T.
2005-01-01
Attention and working memory limitations set strict limits on visual representations, yet researchers have little appreciation of how these limits constrain the acquisition of information in ongoing visually guided behavior. Subjects performed a brick sorting task in a virtual environment. A change was made to 1 of the features of the brick being…
Self-Monitoring of Gaze in High Functioning Autism
ERIC Educational Resources Information Center
Grynszpan, Ouriel; Nadel, Jacqueline; Martin, Jean-Claude; Simonin, Jerome; Bailleul, Pauline; Wang, Yun; Gepner, Daniel; Le Barillier, Florence; Constant, Jacques
2012-01-01
Atypical visual behaviour has been recently proposed to account for much of social misunderstanding in autism. Using an eye-tracking system and a gaze-contingent lens display, the present study explores self-monitoring of eye motion in two conditions: free visual exploration and guided exploration via blurring the visual field except for the focal…
Visual Landmarks Facilitate Rodent Spatial Navigation in Virtual Reality Environments
ERIC Educational Resources Information Center
Youngstrom, Isaac A.; Strowbridge, Ben W.
2012-01-01
Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain…
Evidence from Visuomotor Adaptation for Two Partially Independent Visuomotor Systems
ERIC Educational Resources Information Center
Thaler, Lore; Todd, James T.
2010-01-01
Visual information can specify spatial layout with respect to the observer (egocentric) or with respect to an external frame of reference (allocentric). People can use both of these types of visual spatial information to guide their hands. The question arises if movements based on egocentric and movements based on allocentric visual information…
The Preference of Visualization in Teaching and Learning Absolute Value
ERIC Educational Resources Information Center
Konyalioglu, Alper Cihan; Aksu, Zeki; Senel, Esma Ozge
2012-01-01
Visualization is mostly despised although it complements and--sometimes--guides the analytical process. This study mainly investigates teachers' preferences concerning the use of the visualization method and determines the extent to which they encourage their students to make use of it within the problem-solving process. This study was conducted…
McGuckian, Thomas B; Cole, Michael H; Pepping, Gert-Jan
2018-04-01
To visually perceive opportunities for action, athletes rely on the movements of their eyes, head and body to explore their surrounding environment. To date, the specific types of technology and their efficacy for assessing the exploration behaviours of association footballers have not been systematically reviewed. This review aimed to synthesise the visual perception and exploration behaviours of footballers according to the task constraints, the action requirements of the experimental task, and the level of expertise of the athlete, in the context of the technology used to quantify these behaviours. A systematic search for papers that included keywords related to football, technology, and visual perception was conducted. All 38 included articles utilised eye-movement registration technology to quantify visual perception and exploration behaviour. The experimental domain appears to influence the visual perception behaviour of footballers; however, no studies investigated the exploration behaviours of footballers in open-play situations. Studies rarely utilised representative stimulus presentation or action requirements. To fully understand the visual perception requirements of athletes, it is recommended that future research seek to validate alternate technologies that are capable of investigating the eye, head and body movements associated with the exploration behaviours of footballers during representative open-play situations.
Begley, Ann Marie
2005-11-01
A virtue-centred approach to ethics has been criticized for being vague owing to the nature of its central concept, the paradigm person. From the perspective of the practitioner, the most damaging charge is that virtue ethics fails to be action-guiding and, in addition, that it does not offer any means of act appraisal. These criticisms leave virtue ethics in a weak position vis-à-vis traditional approaches to ethics. These criticisms are, however, challenged by Hursthouse in her analysis of the accounts of right action offered by deontology, utilitarianism and virtue ethics. It is possible to defend the action-guiding nature of virtue ethics: there are virtue rules and exemplars to guide action. Insights from Aristotle's practical approach to ethics are considered alongside Hursthouse's analysis, and it is suggested that virtue ethics is also capable of facilitating action appraisal. It is at the same time acknowledged that approaches to virtue ethics vary widely and that the challenges offered here would be rejected by those who embrace a radical replacement virtue approach.
75 FR 1830 - Final Regulatory Guide: Issuance, Availability
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-13
... review of applications for permits and licenses. RG 5.71, ``Cyber Security Programs for Nuclear... NUCLEAR REGULATORY COMMISSION [NRC-2010-0009] Final Regulatory Guide: Issuance, Availability AGENCY: Nuclear Regulatory Commission. ACTION: Notice of Issuance and Availability of Regulatory Guide...
76 FR 32878 - Draft Regulatory Guide: Issuance, Availability
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-07
...-0129] Draft Regulatory Guide: Issuance, Availability AGENCY: Nuclear Regulatory Commission. ACTION: Notice of Issuance and Availability of Draft Regulatory Guide, DG-1253, ``Preoperational Testing of Emergency Core Cooling Systems for Pressurized-Water Reactors''. FOR FURTHER INFORMATION CONTACT: Mekonen M...
Artist-Teachers' In-Action Mental Models While Teaching Visual Arts
ERIC Educational Resources Information Center
Russo-Zimet, Gila
2017-01-01
Studies have examined the assumption that teachers have previous perceptions, beliefs and knowledge about learning (Cochran-Smith & Villegas, 2015). This study presented the In-Action Mental Model of twenty leading artist-teachers while teaching Visual Arts in three Israeli art institutions of higher Education. Data was collected in two…
Monitoring others' errors: The role of the motor system in early childhood and adulthood.
Meyer, Marlene; Braukmann, Ricarda; Stapel, Janny C; Bekkering, Harold; Hunnius, Sabine
2016-03-01
Previous research demonstrates that from early in life, our cortical sensorimotor areas are activated both when performing and when observing actions (mirroring). Recent findings suggest that the adult motor system is also involved in detecting others' rule violations. Yet, how this translates to everyday action errors (e.g., accidentally dropping something) and how error-sensitive motor activity for others' actions emerges are still unknown. In this study, we examined the role of the motor system in error monitoring. Participants observed successful and unsuccessful pincer grasp actions while their electroencephalogram was recorded. We tested infants (8- and 14-month-olds) at different stages of learning the pincer grasp and adults as advanced graspers. Power in the alpha and beta frequency bands was analysed to assess motor and visual processing. Adults showed enhanced motor activity when observing erroneous actions. However, neither 8- nor 14-month-olds displayed this error sensitivity, despite showing motor activity for both actions. All groups did show similar visual activity, that is, more alpha suppression, when observing correct actions. Thus, while correct and erroneous actions were processed as visually distinct in all age groups, only the adults' motor system was sensitive to action correctness. The functional role of different brain oscillations in the development of error monitoring and mirroring is discussed. © 2015 The British Psychological Society.
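As a generic illustration of how band power in the alpha and beta ranges can be estimated from a single EEG channel, the sketch below uses Welch's method on a synthetic signal. The sampling rate, band limits, and data are conventional placeholder values, not taken from this study (and infant alpha/mu bands are typically defined at lower frequencies than the adult values used here).

```python
# Estimate alpha/beta band power with Welch's method (synthetic signal, assumed parameters).
import numpy as np
from scipy.signal import welch

fs = 250.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # fake 10 Hz rhythm + noise

freqs, psd = welch(eeg, fs=fs, nperseg=512)

def band_power(freqs, psd, lo, hi):
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])   # integrate the PSD over the band

alpha = band_power(freqs, psd, 8, 12)
beta = band_power(freqs, psd, 13, 30)
print(alpha, beta)
```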
Eye Movements, Visual Search and Scene Memory, in an Immersive Virtual Environment
Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary
2014-01-01
Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. In contrast, natural experience entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide an obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency. PMID:24759905
Disappearance of the inversion effect during memory-guided tracking of scrambled biological motion.
Jiang, Changhao; Yue, Guang H; Chen, Tingting; Ding, Jinhong
2016-08-01
The human visual system is highly sensitive to biological motion. Even when a point-light walker is temporarily occluded from view by other objects, our eyes are still able to maintain tracking continuity. To investigate how the visual system establishes a correspondence between the biological-motion stimuli visible before and after the disruption, we used the occlusion paradigm with biological-motion stimuli that were intact or scrambled. The results showed that during visually guided tracking, both the observers' predicted times and predictive smooth pursuit were more accurate for upright biological motion (intact and scrambled) than for inverted biological motion. During memory-guided tracking, however, the processing advantage for upright as compared with inverted biological motion was not found in the scrambled condition, but in the intact condition only. This suggests that spatial location information alone is not sufficient to build and maintain the representational continuity of the biological motion across the occlusion, and that the object identity may act as an important information source in visual tracking. The inversion effect disappeared when the scrambled biological motion was occluded, which indicates that when biological motion is temporarily occluded and there is a complete absence of visual feedback signals, an oculomotor prediction is executed to maintain the tracking continuity, which is established not only by updating the target's spatial location, but also by the retrieval of identity information stored in long-term memory.
Müller, Romy; Helmert, Jens R; Pannasch, Sebastian
2014-10-01
Remote cooperation can be improved by transferring the gaze of one participant to the other. However, based on a partner's gaze, an interpretation of his communicative intention can be difficult. Thus, gaze transfer has been inferior to mouse transfer in remote spatial referencing tasks where locations had to be pointed out explicitly. Given that eye movements serve as an indicator of visual attention, it remains to be investigated whether gaze and mouse transfer differentially affect the coordination of joint action when the situation demands an understanding of the partner's search strategies. In the present study, a gaze or mouse cursor was transferred from a searcher to an assistant in a hierarchical decision task. The assistant could use this cursor to guide his movement of a window which continuously opened up the display parts the searcher needed to find the right solution. In this context, we investigated how the ease of using gaze transfer depended on whether a link could be established between the partner's eye movements and the objects he was looking at. Therefore, in addition to the searcher's cursor, the assistant either saw the positions of these objects or only a grey background. When the objects were visible, performance and the number of spoken words were similar for gaze and mouse transfer. However, without them, gaze transfer resulted in longer solution times and more verbal effort as participants relied more strongly on speech to coordinate the window movement. Moreover, an analysis of the spatio-temporal coupling of the transmitted cursor and the window indicated that when no visual object information was available, assistants confidently followed the searcher's mouse but not his gaze cursor. Once again, the results highlight the importance of carefully considering task characteristics when applying gaze transfer in remote cooperation. Copyright © 2013 Elsevier B.V. All rights reserved.
Adolescent Development of Value-Guided Goal Pursuit.
Davidow, Juliet Y; Insel, Catherine; Somerville, Leah H
2018-06-04
Adolescents are challenged to orchestrate goal-directed actions in increasingly independent and consequential ways. In doing so, it is advantageous to use information about value to select which goals to pursue and how much effort to devote to them. Here, we examine age-related changes in how individuals use value signals to orchestrate goal-directed behavior. Drawing on emerging literature on value-guided cognitive control and reinforcement learning, we demonstrate how value and task difficulty modulate the execution of goal-directed action in complex ways across development from childhood to adulthood. We propose that the scope of value-guided goal pursuit expands with age to include increasingly challenging cognitive demands, and scaffolds on the emergence of functional integration within brain networks supporting valuation, cognition, and action. Copyright © 2018 Elsevier Ltd. All rights reserved.
Self-Study and Evaluation Guide/1968 Edition. Section D-3: Rehabilitation Centers.
ERIC Educational Resources Information Center
National Accreditation Council for Agencies Serving the Blind and Visually Handicapped, New York, NY.
This self-study and evaluation guide on rehabilitation centers is one of 28 guides designed for organizations undertaking a self-study as part of the process for accreditation from the National Accreditation Council (NAC) for agencies serving the blind and visually handicapped. Provided are lists of standards to be appraised by the self-evaluation…
Self-Study and Evaluation Guide/1979 Edition. Section B-1: Agency Profile.
ERIC Educational Resources Information Center
National Accreditation Council for Agencies Serving the Blind and Visually Handicapped, New York, NY.
This guide on developing an agency profile is one of 28 guides designed for organizations serving the blind and the visually handicapped who are undertaking a self-study as part of the process for accreditation by the National Accreditation Council (NAC). Instructions for preparing a packet of informative data and material for advance study by…
Self-Study and Evaluation Guide/1977 Edition. Section D-8: Rehabilitation Teaching Services.
ERIC Educational Resources Information Center
National Accreditation Council for Agencies Serving the Blind and Visually Handicapped, New York, NY.
This self-study and evaluation guide on rehabilitation teaching services is one of 28 guides designed for organizations who are undertaking a self-study as part of the process for accreditation from the National Accreditation Council (NAC) for agencies serving the blind and visually handicapped. Provided are lists of standards to be appraised by…
Self-Study and Evaluation Guide [1976 Edition]. Section D-4: Workshop Services.
ERIC Educational Resources Information Center
National Accreditation Council for Agencies Serving the Blind and Visually Handicapped, New York, NY.
This self-study and evaluation guide on workshop service is one of twenty-eight guides designed for organizations who are undertaking a self-study as part of the process for accreditation from the National Accreditation Council (NAC) for agencies serving the blind and visually handicapped. Provided are lists of standards to be appraised by the…
Self-Study and Evaluation Guide/[1975 Edition]. Section D-6: Vocational Services.
ERIC Educational Resources Information Center
National Accreditation Council for Agencies Serving the Blind and Visually Handicapped, New York, NY.
This self-study and evaluation guide on vocational services is one of 28 guides designed for organizations who are undertaking a self-study as part of the process for accreditation from the National Accreditation Council (NAC) for agencies serving the blind and visually handicapped. Provided are lists of standards to be appraised by the…
ERIC Educational Resources Information Center
Sung, Y.-T.; Hou, H.-T.; Liu, C.-K.; Chang, K.-E.
2010-01-01
Mobile devices have been increasingly utilized in informal learning because of their high degree of portability; mobile guide systems (or electronic guidebooks) have also been adopted in museum learning, including those that combine learning strategies and the general audio-visual guide systems. To gain a deeper understanding of the features and…
Fiore, Vincenzo G.; Kottler, Benjamin; Gu, Xiaosi; Hirth, Frank
2017-01-01
The central complex in the insect brain is a composite of midline neuropils involved in processing sensory cues and mediating behavioral outputs to orchestrate spatial navigation. Despite recent advances, however, the neural mechanisms underlying sensory integration and motor action selections have remained largely elusive. In particular, it is not yet understood how the central complex exploits sensory inputs to realize motor functions associated with spatial navigation. Here we report an in silico interrogation of central complex-mediated spatial navigation with a special emphasis on the ellipsoid body. Based on known connectivity and function, we developed a computational model to test how the local connectome of the central complex can mediate sensorimotor integration to guide different forms of behavioral outputs. Our simulations show integration of multiple sensory sources can be effectively performed in the ellipsoid body. This processed information is used to trigger continuous sequences of action selections resulting in self-motion, obstacle avoidance and the navigation of simulated environments of varying complexity. The motor responses to perceived sensory stimuli can be stored in the neural structure of the central complex to simulate navigation relying on a collective of guidance cues, akin to sensory-driven innate or habitual behaviors. By comparing behaviors under different conditions of accessible sources of input information, we show the simulated insect computes visual inputs and body posture to estimate its position in space. Finally, we tested whether the local connectome of the central complex might also allow the flexibility required to recall an intentional behavioral sequence, among different courses of actions. Our simulations suggest that the central complex can encode combined representations of motor and spatial information to pursue a goal and thus successfully guide orientation behavior. Together, the observed computational features identify central complex circuitry, and especially the ellipsoid body, as a key neural correlate involved in spatial navigation. PMID:28824390
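To give a flavour of the kind of ellipsoid-body computation such models explore, here is a small, hypothetical ring-network sketch: a ring of units with local excitation and global inhibition holds a bump of activity, and an angular-velocity input nudges the bump around the ring so its position can be read out as a heading estimate. The connectivity, dynamics, and parameters are toy choices for illustration, not the authors' published model.

```python
# Toy ring network for heading: local excitation, global inhibition, velocity-driven bump shift.
import numpy as np

N = 16                                         # number of ring units ("wedges")
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
W = 1.2 * np.cos(theta[:, None] - theta[None, :]) - 0.4   # excitation nearby, inhibition overall

a = np.exp(np.cos(theta - np.pi))              # initial activity bump
dt, tau, k = 0.01, 0.1, 0.5

def step(a, ang_vel):
    # An asymmetric copy of the bump, scaled by angular velocity, pushes it around the ring.
    drive = W @ a + k * ang_vel * (np.roll(a, 1) - np.roll(a, -1))
    a = a + dt * (-a + np.maximum(drive, 0.0)) / tau
    return a / max(a.max(), 1e-9)              # keep activity bounded in this toy model

for _ in range(500):
    a = step(a, ang_vel=0.3)                   # constant simulated turning

print(round(float(theta[int(np.argmax(a))]), 2))   # decoded heading = position of the bump
```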
Vision for navigation: What can we learn from ants?
Graham, Paul; Philippides, Andrew
2017-09-01
The visual systems of all animals are used to provide information that can guide behaviour. In some cases insects demonstrate particularly impressive visually-guided behaviour and then we might reasonably ask how the low-resolution vision and limited neural resources of insects are tuned to particular behavioural strategies. Such questions are of interest to both biologists and to engineers seeking to emulate insect-level performance with lightweight hardware. One behaviour that insects share with many animals is the use of learnt visual information for navigation. Desert ants, in particular, are expert visual navigators. Across their foraging life, ants can learn long idiosyncratic foraging routes. What's more, these routes are learnt quickly and the visual cues that define them can be implemented for guidance independently of other social or personal information. Here we review the style of visual navigation in solitary foraging ants and consider the physiological mechanisms that underpin it. Our perspective is to consider that robust navigation comes from the optimal interaction between behavioural strategy, visual mechanisms and neural hardware. We consider each of these in turn, highlighting the value of ant-like mechanisms in biomimetic endeavours. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
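One widely used way to model this kind of view-based route guidance is a rotational image difference function: the current panoramic view is compared with a stored snapshot at many candidate headings, and the agent turns toward the heading that best matches memory. The sketch below is a generic illustration of that idea with placeholder images, not a specific published model.

```python
# Rotational image difference function (rIDF) sketch for view-based guidance.
import numpy as np

def rotational_idf(current_view, stored_view):
    """Per-heading mismatch: RMS pixel difference for each horizontal (column) shift."""
    width = current_view.shape[1]
    return np.array([
        np.sqrt(np.mean((np.roll(current_view, s, axis=1) - stored_view) ** 2))
        for s in range(width)
    ])

rng = np.random.default_rng(1)
stored = rng.random((10, 72))                  # low-resolution panoramic snapshot (placeholder)
current = np.roll(stored, -20, axis=1)         # same scene viewed from a rotated heading

idf = rotational_idf(current, stored)
best_shift = int(np.argmin(idf))               # heading offset that best matches memory
print(best_shift * 360 / stored.shape[1], "degrees to turn (approximately)")
```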
3D Scientific Visualization with Blender
NASA Astrophysics Data System (ADS)
Kent, Brian R.
2015-03-01
This is the first book written on using Blender (an open source visualization suite widely used in the entertainment and gaming industries) for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.
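For readers curious what scripted visualization in Blender looks like, here is a tiny example in the spirit of the book's step-by-step workflow: build a simple object, add a camera and light, and render a still. It is written against the Blender 2.8+ Python API (`bpy`), must be run inside Blender, and the object, placements, and output path are arbitrary examples rather than material from the book.

```python
# Minimal bpy sketch: one object, a camera, a light, and a rendered still (illustrative only).
import bpy

# Start from an empty scene.
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()

# A sphere standing in for a scientific data object (e.g., a star or a molecule).
bpy.ops.mesh.primitive_uv_sphere_add(radius=1.0, location=(0.0, 0.0, 0.0))

# Camera and light so the render is not black.
bpy.ops.object.camera_add(location=(0.0, -6.0, 2.0), rotation=(1.2, 0.0, 0.0))
bpy.context.scene.camera = bpy.context.object
bpy.ops.object.light_add(type='SUN', location=(0.0, 0.0, 5.0))

# Render a single frame to disk (adjust the path as needed).
bpy.context.scene.render.filepath = "//sphere_render.png"
bpy.ops.render.render(write_still=True)
```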
Using spoken words to guide open-ended category formation.
Chauhan, Aneesh; Seabra Lopes, Luís
2011-11-01
Naming is a powerful cognitive tool that facilitates categorization by forming an association between words and their referents. There is evidence in child development literature that strong links exist between early word-learning and conceptual development. A growing view is also emerging that language is a cultural product created and acquired through social interactions. Inspired by these studies, this paper presents a novel learning architecture for category formation and vocabulary acquisition in robots through active interaction with humans. This architecture is open-ended and is capable of acquiring new categories and category names incrementally. The process can be compared to language grounding in children at single-word stage. The robot is embodied with visual and auditory sensors for world perception. A human instructor uses speech to teach the robot the names of the objects present in a visually shared environment. The robot uses its perceptual input to ground these spoken words and dynamically form/organize category descriptions in order to achieve better categorization. To evaluate the learning system at word-learning and category formation tasks, two experiments were conducted using a simple language game involving naming and corrective feedback actions from the human user. The obtained results are presented and discussed in detail.
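The core idea of grounding spoken category names in perceptual features can be illustrated with a much simpler scheme than the paper's full architecture: keep one running-mean prototype per taught word and categorize new percepts by the nearest prototype. The class, features, and words below are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative open-ended word-grounded categorization via incremental prototypes.
import numpy as np

class OpenEndedCategories:
    def __init__(self):
        self.prototypes = {}   # word -> running-mean feature vector
        self.counts = {}

    def teach(self, word, features):
        """Human names the object in view; update that category incrementally."""
        f = np.asarray(features, dtype=float)
        if word not in self.prototypes:
            self.prototypes[word], self.counts[word] = f.copy(), 1
        else:
            self.counts[word] += 1
            self.prototypes[word] += (f - self.prototypes[word]) / self.counts[word]

    def categorize(self, features):
        """Return the best-matching known word, or None if nothing has been taught yet."""
        if not self.prototypes:
            return None
        f = np.asarray(features, dtype=float)
        return min(self.prototypes, key=lambda w: np.linalg.norm(self.prototypes[w] - f))

robot = OpenEndedCategories()
robot.teach("cup", [0.9, 0.1, 0.3])
robot.teach("ball", [0.2, 0.8, 0.5])
print(robot.categorize([0.85, 0.15, 0.35]))   # -> "cup"
```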
The Potential of Deweyan-Inspired Action Research
ERIC Educational Resources Information Center
Stark, Jody L.
2014-01-01
In its broadest sense, pragmatism could be said to be the philosophical orientation of all action research. Action research is characterized by research, action, and participation grounded in democratic principles and guided by the aim of social improvement. Furthermore, action research is an active process of inquiry that does not admit…
Franceschini, Sandro; Trevisan, Piergiorgio; Ronconi, Luca; Bertoni, Sara; Colmar, Susan; Double, Kit; Facoetti, Andrea; Gori, Simone
2017-07-19
Dyslexia is characterized by difficulties in learning to read, and there is some evidence that action video games (AVG), without any direct phonological or orthographic stimulation, improve reading efficiency in Italian children with dyslexia. However, the cognitive mechanism underlying this improvement and the extent to which the benefits of AVG training would generalize to the deep orthography of English remain two critical questions. During reading acquisition, children have to integrate written letters with speech sounds, rapidly shifting their attention from the visual to the auditory modality. In our study, we tested reading skills and phonological working memory, visuo-spatial attention, auditory, visual and audio-visual stimulus localization, and cross-sensory attentional shifting in two matched groups of English-speaking children with dyslexia before and after they played AVG or non-action video games. The speed of word recognition and phonological decoding increased after playing AVG, but not after non-action video games. Furthermore, focused visuo-spatial attention and visual-to-auditory attentional shifting also improved only after AVG training. This unconventional reading remediation program also increased phonological short-term memory and phoneme blending skills. Our report shows that an enhancement of visuo-spatial attention and phonological working memory, together with an acceleration of visual-to-auditory attentional shifting, can directly translate into better reading in English-speaking children with dyslexia.
El Darawany, Hamed; Barakat, Alaa; Madi, Maha Al; Aldamanhori, Reem; Al Otaibi, Khalid; Al-Zahrani, Ali A
2016-01-01
Inserting a guide wire is a common practice during endo-urological procedures. A rare complication in patients with ureteral stones is the creation of an iatrogenic submucosal tunnel (IST) during endoscopic guide wire placement. The aim of this study was to summarize data on IST. This was a retrospective descriptive study of patients treated from October 2009 until January 2015 at King Fahd Hospital of the University, Al-Khobar, Saudi Arabia. Patients with ureteral stones were divided into 2 groups. In group I (335 patients), the ureteral stones were removed by ureteroscopy in one stage. Group II (97 patients) had a 2-stage procedure starting with double J-stent placement for kidney drainage, followed within 3 weeks by ureteroscopic stone removal. The main outcome measure was endoscopic visualization of ureteric submucosal tunneling by the guide wire. IST occurred in 9/432 patients with ureteral stones (2.1%). The diagnosis in group I was made during ureteroscopy by direct visualization of a vanishing guide wire at the level of the stone (6 patients). In group II, IST was suspected when renal pain was not relieved after placement of the double J-stent or when imaging by ultrasound or intravenous urography showed persistent back pressure on the obstructed kidney (3 patients); the condition was subsequently confirmed by ureteroscopy. Forceful advancement of the guide wire in an inflamed and edematous ureteral segment impacted by a stone is probably the triggering factor for the development of IST. Definitive diagnosis is possible only by direct visualization during ureteroscopy. Awareness of this potential complication is important to guard against its occurrence. Limitations include the relatively small number of subjects and the retrospective nature of the study.
Neural theory for the perception of causal actions.
Fleischer, Falk; Christensen, Andrea; Caggiano, Vittorio; Thier, Peter; Giese, Martin A
2012-07-01
The efficient prediction of the behavior of others requires the recognition of their actions and an understanding of their action goals. In humans, this process is fast and extremely robust, as demonstrated by classical experiments showing that human observers reliably judge causal relationships and attribute interactive social behavior to strongly simplified stimuli consisting of simple moving geometrical shapes. While psychophysical experiments have identified critical visual features that determine the perception of causality and agency from such stimuli, the underlying detailed neural mechanisms remain largely unclear, and it is an open question why humans developed this advanced visual capability at all. We created pairs of naturalistic and abstract stimuli of hand actions that were exactly matched in terms of their motion parameters. We show that varying critical stimulus parameters for both stimulus types leads to very similar modulations of the perception of causality. However, the additional form information about the hand shape and its relationship with the object supports more fine-grained distinctions for the naturalistic stimuli. Moreover, we show that a physiologically plausible model for the recognition of goal-directed hand actions reproduces the observed dependencies of causality perception on critical stimulus parameters. These results support the hypothesis that selectivity for abstract action stimuli might emerge from the same neural mechanisms that underlie the visual processing of natural goal-directed action stimuli. Furthermore, the model proposes specific detailed neural circuits underlying this visual function, which can be evaluated in future experiments.
Priming and the guidance by visual and categorical templates in visual search.
Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L
2014-01-01
Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether, the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if priming is controlled for, categorically and visually based templates enhance search guidance to a similar degree.
Search guidance is proportional to the categorical specificity of a target cue.
Schmidt, Joseph; Zelinsky, Gregory J
2009-10-01
Visual search studies typically assume the availability of precise target information to guide search, often a picture of the exact target. However, search targets in the real world are often defined categorically and with varying degrees of visual specificity. In five target preview conditions we manipulated the availability of target visual information in a search task for common real-world objects. Previews were: a picture of the target, an abstract textual description of the target, a precise textual description, an abstract + colour textual description, or a precise + colour textual description. Guidance generally increased as information was added to the target preview. We conclude that the information used for search guidance need not be limited to a picture of the target. Although generally less precise, to the extent that visual information can be extracted from a target label and loaded into working memory, this information too can be used to guide search.
76 FR 31382 - Notice of Issuance of Regulatory Guide
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-31
... NUCLEAR REGULATORY COMMISSION [NRC-2010-0287] Notice of Issuance of Regulatory Guide AGENCY: Nuclear Regulatory Commission. ACTION: Notice of Issuance and Availability of Regulatory Guide 8.2, Revision 1, ``Administrative Practices in Radiation Surveys and Monitoring.'' FOR FURTHER INFORMATION...
Screening Algorithm to Guide Decisions on Whether to Conduct a Health Impact Assessment
Provides a visual aid in the form of a decision algorithm that helps guide discussions about whether to proceed with an HIA. The algorithm can help structure, standardize, and document the decision process.
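A screening checklist of this kind could in principle be encoded as a small decision function so that the rationale is documented consistently. The criteria below are purely hypothetical examples of typical screening questions, not the actual algorithm this resource provides.

```python
# Hypothetical sketch of an HIA screening decision with documented reasons (illustrative only).
def screen_for_hia(potential_health_effects, decision_pending, resources_available):
    reasons = []
    if not potential_health_effects:
        reasons.append("no plausible health effects identified")
    if not decision_pending:
        reasons.append("the decision has already been made")
    if not resources_available:
        reasons.append("insufficient time, data, or staff to add value")
    return {"proceed_with_HIA": not reasons,
            "documented_reasons": reasons or ["all screening criteria met"]}

print(screen_for_hia(potential_health_effects=True, decision_pending=True, resources_available=True))
```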
Vision drives accurate approach behavior during prey capture in laboratory mice
Hoy, Jennifer L.; Yavorska, Iryna; Wehr, Michael; Niell, Cristopher M.
2016-01-01
The ability to genetically identify and manipulate neural circuits in the mouse is rapidly advancing our understanding of visual processing in the mammalian brain [1,2]. However, studies investigating the circuitry that underlies complex ethologically-relevant visual behaviors in the mouse have been primarily restricted to fear responses [3–5]. Here, we show that a laboratory strain of mouse (Mus musculus, C57BL/6J) robustly pursues, captures and consumes live insect prey, and that vision is necessary for mice to perform the accurate orienting and approach behaviors leading to capture. Specifically, we differentially perturbed visual or auditory input in mice and determined that visual input is required for accurate approach, allowing maintenance of bearing to within 11 degrees of the target on average during pursuit. While mice were able to capture prey without vision, the accuracy of their approaches and capture rate dramatically declined. To better explore the contribution of vision to this behavior, we developed a simple assay that isolated visual cues and simplified analysis of the visually guided approach. Together, our results demonstrate that laboratory mice are capable of exhibiting dynamic and accurate visually-guided approach behaviors, and provide a means to estimate the visual features that drive behavior within an ethological context. PMID:27773567
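The "bearing to the target" measure mentioned above is simple geometry: the signed angle between the animal's heading and the vector from the animal to the prey. The sketch below computes it from made-up coordinates; the function name and inputs are illustrative, not the study's analysis code.

```python
# Signed bearing error between heading and the direction to the prey (example values).
import math

def bearing_to_target(mouse_xy, heading_rad, prey_xy):
    to_prey = math.atan2(prey_xy[1] - mouse_xy[1], prey_xy[0] - mouse_xy[0])
    diff = to_prey - heading_rad
    # Wrap to [-180, 180) degrees so left/right errors are comparable.
    return math.degrees((diff + math.pi) % (2 * math.pi) - math.pi)

print(bearing_to_target(mouse_xy=(0.0, 0.0), heading_rad=math.radians(10), prey_xy=(1.0, 0.3)))
```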
The contributions of vision and haptics to reaching and grasping
Stone, Kayla D.; Gonzalez, Claudia L. R.
2015-01-01
This review aims to provide a comprehensive outlook on the sensory (visual and haptic) contributions to reaching and grasping. The focus is on studies in developing children, normal, and neuropsychological populations, and in sensory-deprived individuals. Studies have suggested a right-hand/left-hemisphere specialization for visually guided grasping and a left-hand/right-hemisphere specialization for haptically guided object recognition. This poses the interesting possibility that when vision is not available and grasping relies heavily on the haptic system, there is an advantage to use the left hand. We review the evidence for this possibility and dissect the unique contributions of the visual and haptic systems to grasping. We ultimately discuss how the integration of these two sensory modalities shape hand preference. PMID:26441777
Sidarus, Nura; Vuorre, Matti; Metcalfe, Janet; Haggard, Patrick
2017-01-01
How do we know how much control we have over our environment? The sense of agency refers to the feeling that we are in control of our actions, and that, through them, we can control our external environment. Thus, agency clearly involves matching intentions, actions, and outcomes. The present studies investigated the possibility that processes of action selection, i.e., choosing what action to make, contribute to the sense of agency. Since selection of action necessarily precedes execution of action, such effects must be prospective. In contrast, most literature on the sense of agency has focussed on the retrospective computation of whether an outcome fits the action performed or intended. This hypothesis was tested in an ecologically rich, dynamic task based on a computer game. Across three experiments, we manipulated three different aspects of action selection processing: visual processing fluency, categorization ambiguity, and response conflict. Additionally, we measured the relative contributions of prospective, action selection-based cues, and retrospective, outcome-based cues to the sense of agency. Manipulations of action selection were orthogonally combined with discrepancy of visual feedback of action. Fluency of action selection had a small but reliable effect on the sense of agency. Additionally, as expected, sense of agency was strongly reduced when visual feedback was discrepant with the action performed. The effects of discrepant feedback were larger than the effects of action selection fluency, and sometimes suppressed them. The sense of agency is highly sensitive to disruptions of action-outcome relations. However, when motor control is successful, and action-outcome relations are as predicted, fluency or dysfluency of action selection provides an important prospective cue to the sense of agency. PMID:28450839
Assessing Changes in Job Behavior Due to Training: A Guide to the Participant Action Plan Approach.
ERIC Educational Resources Information Center
Office of Personnel Management, Washington, DC.
This guide provides a brief introduction to the Participant Action Plan Approach (PAPA) and a user's handbook. Part I outlines five steps of PAPA which determine how job behavior is changed by training course or program participation. Part II, the manual, is arranged by the five steps of the PAPA approach. Planning for PAPA discusses making…
ERIC Educational Resources Information Center
Wisconsin Univ., Madison. Coll. of Agricultural and Life Sciences.
Educators of students grades 4-8 can use this guide to lead a community service project using the "Give Water a Hand" youth action program. Youth groups investigate water and water conservation within the home, farm, ranch, school, or community, with the help of local experts. The guide contains six chapters that cover: (1) an…
Acid Rain: Federal Policy Action 1983-1985. A Guide to Government Documents and Commercial Sources.
ERIC Educational Resources Information Center
Lovenburg, Susan, Comp.
The problems associated with acid rain as well as strategies on what to do and how to do it are addressed in this resource guide. The first section identifies and describes the U.S. agencies and congressional committees which play a role in acid rain research, legislation, and regulation. Actions already taken by the executive and legislative…
Video-Game Play Induces Plasticity in the Visual System of Adults with Amblyopia
Li, Roger W.; Ngo, Charlie; Nguyen, Jennie; Levi, Dennis M.
2011-01-01
Abnormal visual experience during a sensitive period of development disrupts neuronal circuitry in the visual cortex and results in abnormal spatial vision or amblyopia. Here we examined whether playing video games can induce plasticity in the visual system of adults with amblyopia. Specifically 20 adults with amblyopia (age 15–61 y; visual acuity: 20/25–20/480, with no manifest ocular disease or nystagmus) were recruited and allocated into three intervention groups: action videogame group (n = 10), non-action videogame group (n = 3), and crossover control group (n = 7). Our experiments show that playing video games (both action and non-action games) for a short period of time (40–80 h, 2 h/d) using the amblyopic eye results in a substantial improvement in a wide range of fundamental visual functions, from low-level to high-level, including visual acuity (33%), positional acuity (16%), spatial attention (37%), and stereopsis (54%). Using a cross-over experimental design (first 20 h: occlusion therapy, and the next 40 h: videogame therapy), we can conclude that the improvement cannot be explained simply by eye patching alone. We quantified the limits and the time course of visual plasticity induced by video-game experience. The recovery in visual acuity that we observed is at least 5-fold faster than would be expected from occlusion therapy in childhood amblyopia. We used positional noise and modelling to reveal the neural mechanisms underlying the visual improvements in terms of decreased spatial distortion (7%) and increased processing efficiency (33%). Our study had several limitations: small sample size, lack of randomization, and differences in numbers between groups. A large-scale randomized clinical study is needed to confirm the therapeutic value of video-game treatment in clinical situations. Nonetheless, taken as a pilot study, this work suggests that video-game play may provide important principles for treating amblyopia, and perhaps other cortical dysfunctions. Trial Registration ClinicalTrials.gov NCT01223716 PMID:21912514
Spatial Context and Visual Perception for Action
ERIC Educational Resources Information Center
Coello, Yann
2005-01-01
In this paper, evidence that visuo-spatial perception in the peri-personal space is not an abstract, disembodied phenomenon but is rather shaped by action constraints is reviewed. Locating a visual target with the intention of reaching it requires that the relevant spatial information be considered in relation to the body part that will be…
Memory-guided force control in healthy younger and older adults.
Neely, Kristina A; Samimy, Shaadee; Blouch, Samantha L; Wang, Peiyuan; Chennavasin, Amanda; Diaz, Michele T; Dennis, Nancy A
2017-08-01
Successful performance of a memory-guided motor task requires participants to store and then recall an accurate representation of the motor goal. Further, participants must monitor motor output to make adjustments in the absence of visual feedback. The goal of this study was to examine memory-guided grip force in healthy younger and older adults and compare it to performance on behavioral tasks of working memory. Previous work demonstrates that healthy adults decrease force output as a function of time when visual feedback is not available. We hypothesized that older adults would decrease force output at a faster rate than younger adults, due to age-related deficits in working memory. Two groups of participants, younger adults (YA: N = 32, mean age 21.5 years) and older adults (OA: N = 33, mean age 69.3 years), completed four 20-s trials of isometric force with their index finger and thumb, equal to 25% of their maximum voluntary contraction. In the full-vision condition, visual feedback was available for the duration of the trial. In the no vision condition, visual feedback was removed for the last 12 s of each trial. Participants were asked to maintain constant force output in the absence of visual feedback. Participants also completed tasks of word recall and recognition and visuospatial working memory. Counter to our predictions, when visual feedback was removed, younger adults decreased force at a faster rate compared to older adults and the rate of decay was not associated with behavioral performance on tests of working memory.
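The "rate of decay" compared between age groups is naturally estimated as the slope of the force trace over the no-vision window. The sketch below shows one plausible way to do this with a simple linear fit; the sampling rate, trial layout, and force units are assumptions rather than details taken from the study.

```python
import numpy as np

def force_decay_rate(force, fs=100.0, vision_off_s=8.0):
    """Estimate the rate of force change (units/s) after visual feedback is removed.

    force        : 1-D array of grip-force samples for one 20-s trial
    fs           : sampling rate in Hz (assumed value)
    vision_off_s : time at which feedback disappears (8 s into the trial, leaving
                   the 12-s no-vision window described in the abstract)
    A negative slope indicates force decay.
    """
    start = int(vision_off_s * fs)
    segment = force[start:]                        # no-vision portion of the trial
    t = np.arange(segment.size) / fs               # seconds since feedback removal
    slope, _intercept = np.polyfit(t, segment, 1)  # simple linear fit
    return slope

# Toy usage: hold a 25%-MVC target for 8 s, then drift downward without feedback
rng = np.random.default_rng(0)
trial = np.concatenate([np.full(800, 25.0),
                        25.0 - 0.15 * np.arange(1200) / 100.0])
trial = trial + rng.normal(0, 0.2, trial.size)
print(f"estimated decay rate: {force_decay_rate(trial):.3f} force units/s")
```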
Fradcourt, B; Peyrin, C; Baciu, M; Campagne, A
2013-10-01
Previous studies performed on visual processing of emotional stimuli have revealed a preference for a specific type of visual spatial frequency (high spatial frequency, HSF; low spatial frequency, LSF) according to task demands. The majority of studies have used faces and focused on the appraisal of the emotional state of others. The present behavioral study investigates the relative role of spatial frequencies in the processing of emotional natural scenes during two explicit cognitive appraisal tasks: one emotional, based on the self-emotional experience, and one motivational, based on the tendency to action. Our results suggest that HSF information was the most relevant to rapidly identify the self-emotional experience (unpleasant, pleasant, and neutral) while LSF was required to rapidly identify the tendency to action (avoidance, approach, and no action). The tendency to action based on LSF analysis showed a priority for unpleasant stimuli whereas the identification of emotional experience based on HSF analysis showed a priority for pleasant stimuli. The present study confirms the interest of considering both emotional and motivational characteristics of visual stimuli. Copyright © 2013 Elsevier Inc. All rights reserved.
Typical Neural Representations of Action Verbs Develop without Vision
Caramazza, A.; Pascual-Leone, A.; Saxe, R.
2012-01-01
Many empiricist theories hold that concepts are composed of sensory–motor primitives. For example, the meaning of the word “run” is in part a visual image of running. If action concepts are partly visual, then the concepts of congenitally blind individuals should be altered in that they lack these visual features. We compared semantic judgments and neural activity during action verb comprehension in congenitally blind and sighted individuals. Participants made similarity judgments about pairs of nouns and verbs that varied in the visual motion they conveyed. Blind adults showed the same pattern of similarity judgments as sighted adults. We identified a region in the left middle temporal gyrus (lMTG) that putatively stores visual–motion features relevant to action verbs. The functional profile and location of this region were identical in sighted and congenitally blind individuals. Furthermore, the lMTG was more active for all verbs than nouns, irrespective of visual–motion features. We conclude that the lMTG contains abstract representations of verb meanings rather than visual–motion images. Our data suggest that conceptual brain regions are not altered by the sensory modality of learning. PMID:21653285
Move with Me: A Parents' Guide to Movement Development for Visually Impaired Babies.
ERIC Educational Resources Information Center
Blind Childrens Center, Los Angeles, CA.
This booklet presents suggestions for parents to promote their visually impaired infant's motor development. It is pointed out that babies with serious visual loss often prefer their world to be constant and familiar and may resist change (including change in position); therefore, it is important that a wide range of movement activities be…
Studies of Visual Attention in Physics Problem Solving
ERIC Educational Resources Information Center
Madsen, Adrian M.
2013-01-01
The work described here represents an effort to understand and influence visual attention while solving physics problems containing a diagram. Our visual system is guided by two types of processes--top-down and bottom-up. The top-down processes are internal and determined by one's prior knowledge and goals. The bottom-up processes are external and…
ERIC Educational Resources Information Center
Kapperman, Gaylen; Kelly, Stacy M.
2013-01-01
Individuals with visual impairments (that is, those who are blind or have low vision) do not have the same opportunities to develop their knowledge of sexual health and participate in sex education as their sighted peers (Krupa & Esmail, 2010), although young adults with visual impairments participate in sexual activities at similar rates as their…
Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search
ERIC Educational Resources Information Center
Calvo, Manuel G.; Nummenmaa, Lauri
2008-01-01
In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…
77 FR 38338 - Proposal and Award Policies and Procedures Guide; Comments Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-27
... NATIONAL SCIENCE FOUNDATION Proposal and Award Policies and Procedures Guide; Comments Request AGENCY: National Science Foundation. ACTION: Notification of extension of public comment period. SUMMARY... on the National Science Foundation Proposal and Award Policies and Procedures Guide. The original...
75 FR 18241 - Draft Regulatory Guide: Issuance, Availability
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-09
... NUCLEAR REGULATORY COMMISSION [NRC-2010-0148] Draft Regulatory Guide: Issuance, Availability AGENCY: Nuclear Regulatory Commission. ACTION: Notice of Issuance and Availability of Draft Regulatory.... Introduction The U.S. Nuclear Regulatory Commission (NRC) is issuing for public comment a draft guide in the...
75 FR 45166 - Draft Regulatory Guide: Issuance, Availability
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-02
... NUCLEAR REGULATORY COMMISSION [NRC-2010-0265] Draft Regulatory Guide: Issuance, Availability AGENCY: Nuclear Regulatory Commission. ACTION: Notice of Issuance and Availability of Draft Regulatory.... Introduction The U.S. Nuclear Regulatory Commission (NRC) is issuing for public comment a draft guide in the...
76 FR 6086 - Draft Regulatory Guide: Issuance, Availability
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-03
... NUCLEAR REGULATORY COMMISSION 10 CFR Part 73 [NRC-2011-0015] RIN 3150-AI49 Draft Regulatory Guide: Issuance, Availability AGENCY: Nuclear Regulatory Commission. ACTION: Notice of Availability of Draft... comment Draft Regulatory Guide, DG-5020, ``Applying for Enhanced Weapons Authority, Applying for...
Self-Study and Evaluation Guide/1968 Edition. Section D-5: Social Services. (Revised 1977).
ERIC Educational Resources Information Center
National Accreditation Council for Agencies Serving the Blind and Visually Handicapped, New York, NY.
This self-study and evaluation guide on social services is one of twenty-eight guides designed for organizations who are undertaking a self-study as part of the process for accreditation from the National Accreditation Council (NAC) for agencies serving the blind and visually handicapped. Provided are lists of standards to be appraised by the…
Self-Study and Evaluation Guide/1977 Edition. Section D-2A: Orientation and Mobility Services.
ERIC Educational Resources Information Center
National Accreditation Council for Agencies Serving the Blind and Visually Handicapped, New York, NY.
This self-study and evaluation guide on orientation and mobility services is one of 28 guides designed for organizations undertaking a self-study as part of the process for accreditation from the National Accreditation Council (NAC) for agencies serving the blind and visually handicapped. Provided are lists of standards to be appraised by the…
An evaluation of the experiences of guide dog owners visiting Scottish veterinary practices.
Fraser, M; Girling, S J
2016-09-10
Guide dogs and their owners will visit a veterinary practice at least twice a year. The aim of this study was to evaluate what guide dog owners thought about these visits, in order to identify areas of good practice which could be incorporated into the undergraduate curriculum. Nine guide dog owners volunteered to take part in the study and were interviewed by the primary researcher. Thematic analysis was carried out and several themes were identified: good experiences were highlighted where staff had an understanding of visual impairment and the work of a guide dog; the importance of good communication skills involving the owner in the consultation; the need for veterinary professionals to understand the bond between an owner and guide dog; how medication and information could be provided in a user-friendly format for someone affected by a visual impairment; and concerns about costs and decision making for veterinary treatment. This work highlights the importance for veterinary staff of talking to, empathising with, and understanding the individual circumstances of their clients, and identifies areas that should be included in veterinary education to better prepare students for the workplace. British Veterinary Association.
Do Endogenous and Exogenous Action Control Compete for Perception?
ERIC Educational Resources Information Center
Pfister, Roland; Heinemann, Alexander; Kiesel, Andrea; Thomaschke, Roland; Janczyk, Markus
2012-01-01
Human actions are guided either by endogenous action plans or by external stimuli in the environment. These two types of action control seem to be mediated by neurophysiologically and functionally distinct systems that interfere if an endogenously planned action suddenly has to be performed in response to an exogenous stimulus. In this case, the…
Kukona, Anuenue; Tabor, Whitney
2011-01-01
The visual world paradigm presents listeners with a challenging problem: they must integrate two disparate signals, the spoken language and the visual context, in support of action (e.g., complex movements of the eyes across a scene). We present Impulse Processing, a dynamical systems approach to incremental eye movements in the visual world that suggests a framework for integrating language, vision, and action generally. Our approach assumes that impulses driven by the language and the visual context impinge minutely on a dynamical landscape of attractors corresponding to the potential eye-movement behaviors of the system. We test three unique predictions of our approach in an empirical study in the visual world paradigm, and describe an implementation in an artificial neural network. We discuss the Impulse Processing framework in relation to other models of the visual world paradigm. PMID:21609355
Reference Collections and Standards.
ERIC Educational Resources Information Center
Winkel, Lois
1999-01-01
Reviews six reference materials for young people: "The New York Public Library Kid's Guide to Research"; "National Audubon Society First Field Guide. Mammals"; "Star Wars: The Visual Dictionary"; "Encarta Africana"; "World Fact Book, 1998"; and "Factastic Book of 1001 Lists". Includes ordering information. (AEF)
Evolution and Optimality of Similar Neural Mechanisms for Perception and Action during Search
Zhang, Sheng; Eckstein, Miguel P.
2010-01-01
A prevailing theory proposes that the brain's two visual pathways, the ventral and the dorsal, support visual processing and world representations for conscious perception that differ from those for action. Others have claimed that perception and action share much of their visual processing. But which of these two neural architectures is favored by evolution? Successful visual search is life-critical and here we investigate the evolution and optimality of neural mechanisms mediating perception and eye movement actions for visual search in natural images. We implement an approximation to the ideal Bayesian searcher with two separate processing streams, one controlling the eye movements and the other stream determining the perceptual search decisions. We virtually evolved the neural mechanisms of the searchers' two separate pathways built from linear combinations of primary visual cortex receptive fields (V1) by making the simulated individuals' probability of survival depend on the perceptual accuracy finding targets in cluttered backgrounds. We find that for a variety of targets, backgrounds, and dependence of target detectability on retinal eccentricity, the mechanisms of the searchers' two processing streams converge to similar representations showing that mismatches in the mechanisms for perception and eye movements lead to suboptimal search. Three exceptions which resulted in partial or no convergence were a case of an organism for which the targets are equally detectable across the retina, an organism with sufficient time to foveate all possible target locations, and a strict two-pathway model with no interconnections and differential pre-filtering based on parvocellular and magnocellular lateral geniculate cell properties. Thus, similar neural mechanisms for perception and eye movement actions during search are optimal and should be expected from the effects of natural selection on an organism with limited time to search for food that is not equi-detectable across its retina and interconnected perception and action neural pathways. PMID:20838589
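The virtual-evolution idea, two processing streams whose templates are selected on the basis of search accuracy, can be caricatured in a few dozen lines. The sketch below is a drastic simplification and not the authors' implementation: it evolves two linear templates (standing in for the eye-movement and perceptual streams) in a toy two-location search task and reports how similar they end up. All dimensions, noise levels, and the mutation scheme are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 16                                   # toy "V1 feature" dimensionality
target = rng.normal(size=D)
target /= np.linalg.norm(target)

def fitness(w_eye, w_per, n_trials=300, noise=1.2):
    """Proportion of correct present/absent decisions in a two-location search.
    The eye-movement template picks which location to foveate; the perceptual
    template then decides from the foveated patch alone."""
    correct = 0
    for _ in range(n_trials):
        present = rng.integers(2)                       # is the target shown at all?
        patches = rng.normal(scale=noise, size=(2, D))  # two noisy locations
        if present:
            patches[rng.integers(2)] += target          # target at a random location
        fix = int(np.argmax(patches @ w_eye))           # saccade choice
        decision = int(patches[fix] @ w_per > 0.5)      # perceptual decision
        correct += int(decision == present)
    return correct / n_trials

def evolve(generations=60, offspring=20, sigma=0.3):
    """(1 + lambda) evolution of both templates under the shared survival fitness."""
    w_eye, w_per = rng.normal(size=D), rng.normal(size=D)
    best_fit = fitness(w_eye, w_per)
    for _ in range(generations):
        for _ in range(offspring):                      # mutate both streams jointly
            cand = (w_eye + sigma * rng.normal(size=D),
                    w_per + sigma * rng.normal(size=D))
            f = fitness(*cand)
            if f > best_fit:
                (w_eye, w_per), best_fit = cand, f
    return w_eye, w_per, best_fit

w_eye, w_per, acc = evolve()
similarity = w_eye @ w_per / (np.linalg.norm(w_eye) * np.linalg.norm(w_per))
print(f"final accuracy {acc:.2f}; eye/perception template similarity {similarity:.2f}")
```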
Byun, Tara McAllister; Hitchcock, Elaine R.; Ferron, John
2017-01-01
Purpose: Single-case experimental designs are widely used to study interventions for communication disorders. Traditionally, single-case experiments follow a response-guided approach, where design decisions during the study are based on participants' observed patterns of behavior. However, this approach has been criticized for its high rate of Type I error. In masked visual analysis (MVA), response-guided decisions are made by a researcher who is blinded to participants' identities and treatment assignments. MVA also makes it possible to conduct a hypothesis test assessing the significance of treatment effects. Method: This tutorial describes the principles of MVA, including both how experiments can be set up and how results can be used for hypothesis testing. We then report a case study showing how MVA was deployed in a multiple-baseline across-subjects study investigating treatment for residual errors affecting rhotics. Strengths and weaknesses of MVA are discussed. Conclusions: Given their important role in the evidence base that informs clinical decision making, it is critical for single-case experimental studies to be conducted in a way that allows researchers to draw valid inferences. As a method that can increase the rigor of single-case studies while preserving the benefits of a response-guided approach, MVA warrants expanded attention from researchers in communication disorders. PMID:28595354
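A concrete feel for the hypothesis test that can accompany MVA comes from a small randomization test for a multiple-baseline design, in which the p-value is computed over the set of intervention start points that could have been randomly assigned. The sketch below is a generic illustration under that assumption, not the statistic used in the reported case study; the data and admissible start points are invented.

```python
import itertools
import numpy as np

def randomization_test(series, actual_starts, possible_starts):
    """Randomization test for a multiple-baseline design.

    series          : list of 1-D arrays of session scores, one per participant
    actual_starts   : intervention start index actually used for each participant
    possible_starts : admissible start indices per participant (the set from
                      which the actual starts were randomly drawn)
    Test statistic: mean(post-intervention) - mean(baseline), averaged over
    participants. The p-value is the proportion of admissible start-point
    assignments whose statistic is at least as large as the observed one.
    """
    def stat(starts):
        return np.mean([y[s:].mean() - y[:s].mean() for y, s in zip(series, starts)])

    observed = stat(actual_starts)
    null = [stat(assign) for assign in itertools.product(*possible_starts)]
    p_value = np.mean([s >= observed for s in null])
    return observed, p_value

# Invented data: 3 participants, 12 sessions each, accuracy rises after treatment onset
rng = np.random.default_rng(2)
series = [np.concatenate([rng.normal(0.2, 0.05, s), rng.normal(0.6, 0.05, 12 - s)])
          for s in (4, 6, 8)]
effect, p = randomization_test(series, actual_starts=[4, 6, 8],
                               possible_starts=[[3, 4, 5], [5, 6, 7], [7, 8, 9]])
print(f"effect = {effect:.2f}, p = {p:.3f}")
```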
5 CFR 250.101 - Standards and requirements for agency personnel actions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... personnel actions. 250.101 Section 250.101 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL MANAGEMENT IN AGENCIES Authority for Personnel Actions in Agencies § 250.101... Personnel Management (OPM), the instructions OPM has published in the Guide to Processing Personnel Actions...
5 CFR 250.101 - Standards and requirements for agency personnel actions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... personnel actions. 250.101 Section 250.101 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL MANAGEMENT IN AGENCIES Authority for Personnel Actions in Agencies § 250.101... Personnel Management (OPM), the instructions OPM has published in the Guide to Processing Personnel Actions...
75 FR 48382 - Draft Regulatory Guide: Issuance, Availability
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-10
... NUCLEAR REGULATORY COMMISSION [NRC-2010-0275] Draft Regulatory Guide: Issuance, Availability AGENCY: Nuclear Regulatory Commission. ACTION: Notice of Issuance and Availability of Draft Regulatory Guide, DG-1228, ``Standard Format and Content of License Termination Plans for Nuclear Power Reactors.'' FOR FURTHER INFORMATION CONTACT: James C....
75 FR 20868 - Notice of Issuance of Regulatory Guide
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-21
...: Nuclear Regulatory Commission. ACTION: Notice of Issuance and Availability of Regulatory Guide 1.68.2... Water-Cooled Nuclear Power Plants.'' FOR FURTHER INFORMATION CONTACT: Mark P. Orr, Regulatory Guide... Shutdown Capability for Water-Cooled Nuclear Power Plants,'' was issued with a temporary identification as...
Visual Scan Adaptation During Repeated Visual Search
2010-01-01
[Abstract not recovered; only reference fragments remain, citing work on stimulus-driven attentional capture and guided visual search, e.g., Junge (2004) in Psychonomic Bulletin & Review; Wolfe (1994), "Guided Search 2.0: A revised model of visual search"; and Wolfe (1998), "Visual search," in H. Pashler (Ed.), Attention.]
Operational Symbols: Can a Picture Be Worth a Thousand Words?
1991-04-01
[Abstract not recovered; only sentence and reference fragments remain, noting that forms are to visual communication what words are to verbal communication and that captions guide what is learned from a picture or graphic, with citations to Visual Communication (National Education Association, 1960), Bohannan's "C31 In Support of the Land Commander," and Ball and Byrnes (Eds.), Research, Principles, and Practices in Visual…]
Stenner, Max-Philipp; Bauer, Markus; Haggard, Patrick; Heinze, Hans-Jochen; Dolan, Ray
2014-11-01
The perceived intensity of sensory stimuli is reduced when these stimuli are caused by the observer's actions. This phenomenon is traditionally explained by forward models of sensory action-outcome, which arise from motor processing. Although these forward models critically predict anticipatory modulation of sensory neural processing, neurophysiological evidence for anticipatory modulation is sparse and has not been linked to perceptual data showing sensory attenuation. By combining a psychophysical task involving contrast discrimination with source-level time-frequency analysis of MEG data, we demonstrate that the amplitude of alpha-oscillations in visual cortex is enhanced before the onset of a visual stimulus when the identity and onset of the stimulus are controlled by participants' motor actions. Critically, this prestimulus enhancement of alpha-amplitude is paralleled by psychophysical judgments of a reduced contrast for this stimulus. We suggest that alpha-oscillations in visual cortex preceding self-generated visual stimulation are a likely neurophysiological signature of motor-induced sensory anticipation and mediate sensory attenuation. We discuss our results in relation to proposals that attribute generic inhibitory functions to alpha-oscillations in prioritizing and gating sensory information via top-down control.
What You Need To Know and Do to Truly Leave No Child Behind [R]. An Action Guide.
ERIC Educational Resources Information Center
Children's Defense Fund, Washington, DC.
The mission of the Children's Defense Fund Action Council is to "Leave No Child Behind," to do what is necessary to meet the needs of children and their parents, and to ensure a healthy, safe, fair, and moral start in life for all children. This guide examines the negative impact of the George W. Bush administration's budget and policy…
Blanchfield, Anthony; Hardy, James; Marcora, Samuele
2014-01-01
The psychobiological model of endurance performance proposes that endurance performance is determined by a decision-making process based on perception of effort and potential motivation. Recent research has reported that effort-based decision-making during cognitive tasks can be altered by non-conscious visual cues relating to affect and action. The effects of these non-conscious visual cues on effort and performance during physical tasks are however unknown. We report two experiments investigating the effects of subliminal priming with visual cues related to affect and action on perception of effort and endurance performance. In Experiment 1 thirteen individuals were subliminally primed with happy or sad faces as they cycled to exhaustion in a counterbalanced and randomized crossover design. A paired t-test (happy vs. sad faces) revealed that individuals cycled significantly longer (178 s, p = 0.04) when subliminally primed with happy faces. A 2 × 5 (condition × iso-time) ANOVA also revealed a significant main effect of condition on rating of perceived exertion (RPE) during the time to exhaustion (TTE) test with lower RPE when subjects were subliminally primed with happy faces (p = 0.04). In Experiment 2, a single-subject randomization tests design found that subliminal priming with action words facilitated a significantly longer TTE (399 s, p = 0.04) in comparison to inaction words. Like Experiment 1, this greater TTE was accompanied by a significantly lower RPE (p = 0.03). These experiments are the first to show that subliminal visual cues relating to affect and action can alter perception of effort and endurance performance. Non-conscious visual cues may therefore influence the effort-based decision-making process that is proposed to determine endurance performance. Accordingly, the findings raise notable implications for individuals who may encounter such visual cues during endurance competitions, training, or health related exercise. PMID:25566014
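The paired comparison of time to exhaustion under the two priming conditions is a standard paired t-test, which the following sketch reproduces on invented data whose mean difference roughly matches the reported 178-s advantage. The numbers are illustrative only.

```python
import numpy as np
from scipy import stats

# Invented time-to-exhaustion data (seconds) for 13 cyclists under the two
# subliminal-priming conditions; the mean difference is set near the reported
# 178-s advantage for happy faces, but the values are illustrative only.
rng = np.random.default_rng(3)
tte_sad = rng.normal(1100, 150, 13)
tte_happy = tte_sad + rng.normal(178, 120, 13)

t_stat, p_value = stats.ttest_rel(tte_happy, tte_sad)   # paired t-test (happy vs. sad)
print(f"mean difference = {np.mean(tte_happy - tte_sad):.0f} s, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")
```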
Campagne, Aurélie; Fradcourt, Benoit; Pichat, Cédric; Baciu, Monica; Kauffmann, Louise; Peyrin, Carole
2016-01-01
Visual processing of emotional stimuli critically depends on the type of cognitive appraisal involved. The present fMRI pilot study aimed to investigate the cerebral correlates involved in the visual processing of emotional scenes in two tasks, one emotional, based on the appraisal of personal emotional experience, and the other motivational, based on the appraisal of the tendency to action. Given that the use of spatial frequency information is relatively flexible during the visual processing of emotional stimuli depending on the task's demands, we also explored the effect of the type of spatial frequency in visual stimuli in each task by using emotional scenes filtered in low spatial frequency (LSF) and high spatial frequencies (HSF). Activation was observed in the visual areas of the fusiform gyrus for all emotional scenes in both tasks, and in the amygdala for unpleasant scenes only. The motivational task induced additional activation in frontal motor-related areas (e.g. premotor cortex, SMA) and parietal regions (e.g. superior and inferior parietal lobules). Parietal regions were recruited particularly during the motivational appraisal of approach in response to pleasant scenes. These frontal and parietal activations, respectively, suggest that motor and navigation processes play a specific role in the identification of the tendency to action in the motivational task. Furthermore, activity observed in the motivational task, in response to both pleasant and unpleasant scenes, was significantly greater for HSF than for LSF scenes, suggesting that the tendency to action is driven mainly by the detailed information contained in scenes. Results for the emotional task suggest that spatial frequencies play only a small role in the evaluation of unpleasant and pleasant emotions. Our preliminary study revealed a partial distinction between visual processing of emotional scenes during identification of the tendency to action, and during identification of personal emotional experiences. It also illustrates flexible use of the spatial frequencies contained in scenes depending on their emotional valence and on task demands.
Wavefront-Guided Scleral Lens Correction in Keratoconus
Marsack, Jason D.; Ravikumar, Ayeswarya; Nguyen, Chi; Ticak, Anita; Koenig, Darren E.; Elswick, James D.; Applegate, Raymond A.
2014-01-01
Purpose: To examine the performance of state-of-the-art wavefront-guided scleral contact lenses (wfgSCLs) on a sample of keratoconic eyes, with emphasis on performance quantified with visual quality metrics; and to provide a detailed discussion of the process used to design, manufacture and evaluate wfgSCLs. Methods: Fourteen eyes of 7 subjects with keratoconus were enrolled and a wfgSCL was designed for each eye. High-contrast visual acuity and visual quality metrics were used to assess the on-eye performance of the lenses. Results: The wfgSCL provided statistically lower levels of both lower-order RMS (p < 0.001) and higher-order RMS (p < 0.02) than an intermediate spherical equivalent scleral contact lens. The wfgSCL provided lower levels of lower-order RMS than a normal group of well-corrected observers (p << 0.001). However, the wfgSCL does not provide less higher-order RMS than the normal group (p = 0.41). Of the 14 eyes studied, 10 successfully reached the exit criteria, achieving residual higher-order root mean square wavefront error (HORMS) less than or within 1 SD of the levels experienced by normal, age-matched subjects. In addition, measures of visual image quality (logVSX, logNS, and logLIB) for the 10 eyes were well distributed within the range of values seen in normal eyes. However, visual performance as measured by high-contrast acuity did not reach normal, age-matched levels, which is in agreement with prior results associated with the acute application of wavefront correction to KC eyes. Conclusions: Wavefront-guided scleral contact lenses are capable of optically compensating for the deleterious effects of higher-order aberration concomitant with the disease, and can provide visual image quality equivalent to that seen in normal eyes. Longer duration studies are needed to assess whether the visual system of the highly aberrated eye wearing a wfgSCL is capable of producing visual performance levels typical of the normal population. PMID:24830371
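With a normalized Zernike expansion, lower- and higher-order RMS wavefront error are simply square roots of sums of squared coefficients over the relevant orders. The sketch below illustrates that computation; the coefficient values, the indexing convention, and the choice to count only second-order terms as "lower order" are assumptions, not data from the study.

```python
import math

def rms_by_order(zernike, split_order=3):
    """Split wavefront error into lower- and higher-order RMS.

    zernike : dict mapping (radial order n, azimuthal frequency m) to the
              coefficient of a normalized Zernike expansion (microns)
    With normalized polynomials, the RMS over any subset of modes is the
    square root of the sum of the squared coefficients. Here 'lower order'
    counts only second-order terms (defocus/astigmatism).
    """
    lo = sum(c ** 2 for (n, _m), c in zernike.items() if n == 2)
    hi = sum(c ** 2 for (n, _m), c in zernike.items() if n >= split_order)
    return math.sqrt(lo), math.sqrt(hi)

# Illustrative keratoconus-like coefficients (microns), 2nd to 4th order
coeffs = {(2, -2): 0.40, (2, 0): -1.10, (2, 2): 0.30,   # astigmatism / defocus
          (3, -1): 0.65, (3, 1): 0.10, (3, -3): 0.20,   # coma / trefoil
          (4, 0): 0.15}                                  # spherical aberration
lo_rms, hi_rms = rms_by_order(coeffs)
print(f"lower-order RMS = {lo_rms:.2f} um, higher-order RMS = {hi_rms:.2f} um")
```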
Multicultural Arts: An Infusion.
ERIC Educational Resources Information Center
Wilderberger, Elizabeth
1991-01-01
Presents two examples from 1990 curriculum guide written for Pullen School. Designed for middle school students, "The Japanese Gardener as Visual Artist" emphasizes nature in aesthetic depictions including architecture, horticulture, and visual arts. Appropriate for primary grades, "Reading/Language Arts: Using Books from the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stattaus, Joerg, E-mail: joerg.stattaus@uni-due.de; Kuehl, Hilmar; Ladd, Susanne
2007-09-15
Purpose. Our study aimed to determine the visibility of small liver lesions during CT-guided biopsy and to assess the influence of lesion visibility on biopsy results. Material and Methods. Fifty patients underwent CT-guided core biopsy of small focal liver lesions (maximum diameter, 3 cm); 38 biopsies were performed using noncontrast CT, and the remaining 12 were contrast-enhanced. Visibility of all lesions was graded on a 4-point scale (0 = not visible, 1 = poorly visible, 2 = sufficiently visible, 3 = excellently visible) before and during biopsy (with the needle placed adjacent to and within the target lesion). Results. Forty-three biopsies (86%) yielded diagnostic results, and seven biopsies were false-negative. In noncontrast biopsies, the rate of insufficiently visualized lesions (grades 0-1) increased significantly during the procedure, from 10.5% to 44.7%, due to needle artifacts. This resulted in more (17.6%) false-negative biopsy results compared to lesions with good visualization (4.8%), although this difference lacks statistical significance. Visualization impairment appeared more often with an intercostal or subcostal vs. an epigastric access and with a subcapsular vs. a central lesion location, respectively. With contrast-enhanced biopsy, the visibility of hepatic lesions was only temporarily improved, with a risk of complete obscuration in the late phase. Conclusion. Visibility of small liver lesions diminished significantly during CT-guided biopsy due to needle artifacts, with a fourfold increased rate of insufficiently visualized lesions and of false-negative histological results. Contrast enhancement did not reveal better results.
Guiding the mind's eye: improving communication and vision by external control of the scanpath
NASA Astrophysics Data System (ADS)
Barth, Erhardt; Dorr, Michael; Böhme, Martin; Gegenfurtner, Karl; Martinetz, Thomas
2006-02-01
Larry Stark has emphasised that what we visually perceive is very much determined by the scanpath, i.e. the pattern of eye movements. Inspired by his view, we have studied the implications of the scanpath for visual communication and came up with the idea to not only sense and analyse eye movements, but also guide them by using a special kind of gaze-contingent information display. Our goal is to integrate gaze into visual communication systems by measuring and guiding eye movements. For guidance, we first predict a set of about 10 salient locations. We then change the probability for one of these candidates to be attended: for one candidate the probability is increased, for the others it is decreased. To increase saliency, for example, we add red dots that are displayed very briefly such that they are hardly perceived consciously. To decrease the probability, for example, we locally reduce the temporal frequency content. Again, if performed in a gaze-contingent fashion with low latencies, these manipulations remain unnoticed. Overall, the goal is to find the real-time video transformation minimising the difference between the actual and the desired scanpath without being obtrusive. Applications are in the area of vision-based communication (better control of what information is conveyed) and augmented vision and learning (guide a person's gaze by the gaze of an expert or a computer-vision system). We believe that our research is very much in the spirit of Larry Stark's views on visual perception and the close link between vision research and engineering.
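The per-frame logic (raise the attention probability of one predicted candidate and lower it for the others, away from current gaze) can be sketched as a simple image operation. The following is a toy illustration of that idea, not the authors' gaze-contingent system: the cue subtlety in the real system comes from very brief presentation and low-latency updating, which a single-frame sketch cannot capture, and all radii, thresholds, and the smoothing rule are assumptions.

```python
import numpy as np

def guide_frame(frame, gaze_xy, candidates, boost_idx, dot_radius=3):
    """Bias one of ~10 predicted salient locations on a single grayscale frame.

    frame      : H x W array to be shown next (values in 0..1)
    gaze_xy    : current gaze position (x, y) from the eye tracker
    candidates : list of (x, y) predicted salient locations
    boost_idx  : index of the candidate whose attention probability should rise

    The boosted candidate gets a small bright cue (standing in for the briefly
    flashed red dot); the other candidates are locally smoothed to reduce their
    detail. All radii, thresholds, and the smoothing rule are illustrative.
    """
    out = frame.astype(float)
    h, w = out.shape
    yy, xx = np.mgrid[0:h, 0:w]
    for i, (cx, cy) in enumerate(candidates):
        if (cx - gaze_xy[0]) ** 2 + (cy - gaze_xy[1]) ** 2 < 50 ** 2:
            continue                                   # never manipulate near fixation
        r = dot_radius if i == boost_idx else 10 * dot_radius
        mask = (xx - cx) ** 2 + (yy - cy) ** 2 < r ** 2
        if i == boost_idx:
            out[mask] = out.max()                      # saliency up at the chosen spot
        else:
            out[mask] = 0.7 * out[mask] + 0.3 * out[mask].mean()  # crude local low-pass
    return out

# Toy usage on a random frame; in a real system this would run per video frame
rng = np.random.default_rng(5)
frame = rng.uniform(0, 1, (240, 320))
guided = guide_frame(frame, gaze_xy=(160, 120),
                     candidates=[(40, 40), (300, 200), (80, 180)], boost_idx=1)
print(guided.shape)
```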
ERIC Educational Resources Information Center
Argyropoulos, Vassilios; Nikolaraizi, Magda; Tsiakali, Thomai; Kountrias, Polychronis; Koutsogiorgou, Sofia-Marina; Martos, Aineias
2014-01-01
This paper highlights the framework and discusses the results of an action research project which aimed to facilitate the adoption of assistive technology devices and specialized software by teachers of students with visual impairment via a digital educational game, developed specifically for this project. The persons involved in this…
Viewing Objects and Planning Actions: On the Potentiation of Grasping Behaviours by Visual Objects
ERIC Educational Resources Information Center
Makris, Stergios; Hadar, Aviad A.; Yarrow, Kielan
2011-01-01
How do humans interact with tools? Gibson (1979) suggested that humans perceive directly what tools afford in terms of meaningful actions. This "affordances" hypothesis implies that visual objects can potentiate motor responses even in the absence of an intention to act. Here we explore the temporal evolution of motor plans afforded by common…
Mizuguchi, N; Nakata, H; Kanosue, K
2016-02-19
To elucidate the neural substrate associated with capabilities for kinesthetic motor imagery of difficult whole-body movements, we measured brain activity during a trial involving both kinesthetic motor imagery and action observation as well as during a trial with action observation alone. Brain activity was assessed with functional magnetic resonance imaging (fMRI). Nineteen participants imagined three types of whole-body movements with the horizontal bar: the giant swing, kip, and chin-up during action observation. No participant had previously tried to perform the giant swing. The vividness of kinesthetic motor imagery as assessed by questionnaire was highest for the chin-up, less for the kip and lowest for the giant swing. Activity in the primary visual cortex (V1) during kinesthetic motor imagery with action observation minus that during action observation alone was significantly greater in the giant swing condition than in the chin-up condition within participants. Across participants, V1 activity of kinesthetic motor imagery of the kip during action observation minus that during action observation alone was negatively correlated with vividness of the kip imagery. These results suggest that activity in V1 is dependent upon the capability of kinesthetic motor imagery for difficult whole-body movements. Since V1 activity is likely related to the creation of a visual image, we speculate that visual motor imagery is recruited unintentionally for the less vivid kinesthetic motor imagery of difficult whole-body movements. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
Atkinson, Janette; Braddick, Oliver
2011-01-01
Visual information is believed to be processed through two distinct, yet interacting cortical streams. The ventral stream performs the computations needed for recognition of objects and faces ("what" and "who"?) and the dorsal stream the computations for registering spatial relationships and for controlling visually guided actions ("where" and "how"?). We initially proposed a model of spatial deficits in Williams syndrome (WS) in which visual abilities subserved by the ventral stream, such as face recognition, are relatively well developed (although not necessarily in exactly the same way as in typical development), whereas dorsal-stream functions, such as visuospatial actions, are markedly impaired. Since these initial findings in WS, deficits of motion coherence sensitivity, a dorsal-stream function, have been found in other genetic disorders such as Fragile X and autism, and as a consequence of perinatal events (in hemiplegia, perinatal brain anomalies following very premature birth), leading to the proposal of a general "dorsal-stream vulnerability" in many different conditions of abnormal human development. In addition, dorsal-stream systems provide information used in tasks of visuospatial memory and locomotor planning, and these systems are closely coupled to networks for attentional control. We and several other research groups have previously shown deficits of frontal and parietal lobe function in WS individuals for specific attention tasks [e.g., Atkinson, J., Braddick, O., Anker, S., Curran, W., & Andrew, R. (2003). Neurobiological models of visuospatial cognition in children with Williams Syndrome: Measures of dorsal-stream and frontal function. Developmental Neuropsychology, 23(1/2), 141-174.]. We have used the Test of Everyday Attention for Children (TEA-Ch), which aims to separate components of attention with distinct brain networks (selective attention, sustained attention, and attentional control/executive function), to test a group of older children with WS, but this test battery is too demanding for many children and adults with WS. Consequently, we have devised a new set of tests of attention, the Early Childhood Attention Battery (ECAB). This uses similar principles to the TEA-Ch, but adapted for mental ages younger than 6 years. The ECAB shows a distinctive attention profile for WS individuals relative to their overall cognitive development, with relative strength in tasks of sustained attention and poorer performance on tasks of selective attention and executive control. These profiles, and the characteristic developmental courses, also show differences between children with Down's syndrome and WS. This chapter briefly reviews new research findings on WS in these areas, relating the development of brain systems in WS to evidence from neuroimaging in typically developing infants, children born very preterm, and normal adults. The hypothesis of "dorsal-stream(s) vulnerability" discussed here includes a number of interlinked brain networks, subserving not only global visual processing and formulation of visuomotor actions but also interlinked networks of attention. Copyright © 2011 Elsevier B.V. All rights reserved.
Omission P3 after voluntary action indexes the formation of action-driven prediction.
Kimura, Motohiro; Takeda, Yuji
2018-02-01
When humans frequently experience a certain sensory effect after a certain action, a bidirectional association between neural representations of the action and the sensory effect is rapidly acquired, which enables action-driven prediction of the sensory effect. The present study aimed to test whether or not omission P3, an event-related brain potential (ERP) elicited by the sudden omission of a sensory effect, is sensitive to the formation of action-driven prediction. For this purpose, we examined how omission P3 is affected by the number of possible visual effects. In four separate blocks (1-, 2-, 4-, and 8-stimulus blocks), participants successively pressed a right button at an interval of about 1s. In all blocks, each button press triggered a bar on a display (a bar with square edges, 85%; a bar with round edges, 5%), but occasionally did not (sudden omission of a visual effect, 10%). Participants were required to press a left button when a bar with round edges appeared. In the 1-stimulus block, the orientation of the bar was fixed throughout the block; in the 2-, 4-, and 8-stimulus blocks, the orientation was randomly varied among two, four, and eight possibilities, respectively. Omission P3 in the 1-stimulus block was greater than those in the 2-, 4-, and 8-stimulus blocks; there were no significant differences among the 2-, 4-, and 8-stimulus blocks. This binary pattern nicely fits the limitation in the acquisition of action-effect association; although an association between an action and one visual effect is easily acquired, associations between an action and two or more visual effects cannot be acquired concurrently. Taken together, the present results suggest that omission P3 is highly sensitive to the formation of action-driven prediction. Copyright © 2017 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Ono, Fuminori; Jiang, Yuhong; Kawahara, Jun-ichiro
2005-01-01
Contextual cuing refers to the facilitation of performance in visual search due to the repetition of the same displays. Whereas previous studies have focused on contextual cuing within single-search trials, this study tested whether 1 trial facilitates visual search of the next trial. Participants searched for a T among Ls. In the training phase,…
How a Visual Language of Abstract Shapes Facilitates Cultural and International Border Crossings
ERIC Educational Resources Information Center
Conroy, Arthur Thomas, III
2016-01-01
This article describes a visual language comprised of abstract shapes that has been shown to be effective in communicating prior knowledge between and within members of a small team or group. The visual language includes a set of geometric shapes and rules that guide the construction of the abstract diagrams that are the external representation of…
Effects of shade tab arrangement on the repeatability and accuracy of shade selection.
Yılmaz, Burak; Yuzugullu, Bulem; Cınar, Duygu; Berksun, Semih
2011-06-01
Appropriate and repeatable shade matching using visual shade selection remains a challenge for the restorative dentist. The purpose of this study was to evaluate the effect of different arrangements of a shade guide on the repeatability and accuracy of visual shade selection by restorative dentists. Three Vitapan Classical shade guides were used for shade selection. Seven shade tabs from one shade guide were used as target shades for the testing (A1, A4, B2, B3, C2, C4, and D3); the other 2 guides were used for shade selection by the subjects. One shade guide was arranged according to hue and chroma and the second was arranged according to value. Thirteen male and 22 female restorative dentists were asked to match the target shades using shade guide tabs arranged in the 2 different orders. The sessions were performed twice with each guide in a viewing booth. Collected data were analyzed with Fisher's exact test to compare the accuracy and repeatability of the shade selection (α=.05). There were no significant differences observed in the accuracy or repeatability of the shade selection results obtained with the 2 different arrangements. When the hue/chroma-ordered shade guide was used, 58% of the shade selections were accurate. This ratio was 57.6% when the value-ordered shade guide was used. The observers repeated 55.5% of the selections accurately with the hue/chroma-ordered shade guide and 54.3% with the value-ordered shade guide. The accuracy and repeatability of shade selections by restorative dentists were similar when different arrangements (hue/chroma-ordered and value-ordered) of the Vitapan Classical shade guide were used. Copyright © 2011 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
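The comparison of accuracy between the two shade-guide arrangements is a 2 x 2 Fisher's exact test. The sketch below shows such a test on counts reconstructed approximately from the reported percentages; the exact denominators are assumptions.

```python
from scipy.stats import fisher_exact

# Accurate vs. inaccurate selections under the two shade-guide arrangements.
# Counts are reconstructed approximately from the reported rates (58% and 57.6%),
# assuming 245 selections per arrangement (35 dentists x 7 target tabs); the
# exact denominators are an assumption.
hue_chroma  = [142, 103]   # [accurate, inaccurate], hue/chroma-ordered guide
value_order = [141, 104]   # [accurate, inaccurate], value-ordered guide

odds_ratio, p_value = fisher_exact([hue_chroma, value_order])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")   # expect p near 1
```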
Graham, N.; Zeman, A.; Young, A.; Patterson, K.; Hodges, J.
1999-01-01
OBJECTIVES—To investigate the roles of visual and tactile information in a dyspraxic patient with corticobasal degeneration (CBD) who showed dramatic facilitation in miming the use of a tool or object when he was given a tool to manipulate; and to study the nature of the praxic and neuropsychological deficits in CBD. METHODS—The subject had clinically diagnosed CBD, and exhibited alien limb behaviour and striking ideomotor dyspraxia. General neuropsychological evaluation focused on constructional and visuospatial abilities, calculation, verbal fluency, episodic and semantic memory, plus spelling and writing because impairments in this domain were presenting complaints. Four experiments assessed the roles of visual and tactile information in the facilitation of motor performance by tools. Experiment 1 evaluated the patient's performance of six limb transitive actions under six conditions: (1) after he described the relevant tool from memory, (2) after he was shown a line drawing of the tool, (3) after he was shown a real exemplar of the tool, (4) after he watched the experimenter perform the action, (5) while he was holding the tool, and (6) immediately after he had performed the action with the tool but with the tool removed from his grasp. Experiment 2 evaluated the use of the same six tools when the patient had tactile but no visual information (while he was blindfolded). Experiments 3 and 4 assessed performance of actions appropriate to the same six tools when the patient had either neutral or inappropriate tactile feedback—that is, while he was holding a non-tool object or a different tool. RESULTS—Miming of tool use was not facilitated by visual input; moreover, lack of visual information in the blindfolded condition did not reduce performance. The principal positive finding was a dramatic facilitation of the patient's ability to demonstrate object use when he was holding either the appropriate tool or a neutral object. Tools inappropriate to the requested action produced involuntary performance of the stimulus relevant action. CONCLUSIONS—Tactile stimulation was paramount in the facilitation of motor performance in tool use by this patient with CBD. This outcome suggests that tactile information should be included in models which hypothesise modality specific inputs to the action production system. Significant impairments in spelling and letter production that have not previously been reported in CBD have also been documented. PMID:10449556
Perceptions Concerning Visual Culture Dialogues of Visual Art Pre-Service Teachers
ERIC Educational Resources Information Center
Mamur, Nuray
2012-01-01
Commentary on visual art by visual art teachers is important in helping students process visual culture. This study attempts to describe the effect of including visual culture, grounded in everyday aesthetic experiences, in the art education learning process. An action research design, a qualitative approach, is conducted…
Spatial Working Memory Is Necessary for Actions to Guide Thought
ERIC Educational Resources Information Center
Thomas, Laura E.
2013-01-01
Directed actions can play a causal role in cognition, shaping thought processes. What drives this cross-talk between action and thought? I investigated the hypothesis that representations in spatial working memory mediate interactions between directed actions and problem solving. Participants attempted to solve an insight problem while…
75 FR 28073 - Draft Regulatory Guide: Issuance, Availability
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-19
... NUCLEAR REGULATORY COMMISSION [NRC-2010-0181] Draft Regulatory Guide: Issuance, Availability AGENCY: Nuclear Regulatory Commission. ACTION: Notice of Issuance and Availability of Draft Regulatory Guide, DG-3039, ``Standard Format and Content for Emergency Plans for Fuel Cycle and Materials Facilities.'' FOR FURTHER INFORMATION CONTACT: Kevin M....
76 FR 2726 - Withdrawal of Regulatory Guide 1.154
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-14
... NUCLEAR REGULATORY COMMISSION [NRC-2011-0010] Withdrawal of Regulatory Guide 1.154 AGENCY: Nuclear Regulatory Commission. ACTION: Withdrawal of Regulatory Guide 1.154, ``Format and Content of Plant-Specific Pressurized Thermal Shock Safety Analysis Reports for Pressurized Water Reactors.'' FOR FURTHER INFORMATION CONTACT: Mekonen M. Bayssie,...
76 FR 24539 - Final Regulatory Guide: Issuance, Availability
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-02
... NUCLEAR REGULATORY COMMISSION [NRC-2010-0181] Final Regulatory Guide: Issuance, Availability AGENCY: Nuclear Regulatory Commission. ACTION: Notice of Issuance and Availability of Regulatory Guide (RG) 3.67, ``Standard Format and Content for Emergency Plans for Fuel Cycle and Materials Facilities.'' FOR FURTHER INFORMATION CONTACT: Kevin M. Ramse...
Development of the navigation system for visually impaired.
Harada, Tetsuya; Kaneko, Yuki; Hirahara, Yoshiaki; Yanashima, Kenji; Magatani, Kazushige
2004-01-01
A white cane is a typical support instrument for the visually impaired, used to detect obstacles while walking. In areas for which they have a mental map, visually impaired people can therefore walk with a white cane without the help of others. However, they cannot walk independently in unfamiliar areas, even with a white cane, because the cane detects obstacles but does not guide them along the correct route. We are developing a navigation system for the visually impaired for use in indoor spaces. In Japan, colored guide lines to a destination are sometimes provided for sighted people: these lines are attached to the floor, and by walking along one of them a person can reach the destination. In our system, a newly developed white cane senses such a colored guide line and notifies the user by vibration. The system recognizes the colored line on the floor with an optical sensor attached to the white cane. To guide the user still more smoothly, infrared beacons (optical beacons) that provide voice guidance are also used.
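The cane's core behavior, vibrating while the optical sensor reads the guide-line color, can be sketched with a simple color-matching loop. The following is a hypothetical illustration, not the authors' firmware; the target color, tolerance, and the read_sensor/vibrate hardware interfaces are invented.

```python
def matches_guide_line(rgb, target_rgb=(200, 40, 40), tolerance=60):
    """Return True when an RGB reading from the cane's optical sensor is close
    to the guide line's color. Target color and tolerance are illustrative; a
    real cane would be calibrated to the installed line."""
    return all(abs(c - t) <= tolerance for c, t in zip(rgb, target_rgb))

def cane_step(read_sensor, vibrate):
    """One pass of the sensing loop: vibrate while the tip is over the line.
    read_sensor() -> (r, g, b) and vibrate(on) are hypothetical hardware hooks."""
    vibrate(matches_guide_line(read_sensor()))

# Toy usage with fake hardware callbacks: first reading is on the line, second is not
readings = iter([(210, 50, 35), (120, 130, 125)])
for _ in range(2):
    cane_step(lambda: next(readings), lambda on: print("vibration", "ON" if on else "OFF"))
```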
An Active System for Visually-Guided Reaching in 3D across Binocular Fixations
2014-01-01
Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biological approach inspired by cortical neural architecture. Motor information is coded in egocentric coordinates obtained from an allocentric representation of space (in terms of disparity), which is itself generated from the egocentric representation of the visual information (image coordinates). In this way, the different aspects of visuomotor coordination are integrated: an active vision system composed of two vergent cameras; a module for 2D binocular disparity estimation based on local estimation of phase differences through a bank of Gabor filters; and a robotic actuator that performs the corresponding tasks (visually guided reaching). The approach's performance is evaluated through experiments on both simulated and real data. PMID:24672295
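Phase-based disparity estimation with Gabor filters rests on the relation that a local shift d produces a phase difference of roughly 2*pi*f*d at a filter tuned to spatial frequency f. The sketch below is a single-scale, one-dimensional simplification of the filter-bank scheme described in the abstract; the filter parameters and test signal are illustrative.

```python
import numpy as np

def gabor_phase_disparity(left, right, freq=0.08, sigma=8.0):
    """Estimate horizontal disparity (pixels) for a pair of 1-D image rows from
    the phase difference of complex Gabor responses: d ~ dphi / (2 * pi * freq).
    A single-scale simplification of the multi-filter scheme in the abstract;
    the frequency and envelope width are illustrative."""
    x = np.arange(-4 * sigma, 4 * sigma + 1)
    gabor = np.exp(-x ** 2 / (2 * sigma ** 2)) * np.exp(2j * np.pi * freq * x)
    resp_l = np.convolve(left, gabor, mode="same")    # complex responses
    resp_r = np.convolve(right, gabor, mode="same")
    dphi = np.angle(resp_l * np.conj(resp_r))         # phase difference per pixel
    return dphi / (2 * np.pi * freq)

# Toy check: the right row is the left row shifted by 3 pixels
rng = np.random.default_rng(4)
row = rng.normal(size=256)
left, right = row, np.roll(row, 3)
estimate = gabor_phase_disparity(left, right)
print(f"median estimated disparity: {np.median(estimate[20:-20]):.2f} px (true shift 3)")
```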
Marzoli, Daniele; Menditto, Silvia; Lucafò, Chiara; Tommasi, Luca
2013-08-01
In a previous study, we found that when required to imagine another person performing an action, participants reported a higher correspondence between their own dominant hand and the hand used by the imagined person when the agent was visualized from the back compared to when the agent was visualized from the front. This suggests a greater involvement of motor representations in the back-view perspective, possibly indicating a greater proneness to put oneself in the agent's shoes in such a condition. In order to assess whether bringing to the foreground the right or left hand of an imagined agent can foster the activation of the corresponding motor representations, we required 384 participants to imagine a person, as seen from the right or left side, performing a single manual action and to indicate the hand used by the imagined person during movement execution. The proportion of right- versus left-handed reported actions was higher in the right-view condition than in the left-view condition, suggesting that a lateral vantage point may activate the corresponding hand motor representations, which is in line with previous research indicating a link between the hemispheric specialization of one's own body and the visual representation of others' bodies. Moreover, in agreement with research on hand laterality judgments, the effect of vantage point was stronger for left-handers (who reported a higher proportion of right- than left-handed actions in the right-view condition and a slightly higher proportion of left- than right-handed actions in the left-view condition) than for right-handers (who reported a higher proportion of right- than left-handed actions in both view conditions), indicating that during the mental simulation of others' actions, right-handers rely on sensorimotor processes more than left-handers, while left-handers rely on visual processes more than right-handers.
Lee, Kyoung-Min; Ahn, Kyung-Ha; Keller, Edward L.
2012-01-01
The frontal eye fields (FEF), originally identified as an oculomotor cortex, have also been implicated in perceptual functions, such as constructing a visual saliency map and shifting visual attention. Further dissecting the area’s role in the transformation from visual input to oculomotor command has been difficult because of spatial confounding between stimuli and responses and consequently between intermediate cognitive processes, such as attention shift and saccade preparation. Here we developed two tasks in which the visual stimulus and the saccade response were dissociated in space (the extended memory-guided saccade task), and bottom-up attention shift and saccade target selection were independent (the four-alternative delayed saccade task). Reversible inactivation of the FEF in rhesus monkeys disrupted, as expected, contralateral memory-guided saccades, but visual detection was demonstrated to be intact at the same field. Moreover, saccade behavior was impaired when a bottom-up shift of attention was not a prerequisite for saccade target selection, indicating that the inactivation effect was independent of the previously reported dysfunctions in bottom-up attention control. These findings underscore the motor aspect of the area’s functions, especially in situations where saccades are generated by internal cognitive processes, including visual short-term memory and long-term associative memory. PMID:22761923
DVV: a taxonomy for mixed reality visualization in image guided surgery.
Kersten-Oertel, Marta; Jannin, Pierre; Collins, D Louis
2012-02-01
Mixed reality visualizations are increasingly studied for use in image guided surgery (IGS) systems, yet few mixed reality systems have been introduced for daily use into the operating room (OR). This may be the result of several factors: the systems are developed from a technical perspective, are rarely evaluated in the field, and/or lack consideration of the end user and the constraints of the OR. We introduce the Data, Visualization processing, View (DVV) taxonomy which defines each of the major components required to implement a mixed reality IGS system. We propose that these components be considered and used as validation criteria for introducing a mixed reality IGS system into the OR. A taxonomy of IGS visualization systems is a step toward developing a common language that will help developers and end users discuss and understand the constituents of a mixed reality visualization system, facilitating a greater presence of future systems in the OR. We evaluate the DVV taxonomy based on its goodness of fit and completeness. We demonstrate the utility of the DVV taxonomy by classifying 17 state-of-the-art research papers in the domain of mixed reality visualization IGS systems. Our classification shows that few IGS visualization systems' components have been validated and even fewer are evaluated.
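To make the taxonomy concrete, the sketch below classifies a hypothetical IGS visualization system by its Data, Visualization processing, and View components; the attribute names and example values are assumptions for illustration, not drawn from the paper.

```python
# Illustrative classification of an IGS visualization system using the DVV
# components named above; the specific attributes and values are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class DVVClassification:
    data: List[str]                      # e.g. preoperative images, tracked tools
    visualization_processing: List[str]  # e.g. segmentation, surface rendering
    view: List[str]                      # e.g. microscope overlay, external monitor
    validated: bool = False              # has each component been validated?

example = DVVClassification(
    data=["preoperative MRI", "tracked pointer"],
    visualization_processing=["tumour segmentation", "surface rendering"],
    view=["augmented microscope overlay"],
)
```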
ERIC Educational Resources Information Center
Sly, Carolie; And Others
The purpose of this guide is to provide teachers and other educators with classroom lessons and instructional techniques that build a fundamental understanding of the environment. The guide is aimed at grades kindergarten through sixth and consists of eight instructional units and six action projects. Each unit is organized around a theme and the…
ERIC Educational Resources Information Center
Pistone, Nancy
This handbook is a guide to help educators and administrators with the decisions they face in the design of an arts assessment. The guide is divided into two broad parts: Part 1: "Background for Thoughtful Arts Education Assessment"; and Part 2: "Assessment Design in Action." The guide includes: (1) a brief background on the…
Particles in Action. Study Guide. Unit C2. ZIM-SCI, Zimbabwe Secondary School Science Project.
ERIC Educational Resources Information Center
Stocklmayer, Sue
The Zimbabwe Secondary School Science Project (ZIM-SCI) developed student study guides, corresponding teaching guides, and science kits for a low-cost science course which could be taught during the first 2 years of secondary school without the aid of qualified teachers and conventional laboratories. This ZIM-SCI study guide is a four-part unit…
Particles in Action. Teacher's Guide. Unit C2. ZIM-SCI, Zimbabwe Secondary School Science Project.
ERIC Educational Resources Information Center
Stocklmayer, Sue
The Zimbabwe Secondary School Science Project (ZIM-SCI) developed student study guides, corresponding teaching guides, and science kits for a low-cost science course which could be taught during the first 2 years of secondary school without the aid of qualified teachers and conventional laboratories. This teaching guide, designed to be read in…
Cheng, Po-Hsun
2016-01-01
Several assistive technologies are available to help visually impaired individuals avoid obstructions while walking. Unfortunately, white canes and medical walkers are unable to detect obstacles on the road or react to encumbrances located above the waist. In this study, I adopted the cyber-physical system approach in the development of a cap-connected device to compensate for gaps in detection associated with conventional aids for the visually impaired. I developed a verisimilar experimental route, including straight sections, left turns, right turns, curves, and suspended objects, walked by seven participants with visual impairment, with the aim of collecting the information required for practical use of the device. The findings from this small sample demonstrate the feasibility of the proposed guiding device in alerting walkers to the presence of certain kinds of obstacles, and thus its promise for future research and development. They also provide a valuable reference for the further improvement of such devices and for the design of experiments involving visually impaired participants.
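The abstract does not describe the device's sensing hardware or alert logic, so the sketch below is purely illustrative: it assumes a forward-facing distance sensor mounted on the cap and a simple stopping-distance threshold for triggering an alert.

```python
# Hypothetical alert logic for a cap-mounted obstacle detector; the sensor type,
# thresholds, and parameters are assumptions, not details from the study.
def should_alert(distance_m, walking_speed_mps=1.2, reaction_time_s=1.5,
                 margin_m=0.5):
    """Alert when an above-waist obstacle lies within the walker's stopping envelope."""
    stopping_distance = walking_speed_mps * reaction_time_s + margin_m
    return distance_m <= stopping_distance
```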
NASA Astrophysics Data System (ADS)
Rogowitz, Bernice E.; Matasci, Naim
2011-03-01
The explosion of online scientific data from experiments, simulations, and observations has given rise to an avalanche of algorithmic, visualization and imaging methods. There has also been enormous growth in the introduction of tools that provide interactive interfaces for exploring these data dynamically. Most systems, however, do not support the real-time exploration of patterns and relationships across tools and do not provide guidance on which colors, colormaps or visual metaphors will be most effective. In this paper, we introduce a general architecture for sharing metadata between applications and a "Metadata Mapper" component that allows the analyst to decide how metadata from one component should be represented in another, guided by perceptual rules. This system is designed to support "brushing" [1], in which highlighting a region of interest in one application automatically highlights corresponding values in another, allowing the scientist to develop insights from multiple sources. Our work builds on the component-based iPlant Cyberinfrastructure [2] and provides a general approach to supporting interactive exploration across independent visualization and visual analysis components.
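A minimal sketch of the brushing mechanism described above: a shared hub relays a selection made in one component to every registered component, each of which decides how to represent the metadata in its own display. The class and callback names are illustrative assumptions, not the paper's actual API (which builds on the iPlant components cited in the abstract).

```python
# Minimal publish/subscribe sketch of cross-component brushing; the names below
# are illustrative assumptions rather than the system's real interfaces.
from typing import Callable, Dict, List

Selection = Dict[str, object]   # e.g. {"gene_ids": [...], "source": "heatmap"}

class MetadataHub:
    """Relays a selection made in one component to all registered components."""
    def __init__(self):
        self._subscribers: List[Callable[[Selection], None]] = []

    def register(self, on_selection: Callable[[Selection], None]) -> None:
        self._subscribers.append(on_selection)

    def brush(self, selection: Selection) -> None:
        for callback in self._subscribers:
            callback(selection)   # each component maps the metadata to its own highlight

hub = MetadataHub()
hub.register(lambda sel: print("scatterplot highlights", sel["gene_ids"]))
hub.register(lambda sel: print("tree view highlights", sel["gene_ids"]))
hub.brush({"gene_ids": ["AT1G01010", "AT1G01020"], "source": "heatmap"})
```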
Sensory Agreement Guides Kinetic Energy Optimization of Arm Movements during Object Manipulation.
Farshchiansadegh, Ali; Melendez-Calderon, Alejandro; Ranganathan, Rajiv; Murphey, Todd D; Mussa-Ivaldi, Ferdinando A
2016-04-01
The laws of physics establish the energetic efficiency of our movements. In some cases, like locomotion, the mechanics of the body dominate in determining the energetically optimal course of action. In other tasks, such as manipulation, energetic costs depend critically upon the variable properties of objects in the environment. Can the brain identify and follow energy-optimal motions when these motions require moving along unfamiliar trajectories? What feedback information is required for such optimal behavior to occur? To answer these questions, we asked participants to move their dominant hand between different positions while holding a virtual mechanical system with complex dynamics (a planar double pendulum). In this task, trajectories of minimum kinetic energy followed curvilinear paths. Our findings demonstrate that participants were capable of finding the energy-optimal paths, but only when provided with veridical visual and haptic information about the object; without this information, they executed rectilinear trajectories.
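For reference, the kinetic-energy cost that defines the optimal paths in this kind of task can be written down directly for a planar double pendulum with point masses at the link endpoints; the masses and link lengths below are illustrative assumptions rather than the study's actual object properties.

```python
# Kinetic energy of a planar double pendulum with point masses at the link
# endpoints, using absolute link angles; parameters are illustrative assumptions.
import numpy as np

def kinetic_energy(theta1, theta2, dtheta1, dtheta2,
                   m1=1.0, m2=1.0, l1=0.3, l2=0.3):
    """Return kinetic energy (J) for the given link angles (rad) and angular velocities (rad/s)."""
    v1_sq = (l1 * dtheta1) ** 2
    v2_sq = (l1 * dtheta1) ** 2 + (l2 * dtheta2) ** 2 \
            + 2 * l1 * l2 * dtheta1 * dtheta2 * np.cos(theta1 - theta2)
    return 0.5 * m1 * v1_sq + 0.5 * m2 * v2_sq
```

Integrating this quantity along a candidate hand path is what makes curvilinear trajectories cheaper than rectilinear ones for such an object.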
Overgaard, Morten; Mogensen, Jesper
2014-01-01
This article proposes a new model to interpret seemingly conflicting evidence concerning the correlation of consciousness and neural processes. Based on an analysis of research on blindsight and subliminal perception, the reorganization of elementary functions and consciousness framework suggests that mental representations consist of functions at several different levels of analysis, including truly localized perceptual elementary functions and perceptual algorithmic modules, which are interconnections of the elementary functions. We suggest that conscious content relates to the 'top level' of analysis in a 'situational algorithmic strategy' that reflects the general state of an individual. We argue that conscious experience is intrinsically related to representations that are available to guide behaviour. From this perspective, we find that blindsight and subliminal perception can be explained partly by too coarse-grained a methodology and partly by top-down enhancement of representations that would not normally be relevant to action. PMID:24639581
Handbook for Teachers of the Visually Handicapped.
ERIC Educational Resources Information Center
Napier, Grace D.; Weishahn, Mel W.
Designed to aid the inexperienced teacher of the visually handicapped, the handbook examines aspects of program objectives, content, philosophy, methods, eligibility, and placement procedures. The guide to material selection provides specific information on the acquisition of Braille materials, large type materials, recorded materials, direct…
Pilot/vehicle model analysis of visually guided flight
NASA Technical Reports Server (NTRS)
Zacharias, Greg L.
1991-01-01
Information is given in graphical and outline form on a pilot/vehicle model description, control of altitude with simple terrain cues, simulated flight with visual scene delays, model-based in-cockpit display design, and some thoughts on the role of pilot/vehicle modeling.