Sample records for visual object-oriented iterative

  1. VIMOS Instrument Control Software Design: an Object Oriented Approach

    NASA Astrophysics Data System (ADS)

    Brau-Nogué, Sylvie; Lucuix, Christian

    2002-12-01

    The Franco-Italian VIMOS instrument is a VIsible imaging Multi-Object Spectrograph with outstanding multiplex capabilities, allowing spectra of more than 800 objects to be taken simultaneously, or integral field spectroscopy over a 54x54 arcsec area. VIMOS is being installed at the Nasmyth focus of the third Unit Telescope of the European Southern Observatory Very Large Telescope (VLT) at Mount Paranal in Chile. This paper describes the analysis, design, and implementation of the VIMOS Instrument Control System (ICS), using UML notation. Our control group followed an object-oriented software process while keeping in mind the ESO VLT standard control concepts, for which a complete software library is available. Rather than applying a waterfall lifecycle, the ICS project used iterative development, a lifecycle consisting of several iterations. Each iteration consisted of requirements capture and evaluation, visual modeling for analysis and design, implementation, test, and deployment. Depending on the project phase, iterations focused more or less on specific activities. The result is an object model (the design model), including use-case realizations, complemented by an implementation view and a deployment view. An extract of the VIMOS ICS UML model is presented, and some implementation, integration, and test issues are discussed.

  2. A Biologically Plausible Transform for Visual Recognition that is Invariant to Translation, Scale, and Rotation.

    PubMed

    Sountsov, Pavel; Santucci, David M; Lisman, John E

    2011-01-01

    Visual object recognition occurs easily despite differences in position, size, and rotation of the object, but the neural mechanisms responsible for this invariance are not known. We have found a set of transforms that achieve invariance in a neurally plausible way. We find that a transform based on local spatial frequency analysis of oriented segments and on logarithmic mapping, when applied twice in an iterative fashion, produces an output image that is unique to the object and that remains constant as the input image is shifted, scaled, or rotated.
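
    A well-known signal-processing analogue of this two-stage idea is the Fourier-Mellin construction: a spectral magnitude discards translation, a logarithmic (log-polar) remapping converts scaling and rotation into shifts, and a second magnitude discards those shifts. The NumPy sketch below illustrates that principle only; it is not the authors' biologically plausible circuit, and the grid sizes and interpolation choices are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar(img, n_r=64, n_theta=64):
    """Resample an image onto a log-polar grid centred on the image centre."""
    cy, cx = (np.asarray(img.shape) - 1) / 2.0
    r_max = min(cx, cy)
    r = np.exp(np.linspace(0.0, np.log(r_max), n_r))          # log-spaced radii
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    ys = cy + r[:, None] * np.sin(theta)[None, :]
    xs = cx + r[:, None] * np.cos(theta)[None, :]
    return map_coordinates(img, [ys, xs], order=1, mode='nearest')

def invariant_signature(img):
    """Two rounds of 'spectral magnitude + logarithmic mapping'."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))   # magnitude: drops translation
    lp = log_polar(np.log1p(mag))                     # log-polar: scale/rotation -> shifts
    sig = np.abs(np.fft.fft2(lp))                     # second magnitude: drops those shifts
    return sig / np.linalg.norm(sig)

# usage: signatures of an image and its (circularly) shifted copy nearly coincide
rng = np.random.default_rng(0)
img = rng.random((128, 128))
s1, s2 = invariant_signature(img), invariant_signature(np.roll(img, 17, axis=1))
print(np.sum(s1 * s2))   # close to 1.0
```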

  3. A Biologically Plausible Transform for Visual Recognition that is Invariant to Translation, Scale, and Rotation

    PubMed Central

    Sountsov, Pavel; Santucci, David M.; Lisman, John E.

    2011-01-01

    Visual object recognition occurs easily despite differences in position, size, and rotation of the object, but the neural mechanisms responsible for this invariance are not known. We have found a set of transforms that achieve invariance in a neurally plausible way. We find that a transform based on local spatial frequency analysis of oriented segments and on logarithmic mapping, when applied twice in an iterative fashion, produces an output image that is unique to the object and that remains constant as the input image is shifted, scaled, or rotated. PMID:22125522

  4. Accelerated iterative beam angle selection in IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bangert, Mark, E-mail: m.bangert@dkfz.de; Unkelbach, Jan

    2016-03-15

    Purpose: Iterative methods for beam angle selection (BAS) for intensity-modulated radiation therapy (IMRT) planning sequentially construct a beneficial ensemble of beam directions. In a naïve implementation, the nth beam is selected by adding beam orientations one-by-one from a discrete set of candidates to an existing ensemble of (n − 1) beams. The best beam orientation is identified in a time-consuming process by solving the fluence map optimization (FMO) problem for every candidate beam and selecting the beam that yields the largest improvement to the objective function value. This paper evaluates two alternative methods to accelerate iterative BAS based on surrogates for the FMO objective function value. Methods: We suggest selecting candidate beams not based on the FMO objective function value after convergence but (1) based on the objective function value after five FMO iterations of a gradient-based algorithm and (2) based on a projected gradient of the FMO problem in the first iteration. The performance of the objective function surrogates is evaluated based on the resulting objective function values and dose statistics in a treatment planning study comprising three intracranial, three pancreas, and three prostate cases. Furthermore, iterative BAS is evaluated for an application in which a small number of noncoplanar beams complement a set of coplanar beam orientations. This scenario is of practical interest as noncoplanar setups may require additional attention of the treatment personnel for every couch rotation. Results: Iterative BAS relying on objective function surrogates yields similar results compared to naïve BAS with regard to the objective function values and dose statistics. At the same time, early stopping of the FMO and using the projected gradient during the first iteration enable reductions in computation time by approximately one to two orders of magnitude. With regard to the clinical delivery of noncoplanar IMRT treatments, we could show that optimized beam ensembles using only a few noncoplanar beam orientations often approach the plan quality of fully noncoplanar ensembles. Conclusions: We conclude that iterative BAS in combination with objective function surrogates can be a viable option to implement automated BAS at clinically acceptable computation times.

  5. Accelerated iterative beam angle selection in IMRT.

    PubMed

    Bangert, Mark; Unkelbach, Jan

    2016-03-01

    Iterative methods for beam angle selection (BAS) for intensity-modulated radiation therapy (IMRT) planning sequentially construct a beneficial ensemble of beam directions. In a naïve implementation, the nth beam is selected by adding beam orientations one-by-one from a discrete set of candidates to an existing ensemble of (n - 1) beams. The best beam orientation is identified in a time-consuming process by solving the fluence map optimization (FMO) problem for every candidate beam and selecting the beam that yields the largest improvement to the objective function value. This paper evaluates two alternative methods to accelerate iterative BAS based on surrogates for the FMO objective function value. We suggest selecting candidate beams not based on the FMO objective function value after convergence but (1) based on the objective function value after five FMO iterations of a gradient-based algorithm and (2) based on a projected gradient of the FMO problem in the first iteration. The performance of the objective function surrogates is evaluated based on the resulting objective function values and dose statistics in a treatment planning study comprising three intracranial, three pancreas, and three prostate cases. Furthermore, iterative BAS is evaluated for an application in which a small number of noncoplanar beams complement a set of coplanar beam orientations. This scenario is of practical interest as noncoplanar setups may require additional attention of the treatment personnel for every couch rotation. Iterative BAS relying on objective function surrogates yields similar results compared to naïve BAS with regard to the objective function values and dose statistics. At the same time, early stopping of the FMO and using the projected gradient during the first iteration enable reductions in computation time by approximately one to two orders of magnitude. With regard to the clinical delivery of noncoplanar IMRT treatments, we could show that optimized beam ensembles using only a few noncoplanar beam orientations often approach the plan quality of fully noncoplanar ensembles. We conclude that iterative BAS in combination with objective function surrogates can be a viable option to implement automated BAS at clinically acceptable computation times.
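
    As a rough illustration of the surrogate idea described above (not the authors' clinical implementation), the sketch below greedily adds beams one-by-one and scores each candidate with only a few projected-gradient iterations of a toy least-squares FMO problem; the dose-influence matrices and step size are made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (assumed, not clinical data): each candidate beam has a random
# dose-influence matrix D[b] mapping its fluence to dose in n_vox voxels.
n_vox, n_bixel, n_candidates = 200, 20, 36
D = [rng.random((n_vox, n_bixel)) for _ in range(n_candidates)]
d_presc = np.ones(n_vox)  # prescribed dose

def fmo_surrogate(beams, n_iter=5, step=1e-4):
    """Run only a few projected-gradient FMO iterations and return the
    objective value -- the 'early stopping' surrogate described above."""
    A = np.hstack([D[b] for b in beams])          # stacked dose-influence matrix
    x = np.zeros(A.shape[1])                      # fluence, kept nonnegative
    for _ in range(n_iter):
        grad = 2 * A.T @ (A @ x - d_presc)
        x = np.maximum(x - step * grad, 0.0)      # projected gradient step
    return np.sum((A @ x - d_presc) ** 2)

def iterative_bas(n_beams):
    """Greedy beam-angle selection driven by the surrogate objective."""
    selected = []
    for _ in range(n_beams):
        remaining = [b for b in range(n_candidates) if b not in selected]
        scores = {b: fmo_surrogate(selected + [b]) for b in remaining}
        selected.append(min(scores, key=scores.get))  # best surrogate value
    return selected

print(iterative_bas(5))
```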

  6. Hierarchical image feature extraction by an irregular pyramid of polygonal partitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skurikhin, Alexei N

    2008-01-01

    We present an algorithmic framework for hierarchical image segmentation and feature extraction. We build a successive fine-to-coarse hierarchy of irregular polygonal partitions of the original image. This multiscale hierarchy forms the basis for object-oriented image analysis. The framework incorporates the Gestalt principles of visual perception, such as proximity and closure, and exploits spectral and textural similarities of polygonal partitions, while iteratively grouping them until dissimilarity criteria are exceeded. Seed polygons are built upon a triangular mesh composed of irregularly sized triangles, whose spatial arrangement is adapted to the image content. This is achieved by building the triangular mesh on top of detected spectral discontinuities (such as edges), which form a network of constraints for the Delaunay triangulation. The image is then represented as a spatial network in the form of a graph with vertices corresponding to the polygonal partitions and edges reflecting their relations. The iterative agglomeration of partitions into object-oriented segments is formulated as Minimum Spanning Tree (MST) construction. An important characteristic of the approach is that the agglomeration of polygonal partitions is constrained by the detected edges; thus the shapes of agglomerated partitions are more likely to correspond to the outlines of real-world objects. The constructed partitions and their spatial relations are characterized using spectral, textural and structural features based on proximity graphs. The framework allows searching for object-oriented features of interest across multiple levels of detail of the built hierarchy and can be generalized to the multi-criteria MST to account for multiple criteria important for an application.
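
    A minimal sketch of the agglomeration step described above, assuming regions are already summarized by a single spectral feature and an adjacency list: edges are processed in order of increasing dissimilarity, Kruskal/MST style, and merging stops once the dissimilarity criterion is exceeded. The edge-constrained Delaunay seeding and the texture/structure features of the full framework are omitted, and dissimilarity is computed on the seed features only.

```python
import numpy as np

def mst_agglomerate(features, edges, max_dissimilarity):
    """Kruskal-style agglomeration of region nodes: repeatedly merge the most
    similar adjacent pair until the dissimilarity criterion is exceeded.
    features: 1-D array of per-region mean values (e.g. mean intensity)
    edges:    list of (i, j) adjacency pairs between regions"""
    parent = list(range(len(features)))

    def find(i):                        # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # sort adjacency edges by spectral dissimilarity, as in MST construction
    weighted = sorted(edges, key=lambda e: abs(features[e[0]] - features[e[1]]))
    for i, j in weighted:
        if abs(features[i] - features[j]) > max_dissimilarity:
            break                        # stop: criterion exceeded
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj              # merge the two segments
    return [find(i) for i in range(len(features))]   # segment label per region

# usage: four seed regions in a chain, with a large spectral jump between 1 and 2
labels = mst_agglomerate(np.array([0.10, 0.12, 0.80, 0.82]),
                         [(0, 1), (1, 2), (2, 3)], max_dissimilarity=0.1)
print(labels)   # regions 0-1 share one label, regions 2-3 another
```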

  7. Hybrid region merging method for segmentation of high-resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Xueliang; Xiao, Pengfeng; Feng, Xuezhi; Wang, Jiangeng; Wang, Zuo

    2014-12-01

    Image segmentation remains a challenging problem for object-based image analysis. In this paper, a hybrid region merging (HRM) method is proposed to segment high-resolution remote sensing images. HRM integrates the advantages of global-oriented and local-oriented region merging strategies into a unified framework. The globally most-similar pair of regions is used to determine the starting point of a growing region, which provides an elegant way to avoid the problem of starting point assignment and to enhance the optimization ability for local-oriented region merging. During the region growing procedure, the merging iterations are constrained within the local vicinity, so that the segmentation is accelerated and can reflect the local context, as compared with the global-oriented method. A set of high-resolution remote sensing images is used to test the effectiveness of the HRM method, and three region-based remote sensing image segmentation methods are adopted for comparison, including the hierarchical stepwise optimization (HSWO) method, the local-mutual best region merging (LMM) method, and the multiresolution segmentation (MRS) method embedded in eCognition Developer software. Both the supervised evaluation and visual assessment show that HRM performs better than HSWO and LMM by combining both their advantages. The segmentation results of HRM and MRS are visually comparable, but HRM can describe objects as single regions better than MRS, and the supervised and unsupervised evaluation results further prove the superiority of HRM.

  8. A conditioned visual orientation requires the ellipsoid body in Drosophila

    PubMed Central

    Guo, Chao; Du, Yifei; Yuan, Deliang; Li, Meixia; Gong, Haiyun; Gong, Zhefeng

    2015-01-01

    Orientation, the spatial organization of animal behavior, is an essential faculty of animals. Bacteria and lower animals such as insects exhibit taxis, innate orientation behavior, directly toward or away from a directional cue. Organisms can also orient themselves at a specific angle relative to the cues. In this study, using Drosophila as a model system, we established a visual orientation conditioning paradigm based on a flight simulator in which a stationary flying fly could control the rotation of a visual object. By coupling aversive heat shocks to a fly's orientation toward one side of the visual object, we found that the fly could be conditioned to orientate toward the left or right side of the frontal visual object and retain this conditioned visual orientation. The lower and upper visual fields have different roles in conditioned visual orientation. Transfer experiments showed that conditioned visual orientation could generalize between visual targets of different sizes, compactness, or vertical positions, but not of contour orientation. Rut (a Type I adenylyl cyclase) and Dnc (a phosphodiesterase) were dispensable for visual orientation conditioning. Normal activity and scb signaling in R3/R4d neurons of the ellipsoid body were required for visual orientation conditioning. Our studies established a visual orientation conditioning paradigm and examined the behavioral properties and neural circuitry of visual orientation, an important component of the insect's spatial navigation. PMID:25512578

  9. Visual Search for Object Orientation Can Be Modulated by Canonical Orientation

    ERIC Educational Resources Information Center

    Ballaz, Cecile; Boutsen, Luc; Peyrin, Carole; Humphreys, Glyn W.; Marendaz, Christian

    2005-01-01

    The authors studied the influence of canonical orientation on visual search for object orientation. Displays consisted of pictures of animals whose axis of elongation was either vertical or tilted in their canonical orientation. Target orientation could be either congruent or incongruent with the object's canonical orientation. In Experiment 1,…

  10. An object-oriented simulator for 3D digital breast tomosynthesis imaging system.

    PubMed

    Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for the 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users to select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods including the algebraic reconstruction technique (ART) and total variation regularized reconstruction techniques (ART+TV) are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating performances of the methods using mean structural similarity (MSSIM) values.
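
    For readers unfamiliar with ART, the sketch below shows the basic Kaczmarz-style update on a toy linear system; it illustrates the iterative reconstruction principle only and is unrelated to the simulator's C++ code, acquisition geometry, or TV-regularized variant.

```python
import numpy as np

def art(A, b, n_sweeps=20, relax=0.5):
    """Algebraic reconstruction technique (Kaczmarz): sweep over projection
    rows and relax x toward satisfying each ray equation A[i] @ x = b[i]."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += relax * (b[i] - a @ x) / (a @ a) * a
    return x

# toy "phantom" and projection geometry (assumed): a random system matrix
rng = np.random.default_rng(1)
x_true = rng.random(50)
A = rng.random((120, 50))         # 120 rays, 50 voxels
b = A @ x_true                    # noiseless projections
x_rec = art(A, b)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))  # relative error
```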

  11. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    PubMed Central

    Cengiz, Kubra

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for the 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users to select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods including the algebraic reconstruction technique (ART) and total variation regularized reconstruction techniques (ART+TV) are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating performances of the methods using mean structural similarity (MSSIM) values. PMID:24371468

  12. Determining the orientation of depth-rotated familiar objects.

    PubMed

    Niimi, Ryosuke; Yokosawa, Kazuhiko

    2008-02-01

    How does the human visual system determine the depth-orientation of familiar objects? We examined reaction times and errors in the detection of 15 degrees differences in the depth orientations of two simultaneously presented familiar objects, which were the same objects (Experiment 1) or different objects (Experiment 2). Detection of orientation differences was best for 0 degrees (front) and 180 degrees (back), while 45 degrees and 135 degrees yielded poorer results, and 90 degrees (side) showed intermediate results, suggesting that the visual system is tuned for front, side and back orientations. We further found that those advantages are due to orientation-specific features such as horizontal linear contours and symmetry, since the 90 degrees advantage was absent for objects with curvilinear contours, and asymmetric objects diminished the 0 degrees and 180 degrees advantages. We conclude that the efficiency of visually determining object orientation is highly orientation-dependent, and object orientation may be perceived in favor of front-back axes.

  13. Relationship between visual binding, reentry and awareness.

    PubMed

    Koivisto, Mika; Silvanto, Juha

    2011-12-01

    Visual feature binding has been suggested to depend on reentrant processing. We addressed the relationship between binding, reentry, and visual awareness by asking the participants to discriminate the color and orientation of a colored bar (presented either alone or simultaneously with a white distractor bar) and to report their phenomenal awareness of the target features. The success of reentry was manipulated with object substitution masking and backward masking. The results showed that late reentrant processes are necessary for successful binding but not for phenomenal awareness of the bound features. Binding errors were accompanied by phenomenal awareness of the misbound feature conjunctions, demonstrating that they were experienced as real properties of the stimuli (i.e., illusory conjunctions). Our results suggest that early preattentive binding and local recurrent processing enable features to reach phenomenal awareness, while later attention-related reentrant iterations modulate the way in which the features are bound and experienced in awareness.

  14. Perceived Average Orientation Reflects Effective Gist of the Surface.

    PubMed

    Cha, Oakyoon; Chong, Sang Chul

    2018-03-01

    The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.

  15. Multi-point objective-oriented sequential sampling strategy for constrained robust design

    NASA Astrophysics Data System (ADS)

    Zhu, Ping; Zhang, Siliang; Chen, Wei

    2015-03-01

    Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
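
    The sketch below is a generic stand-in for the multi-point idea, not the authors' constrained robust-design criterion: it selects several infill samples per iteration from a Gaussian-process surrogate using expected improvement together with the "Kriging believer" heuristic. The kernel, candidate grid, and toy objective are all assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expected_improvement(mu, sigma, y_best):
    """Standard EI acquisition for minimization."""
    sigma = np.maximum(sigma, 1e-12)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def multi_point_infill(X, y, candidates, q=3):
    """Pick q infill samples per iteration: after each pick, pretend its
    predicted mean was observed and refit (the 'Kriging believer' trick)."""
    X, y = X.copy(), y.copy()
    picks = []
    for _ in range(q):
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                                      normalize_y=True).fit(X, y)
        mu, sd = gp.predict(candidates, return_std=True)
        best = np.argmax(expected_improvement(mu, sd, y.min()))
        picks.append(candidates[best])
        X = np.vstack([X, candidates[best]])
        y = np.append(y, mu[best])          # believed (not real) observation
    return np.array(picks)

# usage on a 1-D toy objective (assumed): f(x) = (x - 0.3)**2
rng = np.random.default_rng(2)
X0 = rng.random((5, 1))
y0 = ((X0 - 0.3) ** 2).ravel()
cand = np.linspace(0, 1, 101).reshape(-1, 1)
print(multi_point_infill(X0, y0, cand).ravel())
```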

  16. Experience Report: Visual Programming in the Real World

    NASA Technical Reports Server (NTRS)

    Baroth, E.; Hartsough, C.

    1994-01-01

    This paper reports direct experience with two commercial, widely used visual programming environments. While neither of these systems is object oriented, the tools have transformed the development process and indicate a direction for visual object oriented tools to proceed.

  17. Perl Modules for Constructing Iterators

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt

    2009-01-01

    The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern in which a description of a series of values is used in a constructor; subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445 style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module, which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values; it is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
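
    Although the modules themselves are written in Perl, the nested-iteration idea behind Iterator::Hash can be sketched with Python generators: every hash value that is itself iterable is expanded, and the iterator yields all permutations of the expanded values. The function and parameter names below are illustrative only, not the modules' API.

```python
import itertools
from datetime import date, timedelta

def date_range(start, end, step_days=7):
    """Generator analogue of a recurrence-driven DateTime iterator."""
    d = start
    while d <= end:
        yield d.isoformat()
        d += timedelta(days=step_days)

def as_options(value):
    """Treat strings and scalars as a single option; expand other iterables."""
    if isinstance(value, str) or not hasattr(value, "__iter__"):
        return [value]
    return list(value)

def hash_iterator(spec):
    """Yield every permutation of the hash's values, expanding any value that
    is itself an iterator/iterable (the nested-iteration idea)."""
    keys = list(spec)
    options = [as_options(v) for v in spec.values()]
    for combo in itertools.product(*options):
        yield dict(zip(keys, combo))

# usage: permute a weekly date series against two processing modes
spec = {"run_date": date_range(date(2009, 1, 1), date(2009, 1, 31)),
        "mode": ["fast", "full"]}
for record in hash_iterator(spec):
    print(record)
```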

  18. On Violence against Objects: A Visual Chord

    ERIC Educational Resources Information Center

    Staley, David J.

    2010-01-01

    "On Violence Against Objects" is best viewed over several minutes; allow the images to go through several iterations in order to see as many juxtapositions as possible.The visual argument of the work emerges as the viewer perceives analogies between the various images.

  19. Possible functions of contextual modulations and receptive field nonlinearities: pop-out and texture segmentation

    PubMed Central

    Schmid, Anita M.; Victor, Jonathan D.

    2014-01-01

    When analyzing a visual image, the brain has to achieve several goals quickly. One crucial goal is to rapidly detect parts of the visual scene that might be behaviorally relevant, while another one is to segment the image into objects, to enable an internal representation of the world. Both of these processes can be driven by local variations in any of several image attributes such as luminance, color, and texture. Here, focusing on texture defined by local orientation, we propose that the two processes are mediated by separate mechanisms that function in parallel. More specifically, differences in orientation can cause an object to “pop out” and attract visual attention, if its orientation differs from that of the surrounding objects. Differences in orientation can also signal a boundary between objects and therefore provide useful information for image segmentation. We propose that contextual response modulations in primary visual cortex (V1) are responsible for orientation pop-out, while a different kind of receptive field nonlinearity in secondary visual cortex (V2) is responsible for orientation-based texture segmentation. We review a recent experiment that led us to put forward this hypothesis along with other research literature relevant to this notion. PMID:25064441

  20. Serial dependence promotes object stability during occlusion

    PubMed Central

    Liberman, Alina; Zhang, Kathy; Whitney, David

    2016-01-01

    Object identities somehow appear stable and continuous over time despite eye movements, disruptions in visibility, and constantly changing visual input. Recent results have demonstrated that the perception of orientation, numerosity, and facial identity is systematically biased (i.e., pulled) toward visual input from the recent past. The spatial region over which current orientations or face identities are pulled by previous orientations or identities, respectively, is known as the continuity field, which is temporally tuned over the past several seconds (Fischer & Whitney, 2014). This perceptual pull could contribute to the visual stability of objects over short time periods, but does it also address how perceptual stability occurs during visual discontinuities? Here, we tested whether the continuity field helps maintain perceived object identity during occlusion. Specifically, we found that the perception of an oriented Gabor that emerged from behind an occluder was significantly pulled toward the random (and unrelated) orientation of the Gabor that was seen entering the occluder. Importantly, this serial dependence was stronger for predictable, continuously moving trajectories, compared to unpredictable ones or static displacements. This result suggests that our visual system takes advantage of expectations about a stable world, helping to maintain perceived object continuity despite interrupted visibility. PMID:28006066

  1. Solid shape discrimination from vision and haptics: natural objects (Capsicum annuum) and Gibson's "feelies".

    PubMed

    Norman, J Farley; Phillips, Flip; Holmin, Jessica S; Norman, Hideko F; Beers, Amanda M; Boswell, Alexandria M; Cheeseman, Jacob R; Stethen, Angela G; Ronning, Cecilia

    2012-10-01

    A set of three experiments evaluated 96 participants' ability to visually and haptically discriminate solid object shape. In the past, some researchers have found haptic shape discrimination to be substantially inferior to visual shape discrimination, while other researchers have found haptics and vision to be essentially equivalent. A primary goal of the present study was to understand these discrepant past findings and to determine the true capabilities of the haptic system. All experiments used the same task (same vs. different shape discrimination) and stimulus objects (James Gibson's "feelies" and a set of naturally shaped objects--bell peppers). However, the methodology varied across experiments. Experiment 1 used random 3-dimensional (3-D) orientations of the stimulus objects, and the conditions were full-cue (active manipulation of objects and rotation of the visual objects in depth). Experiment 2 restricted the 3-D orientations of the stimulus objects and limited the haptic and visual information available to the participants. Experiment 3 compared restricted and full-cue conditions using random 3-D orientations. We replicated both previous findings in the current study. When we restricted visual and haptic information (and placed the stimulus objects in the same orientation on every trial), the participants' visual performance was superior to that obtained for haptics (replicating the earlier findings of Davidson et al. in Percept Psychophys 15(3):539-543, 1974). When the circumstances resembled those of ordinary life (e.g., participants able to actively manipulate objects and see them from a variety of perspectives), we found no significant difference between visual and haptic solid shape discrimination.

  2. Spontaneous in-flight accommodation of hand orientation to unseen grasp targets: A case of action blindsight.

    PubMed

    Prentiss, Emily K; Schneider, Colleen L; Williams, Zoë R; Sahin, Bogachan; Mahon, Bradford Z

    2018-03-15

    The division of labour between the dorsal and ventral visual pathways is well established. The ventral stream supports object identification, while the dorsal stream supports online processing of visual information in the service of visually guided actions. Here, we report a case of an individual with a right inferior quadrantanopia who exhibited accurate spontaneous rotation of his wrist when grasping a target object in his blind visual field. His accurate wrist orientation was observed despite the fact that he exhibited no sensitivity to the orientation of the handle in a perceptual matching task. These findings indicate that non-geniculostriate visual pathways process basic volumetric information relevant to grasping, and reinforce the observation that phenomenal awareness is not necessary for an object's volumetric properties to influence visuomotor performance.

  3. Perceived object stability depends on multisensory estimates of gravity.

    PubMed

    Barnett-Cowan, Michael; Fleming, Roland W; Singh, Manish; Bülthoff, Heinrich H

    2011-04-27

    How does the brain estimate object stability? Objects fall over when the gravity-projected centre-of-mass lies outside the point or area of support. To estimate an object's stability visually, the brain must integrate information across the shape and compare its orientation to gravity. When observers lie on their sides, gravity is perceived as tilted toward body orientation, consistent with a representation of gravity derived from multisensory information. We exploited this to test whether vestibular and kinesthetic information affect this visual task or whether the brain estimates object stability solely from visual information. In three body orientations, participants viewed images of objects close to a table edge. We measured the critical angle at which each object appeared equally likely to fall over or right itself. Perceived gravity was measured using the subjective visual vertical. The results show that the perceived critical angle was significantly biased in the same direction as the subjective visual vertical (i.e., towards the multisensory estimate of gravity). Our results rule out a general explanation that the brain depends solely on visual heuristics and assumptions about object stability. Instead, they suggest that multisensory estimates of gravity govern the perceived stability of objects, resulting in objects appearing more stable than they are when the head is tilted in the same direction in which they fall.
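
    The geometric rule stated at the start of the abstract can be made concrete with a toy 2-D calculation: for a uniform block, the critical tilt is the angle at which the gravity-projected centre of mass reaches the edge of the support base. This is only the physical baseline the observers are judging against, not the paper's multisensory model; the block geometry is assumed.

```python
import math

def critical_angle_deg(half_base, com_height):
    """Tilt angle at which the gravity-projected centre of mass reaches the
    edge of the support base (simplified 2-D block, uniform gravity)."""
    return math.degrees(math.atan2(half_base, com_height))

# a tall block (narrow base, high COM) topples at a smaller tilt than a squat one
print(critical_angle_deg(0.5, 2.0))   # ~14 degrees
print(critical_angle_deg(1.0, 0.5))   # ~63 degrees
```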

  4. Not All Attention Orienting is Created Equal: Recognition Memory is Enhanced When Attention Orienting Involves Distractor Suppression

    PubMed Central

    Markant, Julie; Worden, Michael S.; Amso, Dima

    2015-01-01

    Learning through visual exploration often requires orienting of attention to meaningful information in a cluttered world. Previous work has shown that attention modulates visual cortex activity, with enhanced activity for attended targets and suppressed activity for competing inputs, thus enhancing the visual experience. Here we examined the idea that learning may be engaged differentially with variations in the attention orienting mechanisms that drive eye movements during visual search and exploration. We hypothesized that attention orienting mechanisms that engaged suppression of a previously attended location would boost memory encoding of the currently attended target objects to a greater extent than those that involve target enhancement alone. To test this hypothesis we capitalized on the classic spatial cueing task and the inhibition of return (IOR) mechanism (Posner, Rafal, & Choate, 1985; Posner, 1980) to demonstrate that object images encoded in the context of concurrent suppression at a previously attended location were encoded more effectively and remembered better than those encoded without concurrent suppression. Furthermore, fMRI analyses revealed that this memory benefit was driven by attention modulation of visual cortex activity, as increased suppression of the previously attended location in visual cortex during target object encoding predicted better subsequent recognition memory performance. These results suggest that not all attention orienting impacts learning and memory equally. PMID:25701278

  5. Object-oriented technologies in a multi-mission data system

    NASA Technical Reports Server (NTRS)

    Murphy, Susan C.; Miller, Kevin J.; Louie, John J.

    1993-01-01

    The Operations Engineering Laboratory (OEL) at JPL is developing new technologies that can provide more efficient and productive ways of doing business in flight operations. Over the past three years, we have worked closely with the Multi-Mission Control Team to develop automation tools, providing technology transfer into operations and resulting in substantial cost savings and error reduction. The OEL development philosophy is characterized by object-oriented design, extensive reusability of code, and an iterative development model with active participation of the end users. Through our work, the benefits of object-oriented design became apparent for use in mission control data systems. Object-oriented technologies and how they can be used in a mission control center to improve efficiency and productivity are explained. The current research and development efforts in the JPL Operations Engineering Laboratory are also discussed to architect and prototype a new paradigm for mission control operations based on object-oriented concepts.

  6. Orientation priming of grasping decision for drawings of objects and blocks, and words.

    PubMed

    Chainay, Hanna; Naouri, Lucie; Pavec, Alice

    2011-05-01

    This study tested the influence of orientation priming on grasping decisions. Two groups of 20 healthy participants had to select a preferred grasping orientation (horizontal, vertical) based on drawings of everyday objects, geometric blocks or object names. Three priming conditions were used: congruent, incongruent and neutral. The facilitating effects of priming were observed in the grasping decision task for drawings of objects and blocks but not object names. The visual information about congruent orientation in the prime quickened participants' responses but had no effect on response accuracy. The results are discussed in the context of the hypothesis that an object automatically potentiates grasping associated with it, and that the on-line visual information is necessary for grasping potentiation to occur. The possibility that the most frequent orientation of familiar objects might be included in object-action representation is also discussed.

  7. Storage of features, conjunctions and objects in visual working memory.

    PubMed

    Vogel, E K; Woodman, G F; Luck, S J

    2001-02-01

    Working memory can be divided into separate subsystems for verbal and visual information. Although the verbal system has been well characterized, the storage capacity of visual working memory has not yet been established for simple features or for conjunctions of features. The authors demonstrate that it is possible to retain information about only 3-4 colors or orientations in visual working memory at one time. Observers are also able to retain both the color and the orientation of 3-4 objects, indicating that visual working memory stores integrated objects rather than individual features. Indeed, objects defined by a conjunction of four features can be retained in working memory just as well as single-feature objects, allowing many individual features to be retained when distributed across a small number of objects. Thus, the capacity of visual working memory must be understood in terms of integrated objects rather than individual features.

  8. BlueJ Visual Debugger for Learning the Execution of Object-Oriented Programs?

    ERIC Educational Resources Information Center

    Bennedsen, Jens; Schulte, Carsten

    2010-01-01

    This article reports on an experiment undertaken in order to evaluate the effect of a program visualization tool for helping students to better understand the dynamics of object-oriented programs. The concrete tool used was BlueJ's debugger and object inspector. The study was done as a control-group experiment in an introductory programming…

  9. The effects of short-term and long-term learning on the responses of lateral intraparietal neurons to visually presented objects

    PubMed Central

    Sigurdardottir, Heida M.; Sheinberg, David L.

    2015-01-01

    The lateral intraparietal area (LIP) of the dorsal visual stream is thought to play an important role in visually directed orienting, or the guidance of where to look and pay attention. LIP can also respond selectively to differently shaped objects. We sought to understand how and to what extent short-term and long-term experience with visual orienting can determine the nature of responses of LIP neurons to objects of different shapes. We taught monkeys to arbitrarily associate centrally presented objects of various shapes with orienting either toward or away from a preferred peripheral spatial location of a neuron. For some objects the training lasted for less than a single day, while for other objects the training lasted for several months. We found that neural responses to visual objects are affected both by such short-term and long-term experience, but that the length of the learning period determines exactly how this neural plasticity manifests itself. Short-term learning over the course of a single training session affects neural responses to objects, but these effects are only seen relatively late after visual onset; at this time, the neural responses to newly learned objects start to resemble those of familiar over-learned objects that share their meaning or arbitrary association. Long-term learning, on the other hand, affects the earliest and apparently bottom-up responses to visual objects. These responses tend to be greater for objects that have repeatedly been associated with looking toward, rather than away from, LIP neurons’ preferred spatial locations. Responses to objects can nonetheless be distinct even though the objects have both been similarly acted on in the past and will lead to the same orienting behavior in the future. Our results therefore also indicate that a complete experience-driven override of LIP object responses is difficult or impossible. PMID:25633647

  10. Multiprocessor smalltalk: Implementation, performance, and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pallas, J.I.

    1990-01-01

    Multiprocessor Smalltalk demonstrates the value of object-oriented programming on a multiprocessor. Its implementation and analysis shed light on three areas: concurrent programming in an object-oriented language without special extensions, implementation techniques for adapting to multiprocessors, and performance factors in the resulting system. Adding parallelism to Smalltalk code is easy, because programs already use control abstractions like iterators. Smalltalk's basic control and concurrency primitives (lambda expressions, processes and semaphores) can be used to build parallel control abstractions, including parallel iterators, parallel objects, atomic objects, and futures. Language extensions for concurrency are not required. This implementation demonstrates that it is possible to build an efficient parallel object-oriented programming system and illustrates techniques for doing so. Three modification tools (serialization, replication, and reorganization) adapted the Berkeley Smalltalk interpreter to the Firefly multiprocessor. Multiprocessor Smalltalk's performance shows that the combination of multiprocessing and object-oriented programming can be effective: speedups (relative to the original serial version) exceed 2.0 for five processors on all the benchmarks; the median efficiency is 48%. Analysis shows both where performance is lost and how to improve and generalize the experimental results. Changes in the interpreter to support concurrency add at most 12% overhead; better access to per-process variables could eliminate much of that. Changes in the user code to express concurrency add as much as 70% overhead; this overhead could be reduced to 54% if blocks (lambda expressions) were reentrant. Performance is also lost when the program cannot keep all five processors busy.

  11. An iterated local search algorithm for the team orienteering problem with variable profits

    NASA Astrophysics Data System (ADS)

    Gunawan, Aldy; Ng, Kien Ming; Kendall, Graham; Lai, Junhan

    2018-07-01

    The orienteering problem (OP) is a routing problem that has numerous applications in various domains such as logistics and tourism. The objective is to determine a subset of vertices to visit for a vehicle so that the total collected score is maximized and a given time budget is not exceeded. The extensive application of the OP has led to many different variants, including the team orienteering problem (TOP) and the team orienteering problem with time windows. The TOP extends the OP by considering multiple vehicles. In this article, the team orienteering problem with variable profits (TOPVP) is studied. The main characteristic of the TOPVP is that the amount of score collected from a visited vertex depends on the duration of stay on that vertex. A mathematical programming model for the TOPVP is first presented and an algorithm based on iterated local search (ILS) that is able to solve modified benchmark instances is then proposed. It is concluded that ILS produces solutions which are comparable to those obtained by the commercial solver CPLEX for smaller instances. For the larger instances, ILS obtains good-quality solutions that have significantly better objective value than those found by CPLEX under reasonable computational times.
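
    For readers unfamiliar with ILS, the skeleton below shows the generic improve-perturb-accept loop on a toy select-at-most-k-vertices problem; the neighbourhood moves, scores, and budget are invented stand-ins, not the TOPVP operators or benchmark instances from the article.

```python
import random

def iterated_local_search(init, local_search, perturb, score, n_iter=100, seed=0):
    """Generic ILS skeleton: improve, perturb, re-improve, keep the best."""
    rng = random.Random(seed)
    best = current = local_search(init)
    for _ in range(n_iter):
        candidate = local_search(perturb(current, rng))
        if score(candidate) >= score(current):   # accept non-worsening solutions
            current = candidate
        if score(current) > score(best):
            best = current
    return best

# toy instance (assumed): pick at most `budget` vertices maximizing summed score
scores = [4, 7, 1, 9, 3, 8, 2]
budget = 3

def score(sol):
    return sum(scores[i] for i in sol) if len(sol) <= budget else -1

def local_search(sol):
    """Greedy 1-flip improvement: toggle any vertex that raises the score."""
    sol, improved = set(sol), True
    while improved:
        improved = False
        for i in range(len(scores)):
            cand = sol ^ {i}
            if score(cand) > score(sol):
                sol, improved = cand, True
    return sol

def perturb(sol, rng):
    """Randomly toggle two vertices to escape the current local optimum."""
    sol = set(sol)
    for i in rng.sample(range(len(scores)), 2):
        sol ^= {i}
    return sol

print(sorted(iterated_local_search(set(), local_search, perturb, score)))  # best subset found
```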

  12. Not all attention orienting is created equal: recognition memory is enhanced when attention orienting involves distractor suppression.

    PubMed

    Markant, Julie; Worden, Michael S; Amso, Dima

    2015-04-01

    Learning through visual exploration often requires orienting of attention to meaningful information in a cluttered world. Previous work has shown that attention modulates visual cortex activity, with enhanced activity for attended targets and suppressed activity for competing inputs, thus enhancing the visual experience. Here we examined the idea that learning may be engaged differentially with variations in attention orienting mechanisms that drive eye movements during visual search and exploration. We hypothesized that attention orienting mechanisms that engaged suppression of a previously attended location would boost memory encoding of the currently attended target objects to a greater extent than those that involve target enhancement alone. To test this hypothesis we capitalized on the classic spatial cueing task and the inhibition of return (IOR) mechanism (Posner, 1980; Posner, Rafal, & Choate, 1985) to demonstrate that object images encoded in the context of concurrent suppression at a previously attended location were encoded more effectively and remembered better than those encoded without concurrent suppression. Furthermore, fMRI analyses revealed that this memory benefit was driven by attention modulation of visual cortex activity, as increased suppression of the previously attended location in visual cortex during target object encoding predicted better subsequent recognition memory performance. These results suggest that not all attention orienting impacts learning and memory equally.

  13. Model Cortical Association Fields Account for the Time Course and Dependence on Target Complexity of Human Contour Perception

    PubMed Central

    Gintautas, Vadas; Ham, Michael I.; Kunsberg, Benjamin; Barr, Shawn; Brumby, Steven P.; Rasmussen, Craig; George, John S.; Nemenman, Ilya; Bettencourt, Luís M. A.; Kenyon, Garret T.

    2011-01-01

    Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans was measured using a two-alternative forced choice protocol with self-reported confidence and variable image presentation time (20-200 ms), followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and the computational results suggests that each iteration of the lateral interactions takes at least ms of cortical processing time. Our results provide evidence that cortical association fields between orientation selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas. PMID:21998562
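
    One way to extract a time constant like those reported above is to fit accuracy versus presentation time with a saturating-exponential psychometric form; the sketch below does this with scipy on synthetic 2AFC accuracies. Both the functional form and the data points are assumptions for illustration, not the study's fits.

```python
import numpy as np
from scipy.optimize import curve_fit

def sat_exp(t, tau, p_max):
    """Assumed form: accuracy rises from chance (0.5) toward p_max with
    exponential time constant tau (ms)."""
    return 0.5 + (p_max - 0.5) * (1 - np.exp(-t / tau))

# synthetic 2AFC accuracies at presentation times spanning the 20-200 ms range
t = np.array([20, 40, 60, 80, 100, 150, 200], dtype=float)
acc = np.array([0.60, 0.68, 0.75, 0.80, 0.84, 0.88, 0.89])
(tau, p_max), _ = curve_fit(sat_exp, t, acc, p0=[50.0, 0.9])
print(f"time constant = {tau:.0f} ms, asymptote = {p_max:.2f}")
```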

  14. Enhancing Problem-Solving Capabilities Using Object-Oriented Programming Language

    ERIC Educational Resources Information Center

    Unuakhalu, Mike F.

    2009-01-01

    This study integrated object-oriented programming instruction with transfer training activities in everyday tasks, which might provide a mechanism that can be used for efficient problem solving. Specifically, a group that received Visual BASIC instruction embedded with everyday tasks was compared to another group exposed to Visual BASIC instruction only. Subjects were 40…

  15. Learning to See the Infinite: Measuring Visual Literacy Skills in a 1st-Year Seminar Course

    ERIC Educational Resources Information Center

    Palmer, Michael S.; Matthews, Tatiana

    2015-01-01

    Visual literacy was a stated learning objective for the fall 2009 iteration of a first-year seminar course. To help students develop visual literacy skills, they received formal instruction throughout the semester and completed a series of carefully designed learning activities. The effects of these interventions were measured using a one-group…

  16. Overt attention toward oriented objects in free-viewing barn owls.

    PubMed

    Harmening, Wolf Maximilian; Orlowski, Julius; Ben-Shahar, Ohad; Wagner, Hermann

    2011-05-17

    Visual saliency based on orientation contrast is a perceptual product attributed to the functional organization of the mammalian brain. We examined this visual phenomenon in barn owls by mounting a wireless video microcamera on the owls' heads and confronting them with visual scenes that contained one differently oriented target among similarly oriented distracters. Without being confined by any particular task, the owls looked significantly longer, more often, and earlier at the target, thus exhibiting visual search strategies so far demonstrated in similar conditions only in primates. Given the considerable differences in phylogeny and the structure of visual pathways between owls and humans, these findings suggest that orientation saliency has computational optimality in a wide variety of ecological contexts, and thus constitutes a universal building block for efficient visual information processing in general.

  17. Three-Dimensional Registration for Handheld Profiling Systems Based on Multiple Shot Structured Light

    PubMed Central

    Ayaz, Shirazi Muhammad; Kim, Min Young

    2018-01-01

    In this article, a multi-view registration approach for the 3D handheld profiling system based on the multiple shot structured light technique is proposed. The multi-view registration approach is categorized into coarse registration and point cloud refinement using the iterative closest point (ICP) algorithm. Coarse registration of multiple point clouds was performed using relative orientation and translation parameters estimated via homography-based visual navigation. The proposed system was evaluated using an artificial human skull and a paper box object. For the quantitative evaluation of the accuracy of a single 3D scan, a paper box was reconstructed, and the mean errors in its height and breadth were found to be 9.4 μm and 23 μm, respectively. A comprehensive quantitative evaluation and comparison of the proposed algorithm with other variants of ICP was performed. The root mean square error for the ICP algorithm to register a pair of point clouds of the skull object was also found to be less than 1 mm. PMID:29642552

  18. Iterative Assessment of Statistically-Oriented and Standard Algorithms for Determining Muscle Onset with Intramuscular Electromyography.

    PubMed

    Tenan, Matthew S; Tweedell, Andrew J; Haynes, Courtney A

    2017-12-01

    The onset of muscle activity, as measured by electromyography (EMG), is a commonly applied metric in biomechanics. Intramuscular EMG is often used to examine deep musculature and there are currently no studies examining the effectiveness of algorithms for intramuscular EMG onset. The present study examines standard surface EMG onset algorithms (linear envelope, Teager-Kaiser Energy Operator, and sample entropy) and novel algorithms (time series mean-variance analysis, sequential/batch processing with parametric and nonparametric methods, and Bayesian changepoint analysis). Thirteen male and 5 female subjects had intramuscular EMG collected during isolated biceps brachii and vastus lateralis contractions, resulting in 103 trials. EMG onset was visually determined twice by 3 blinded reviewers. Since the reliability of visual onset was high (ICC (1,1) : 0.92), the mean of the 6 visual assessments was contrasted with the algorithmic approaches. Poorly performing algorithms were stepwise eliminated via (1) root mean square error analysis, (2) algorithm failure to identify onset/premature onset, (3) linear regression analysis, and (4) Bland-Altman plots. The top performing algorithms were all based on Bayesian changepoint analysis of rectified EMG and were statistically indistinguishable from visual analysis. Bayesian changepoint analysis has the potential to produce more reliable, accurate, and objective intramuscular EMG onset results than standard methodologies.
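
    A drastically simplified stand-in for the changepoint idea (maximum-likelihood rather than the full Bayesian posterior, and a single changepoint only) is sketched below on synthetic rectified EMG: the onset estimate is the split point that best explains the series as two Gaussian segments with different variance.

```python
import numpy as np

def single_changepoint(x, min_seg=10):
    """Maximum-likelihood single changepoint for a Gaussian model whose mean
    and variance differ before vs. after onset (simplified, not the full
    Bayesian analysis used in the study)."""
    def cost(seg):
        # up to constants, the negative log-likelihood of a Gaussian MLE fit
        return 0.5 * len(seg) * np.log(max(np.var(seg), 1e-12))
    n = len(x)
    costs = [cost(x[:k]) + cost(x[k:]) for k in range(min_seg, n - min_seg)]
    return min_seg + int(np.argmin(costs))

# synthetic rectified EMG: baseline noise, then a burst starting at sample 500
rng = np.random.default_rng(3)
emg = np.abs(np.concatenate([rng.normal(0, 0.05, 500), rng.normal(0, 0.5, 500)]))
print(single_changepoint(emg))   # close to 500
```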

  19. Bayesian integration of position and orientation cues in perception of biological and non-biological forms.

    PubMed

    Thurman, Steven M; Lu, Hongjing

    2014-01-01

    Visual form analysis is fundamental to shape perception and likely plays a central role in perception of more complex dynamic shapes, such as moving objects or biological motion. Two primary form-based cues serve to represent the overall shape of an object: the spatial position and the orientation of locations along the boundary of the object. However, it is unclear how the visual system integrates these two sources of information in dynamic form analysis, and in particular how the brain resolves ambiguities due to sensory uncertainty and/or cue conflict. In the current study, we created animations of sparsely-sampled dynamic objects (human walkers or rotating squares) comprised of oriented Gabor patches in which orientation could either coincide or conflict with information provided by position cues. When the cues were incongruent, we found a characteristic trade-off between position and orientation information whereby position cues increasingly dominated perception as the relative uncertainty of orientation increased and vice versa. Furthermore, we found no evidence for differences in the visual processing of biological and non-biological objects, casting doubt on the claim that biological motion may be specialized in the human brain, at least in specific terms of form analysis. To explain these behavioral results quantitatively, we adopt a probabilistic template-matching model that uses Bayesian inference within local modules to estimate object shape separately from either spatial position or orientation signals. The outputs of the two modules are integrated with weights that reflect individual estimates of subjective cue reliability, and integrated over time to produce a decision about the perceived dynamics of the input data. Results of this model provided a close fit to the behavioral data, suggesting a mechanism in the human visual system that approximates rational Bayesian inference to integrate position and orientation signals in dynamic form analysis.
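
    The reliability-weighted integration described above reduces, for two independent Gaussian cues to the same quantity, to the standard inverse-variance combination rule sketched below; the numbers are illustrative, and the paper's actual model is a template-matching architecture rather than this bare formula.

```python
import numpy as np

def integrate_cues(mu_pos, var_pos, mu_ori, var_ori):
    """Reliability-weighted (inverse-variance) cue combination: the standard
    Bayesian rule for two independent Gaussian cues to the same quantity."""
    w_pos = (1 / var_pos) / (1 / var_pos + 1 / var_ori)
    mu = w_pos * mu_pos + (1 - w_pos) * mu_ori
    var = 1 / (1 / var_pos + 1 / var_ori)
    return mu, var

# as orientation uncertainty grows, the estimate is pulled toward the position cue
print(integrate_cues(10.0, 1.0, 20.0, 1.0))   # equal reliability: midpoint, 15
print(integrate_cues(10.0, 1.0, 20.0, 9.0))   # noisy orientation: ~11, position dominates
```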

  20. The TINS Lecture. The parietal association cortex in depth perception and visual control of hand action.

    PubMed

    Sakata, H; Taira, M; Kusunoki, M; Murata, A; Tanaka, Y

    1997-08-01

    Recent neurophysiological studies in alert monkeys have revealed that the parietal association cortex plays a crucial role in depth perception and visually guided hand movement. The following five classes of parietal neurons covering various aspects of these functions have been identified: (1) depth-selective visual-fixation (VF) neurons of the inferior parietal lobule (IPL), representing egocentric distance; (2) depth-movement sensitive (DMS) neurons of V5A and the ventral intraparietal (VIP) area representing direction of linear movement in 3-D space; (3) depth-rotation-sensitive (RS) neurons of V5A and the posterior parietal (PP) area representing direction of rotary movement in space; (4) visually responsive manipulation-related neurons (visual-dominant or visual-and-motor type) of the anterior intraparietal (AIP) area, representing 3-D shape or orientation (or both) of objects for manipulation; and (5) axis-orientation-selective (AOS) and surface-orientation-selective (SOS) neurons in the caudal intraparietal sulcus (cIPS) sensitive to binocular disparity and representing the 3-D orientation of the longitudinal axes and flat surfaces, respectively. Some AOS and SOS neurons are selective in both orientation and shape. Thus the dorsal visual pathway is divided into at least two subsystems, V5A, PP and VIP areas for motion vision and V6, LIP and cIPS areas for coding position and 3-D features. The cIPS sends the signals of 3-D features of objects to the AIP area, which is reciprocally connected to the ventral premotor (F5) area and plays an essential role in matching hand orientation and shaping with 3-D objects for manipulation.

  1. Orienting Attention to Sound Object Representations Attenuates Change Deafness

    ERIC Educational Resources Information Center

    Backer, Kristina C.; Alain, Claude

    2012-01-01

    According to the object-based account of attention, multiple objects coexist in short-term memory (STM), and we can selectively attend to a particular object of interest. Although there is evidence that attention can be directed to visual object representations, the assumption that attention can be oriented to sound object representations has yet…

  2. OOPs!

    ERIC Educational Resources Information Center

    Margush, Tim

    2001-01-01

    Discussion of Object Oriented Programming (OOP) focuses on criticism of an earlier article that addressed problems of applying specific functionality to controls across several forms in a Visual Basic project. Examines the Object Oriented techniques, inheritance and composition, commonly employed to extend the functionality of an object.…

  3. Average Orientation Is More Accessible through Object Boundaries than Surface Features

    ERIC Educational Resources Information Center

    Choo, Heeyoung; Levinthal, Brian R.; Franconeri, Steven L.

    2012-01-01

    In a glance, the visual system can provide a summary of some kinds of information about objects in a scene. We explore how summary information about "orientation" is extracted and find that some representations of orientation are privileged over others. Participants judged the average orientation of either a set of 6 bars or 6 circular…

  4. The Visual Representation of 3D Object Orientation in Parietal Cortex

    PubMed Central

    Cowan, Noah J.; Angelaki, Dora E.

    2013-01-01

    An accurate representation of three-dimensional (3D) object orientation is essential for interacting with the environment. Where and how the brain visually encodes 3D object orientation remains unknown, but prior studies suggest the caudal intraparietal area (CIP) may be involved. Here, we develop rigorous analytical methods for quantifying 3D orientation tuning curves, and use these tools to study the neural coding of surface orientation. Specifically, we show that single neurons in area CIP of the rhesus macaque jointly encode the slant and tilt of a planar surface, and that across the population, the distribution of preferred slant-tilts is not statistically different from uniform. This suggests that all slant-tilt combinations are equally represented in area CIP. Furthermore, some CIP neurons are found to also represent the third rotational degree of freedom that determines the orientation of the image pattern on the planar surface. Together, the present results suggest that CIP is a critical neural locus for the encoding of all three rotational degrees of freedom specifying an object's 3D spatial orientation. PMID:24305830

  5. The effects of short-term and long-term learning on the responses of lateral intraparietal neurons to visually presented objects.

    PubMed

    Sigurdardottir, Heida M; Sheinberg, David L

    2015-07-01

    The lateral intraparietal area (LIP) is thought to play an important role in the guidance of where to look and pay attention. LIP can also respond selectively to differently shaped objects. We sought to understand to what extent short-term and long-term experience with visual orienting determines the responses of LIP to objects of different shapes. We taught monkeys to arbitrarily associate centrally presented objects of various shapes with orienting either toward or away from a preferred spatial location of a neuron. The training could last for less than a single day or for several months. We found that neural responses to objects are affected by such experience, but that the length of the learning period determines how this neural plasticity manifests. Short-term learning affects neural responses to objects, but these effects are only seen relatively late after visual onset; at this time, the responses to newly learned objects resemble those of familiar objects that share their meaning or arbitrary association. Long-term learning affects the earliest bottom-up responses to visual objects. These responses tend to be greater for objects that have been associated with looking toward, rather than away from, LIP neurons' preferred spatial locations. Responses to objects can nonetheless be distinct, although they have been similarly acted on in the past and will lead to the same orienting behavior in the future. Our results therefore indicate that a complete experience-driven override of LIP object responses may be difficult or impossible. We relate these results to behavioral work on visual attention.

  6. Visualizing multiattribute Web transactions using a freeze technique

    NASA Astrophysics Data System (ADS)

    Hao, Ming C.; Cotting, Daniel; Dayal, Umeshwar; Machiraju, Vijay; Garg, Pankaj

    2003-05-01

    Web transactions are multidimensional and have a number of attributes: client, URL, response times, and numbers of messages. One of the key questions is how to simultaneously lay out in a graph the multiple relationships, such as the relationships between the web client response times and URLs in a web access application. In this paper, we describe a freeze technique to enhance a physics-based visualization system for web transactions. The idea is to freeze one set of objects before laying out the next set of objects during the construction of the graph. As a result, we substantially reduce the force computation time. This technique consists of three steps: automated classification, a freeze operation, and a graph layout. These three steps are iterated until the final graph is generated. This iterated-freeze technique has been prototyped in several e-service applications at Hewlett Packard Laboratories. It has been used to visually analyze large volumes of service and sales transactions at online web sites.
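
    The freeze operation can be pictured as pinning the coordinates of already-placed nodes so that each new batch only incurs force computations for its own members. The sketch below illustrates that idea with a single repulsion-plus-spring model; it is not the authors' implementation, and all names and values are invented for the example.

```python
import numpy as np

def layout_batch(pos, edges, new_idx, iters=200, step=0.01):
    """Place a new batch of nodes while all previously laid-out nodes stay frozen.

    pos     -- (n, 2) node coordinates; nodes not in new_idx are 'frozen' and
               keep the positions computed in earlier batches
    edges   -- list of (i, j) index pairs treated as attractive springs
    new_idx -- indices of the nodes being placed in this batch

    Only new nodes move, so the per-iteration force computation scales with
    the batch size rather than with the whole graph (the freeze idea).
    """
    new_idx = set(new_idx)
    for _ in range(iters):
        for i in new_idx:
            delta = pos[i] - pos                             # vectors from every node to node i
            dist2 = (delta ** 2).sum(axis=1) + 1e-9
            force = (delta / dist2[:, None]).sum(axis=0)     # simple 1/d repulsion from all nodes
            for a, b in edges:                               # spring attraction along edges of node i
                if i == a:
                    force -= (pos[a] - pos[b])
                elif i == b:
                    force -= (pos[b] - pos[a])
            pos[i] = pos[i] + step * force                   # frozen nodes exert force but never move
    return pos

# Two frozen nodes and one new node attached to both of them.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.3, 0.4]])
print(layout_batch(pts, edges=[(0, 2), (1, 2)], new_idx=[2]))
```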

  7. Background Oriented Schlieren Using Celestial Objects

    NASA Technical Reports Server (NTRS)

    Haering, Edward, A., Jr. (Inventor); Hill, Michael A (Inventor)

    2017-01-01

    The present invention is a system and method of visualizing fluid flow around an object, such as an aircraft or wind turbine, by aligning the object between an imaging system and a celestial object having a speckled background, taking images, and comparing those images to obtain fluid flow visualization.

  8. Visualization: a tool for enhancing students' concept images of basic object-oriented concepts

    NASA Astrophysics Data System (ADS)

    Cetin, Ibrahim

    2013-03-01

    The purpose of this study was twofold: to investigate students' concept images about class, object, and their relationship and to help them enhance their learning of these notions with a visualization tool. Fifty-six second-year university students participated in the study. To investigate his/her concept images, the researcher developed a survey including open-ended questions, which was administered to the participants. Follow-up interviews with 12 randomly selected students were conducted to explore their answers to the survey in depth. The results of the first part of the research were utilized to construct visualization scenarios. The students used these scenarios to develop animations using Flash software. The study found that most of the students experienced difficulties in learning object-oriented notions. Overdependence on code-writing practice and examples and incorrectly learned analogies were determined to be the sources of their difficulties. Moreover, visualization was found to be a promising approach in facilitating students' concept images of basic object-oriented notions. The results of this study have implications for researchers and practitioners when designing programming instruction.

  9. Virtual environment display for a 3D audio room simulation

    NASA Astrophysics Data System (ADS)

    Chapin, William L.; Foster, Scott

    1992-06-01

    Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with 4 audio Convolvotrons™ by Crystal River Engineering and coupled to the listener with a Polhemus Isotrak™, tracking the listener's head position and orientation, and stereo headphones returning binaural sound, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment, complementing the acoustic model and specified to: 1) allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence into the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology transfer demonstration. The hardware/software designs of two demonstration systems, one installed and one portable, are discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted wide-angle, stereo-optic display, separate head and pointer electro-magnetic position trackers, a heterogeneous parallel graphics processing system, and object-oriented C++ program code.

  10. Supervised guiding long-short term memory for image caption generation based on object classes

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Cao, Zhiguo; Xiao, Yang; Qi, Xinyuan

    2018-03-01

    Present models of image caption generation suffer from attenuation of the image's visual semantic information and from errors in the guidance information. To address these problems, we propose a supervised guiding Long Short Term Memory model based on object classes, named S-gLSTM for short. It uses the object detection results from R-FCN as high-confidence supervisory information and updates the guidance word set by judging whether the last output matches the supervisory information. S-gLSTM learns how to extract the currently relevant information from the image's visual semantic information based on the guidance word set. This information is fed into the S-gLSTM at each iteration as guidance to steer caption generation. To acquire text-related visual semantic information, the S-gLSTM fine-tunes the network weights through back-propagation of the guiding loss. Complementing the guidance information at each iteration solves the problem of visual semantic information attenuation in the traditional LSTM model. In addition, the supervised guidance information in our model reduces the impact of mismatched words on caption generation. We test our model on the MSCOCO2014 dataset and obtain better performance than state-of-the-art models.

  11. A comparison of visuomotor cue integration strategies for object placement and prehension.

    PubMed

    Greenwald, Hal S; Knill, David C

    2009-01-01

    Visual cue integration strategies are known to depend on cue reliability and how rapidly the visual system processes incoming information. We investigated whether these strategies also depend on differences in the information demands for different natural tasks. Using two common goal-oriented tasks, prehension and object placement, we determined whether monocular and binocular information influence estimates of three-dimensional (3D) orientation differently depending on task demands. Both tasks rely on accurate 3D orientation estimates, but 3D position is potentially more important for grasping. Subjects placed an object on or picked up a disc in a virtual environment. On some trials, the monocular cues (aspect ratio and texture compression) and binocular cues (e.g., binocular disparity) suggested slightly different 3D orientations for the disc; these conflicts either were present upon initial stimulus presentation or were introduced after movement initiation, which allowed us to quantify how information from the cues accumulated over time. We analyzed the time-varying orientations of subjects' fingers in the grasping task and those of the object in the object placement task to quantify how different visual cues influenced motor control. In the first experiment, different subjects performed each task, and those performing the grasping task relied on binocular information more when orienting their hands than those performing the object placement task. When subjects in the second experiment performed both tasks in interleaved sessions, binocular cues were still more influential during grasping than object placement, and the different cue integration strategies observed for each task in isolation were maintained. In both experiments, the temporal analyses showed that subjects processed binocular information faster than monocular information, but task demands did not affect the time course of cue processing. How one uses visual cues for motor control depends on the task being performed, although how quickly the information is processed appears to be task invariant.

  12. Functional implications of orientation maps in primary visual cortex

    NASA Astrophysics Data System (ADS)

    Koch, Erin; Jin, Jianzhong; Alonso, Jose M.; Zaidi, Qasim

    2016-11-01

    Stimulus orientation in the primary visual cortex of primates and carnivores is mapped as iso-orientation domains radiating from pinwheel centres, where orientation preferences of neighbouring cells change circularly. Whether this orientation map has a function is currently debated, because many mammals, such as rodents, do not have such maps. Here we show that two fundamental properties of visual cortical responses, contrast saturation and cross-orientation suppression, are stronger within cat iso-orientation domains than at pinwheel centres. These differences develop when excitation (not normalization) from neighbouring oriented neurons is applied to different cortical orientation domains and then balanced by inhibition from un-oriented neurons. The functions of the pinwheel mosaic emerge from these local intra-cortical computations: Narrower tuning, greater cross-orientation suppression and higher contrast gain of iso-orientation cells facilitate extraction of object contours from images, whereas broader tuning, greater linearity and less suppression of pinwheel cells generate selectivity for surface patterns and textures.

  13. A Survey of Parents of Children with Cortical or Cerebral Visual Impairment

    ERIC Educational Resources Information Center

    Jackel, Bernadette; Wilson, Michelle; Hartmann, Elizabeth

    2010-01-01

    Cortical or cerebral visual impairment (CVI) can result when the visual pathways and visual processing areas of the brain have been damaged. Children with CVI may have difficulty finding an object among other objects, viewing in the distance, orienting themselves in space, going from grass to pavement or other changes in surface, and copying…

  14. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
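
    The object-oriented abstraction described here separates the iterative method from the simulation model behind a common interface. The sketch below is purely illustrative of that pattern and is not Dakota's actual API; the class names and the toy optimizer are invented for the example.

```python
from abc import ABC, abstractmethod

class Model(ABC):
    """Abstract simulation interface: maps design variables to a response."""
    @abstractmethod
    def evaluate(self, x): ...

class Iterator(ABC):
    """Abstract iterative method (optimizer, sampler, ...) driving a Model."""
    def __init__(self, model: Model):
        self.model = model
    @abstractmethod
    def run(self, x0): ...

class CoordinateDescent(Iterator):
    """Toy gradient-free optimizer showing how a concrete Iterator plugs in."""
    def run(self, x0, step=0.1, iters=50):
        x = list(x0)
        best = self.model.evaluate(x)
        for _ in range(iters):
            for i in range(len(x)):
                for s in (+step, -step):
                    trial = x[:]
                    trial[i] += s
                    val = self.model.evaluate(trial)
                    if val < best:
                        best, x = val, trial
        return x, best

class Quadratic(Model):
    """Stand-in for a simulation code: minimum at x = (1, 1)."""
    def evaluate(self, x):
        return sum((xi - 1.0) ** 2 for xi in x)

print(CoordinateDescent(Quadratic()).run([0.0, 0.0]))
```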

  15. An object-oriented framework for medical image registration, fusion, and visualization.

    PubMed

    Zhu, Yang-Ming; Cochoff, Steven M

    2006-06-01

    An object-oriented framework for image registration, fusion, and visualization was developed based on the classic model-view-controller paradigm. The framework employs many design patterns to facilitate legacy code reuse, manage software complexity, and enhance the maintainability and portability of the framework. Three sample applications built atop this framework are illustrated to show the effectiveness of this framework: the first one is for volume image grouping and re-sampling, the second one is for 2D registration and fusion, and the last one is for visualization of single images as well as registered volume images.

  16. Parietal and frontal object areas underlie perception of object orientation in depth.

    PubMed

    Niimi, Ryosuke; Saneyoshi, Ayako; Abe, Reiko; Kaminaga, Tatsuro; Yokosawa, Kazuhiko

    2011-05-27

    Recent studies have shown that the human parietal and frontal cortices are involved in object image perception. We hypothesized that the parietal/frontal object areas play a role in differentiating the orientations (i.e., views) of an object. By using functional magnetic resonance imaging, we compared brain activations while human observers differentiated between two object images in depth-orientation (orientation task) and activations while they differentiated the images in object identity (identity task). The left intraparietal area, right angular gyrus, and right inferior frontal areas were activated more for the orientation task than for the identity task. The occipitotemporal object areas, however, were activated equally for the two tasks. No region showed greater activation for the identity task. These results suggested that the parietal/frontal object areas encode view-dependent visual features and underlie object orientation perception. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  17. Mark Tracking: Position/orientation measurements using 4-circle mark and its tracking experiments

    NASA Technical Reports Server (NTRS)

    Kanda, Shinji; Okabayashi, Keijyu; Maruyama, Tsugito; Uchiyama, Takashi

    1994-01-01

    Future space robots require position and orientation tracking with visual feedback control to track and capture floating objects and satellites. We developed a four-circle mark that is useful for this purpose. With this mark, four geometric center positions as feature points can be extracted from the mark by simple image processing. We also developed a position and orientation measurement method that uses the four feature points in our mark. The mark gave good enough image measurement accuracy to let space robots approach and contact objects. A visual feedback control system using this mark enabled a robot arm to track a target object accurately. The control system was able to tolerate a time delay of 2 seconds.
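
    Given the four extracted feature points and a calibrated camera, the position and orientation of the mark can be recovered with a standard perspective-n-point solution. The sketch below uses OpenCV's solvePnP as a stand-in for the authors' own measurement method, which the record does not detail; the mark dimensions, pixel coordinates, and camera intrinsics are made-up example values.

```python
import numpy as np
import cv2

# Known 3-D layout of the four circle centres on the mark, in metres
# (a square of side 0.1 m in mark-centred coordinates; values are illustrative).
object_points = np.array([[-0.05, -0.05, 0.0],
                          [ 0.05, -0.05, 0.0],
                          [ 0.05,  0.05, 0.0],
                          [-0.05,  0.05, 0.0]], dtype=np.float64)

# Centres extracted from the image by the feature detector (pixel coordinates).
image_points = np.array([[312.4, 240.1],
                         [401.7, 238.9],
                         [399.2, 330.5],
                         [310.8, 332.0]], dtype=np.float64)

# Pinhole camera intrinsics (assumed calibrated beforehand).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation: orientation of the mark in the camera frame
    print("position (m):", tvec.ravel())
    print("orientation (rotation matrix):\n", R)
```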

  18. Speed Limits: Orientation and Semantic Context Interactions Constrain Natural Scene Discrimination Dynamics

    ERIC Educational Resources Information Center

    Rieger, Jochem W.; Kochy, Nick; Schalk, Franziska; Gruschow, Marcus; Heinze, Hans-Jochen

    2008-01-01

    The visual system rapidly extracts information about objects from the cluttered natural environment. In 5 experiments, the authors quantified the influence of orientation and semantics on the classification speed of objects in natural scenes, particularly with regard to object-context interactions. Natural scene photographs were presented in an…

  19. Exploring the Synergies between the Object Oriented Paradigm and Mathematics: A Java Led Approach

    ERIC Educational Resources Information Center

    Conrad, Marc; French, Tim

    2004-01-01

    While the object oriented paradigm and its instantiation within programming languages such as Java has become a ubiquitous part of both the commercial and educational landscapes, its usage as a visualization technique within mathematics undergraduate programmes of study has perhaps been somewhat underestimated. By regarding the object oriented…

  20. Object shape and orientation do not routinely influence performance during language processing.

    PubMed

    Rommers, Joost; Meyer, Antje S; Huettig, Falk

    2013-11-01

    The role of visual representations during language processing remains unclear: They could be activated as a necessary part of the comprehension process, or they could be less crucial and influence performance in a task-dependent manner. In the present experiments, participants read sentences about an object. The sentences implied that the object had a specific shape or orientation. They then either named a picture of that object (Experiments 1 and 3) or decided whether the object had been mentioned in the sentence (Experiment 2). Orientation information did not reliably influence performance in any of the experiments. Shape representations influenced performance most strongly when participants were asked to compare a sentence with a picture or when they were explicitly asked to use mental imagery while reading the sentences. Thus, in contrast to previous claims, implied visual information often does not contribute substantially to the comprehension process during normal reading.

  1. Prototyping Visual Database Interface by Object-Oriented Language

    DTIC Science & Technology

    1988-06-01

    approach is to use object-oriented programming. Object-oriented languages are characterized by three criteria [Ref. 4:p. 1.2.1]: - encapsulation of...made it a sub-class of our DMWindow.Cls, which is discussed later in this chapter. This extension to the application had to be integrated with our... abnormal behaviors similar to Korth's discussion of pitfalls in relational database designing. Even extensions like GEM [Ref. 8] that are powerful and

  2. MFV-class: a multi-faceted visualization tool of object classes.

    PubMed

    Zhang, Zhi-meng; Pan, Yun-he; Zhuang, Yue-ting

    2004-11-01

    Classes are key software components in an object-oriented software system. In many industrial OO software systems, there are some classes that have complicated structure and relationships. So in the processes of software maintenance, testing, software reengineering, software reuse and software restructure, it is a challenge for software engineers to understand these classes thoroughly. This paper proposes a class comprehension model based on constructivist learning theory, and implements a software visualization tool (MFV-Class) to help in the comprehension of a class. The tool provides multiple views of class to uncover manifold facets of class contents. It enables visualizing three object-oriented metrics of classes to help users focus on the understanding process. A case study was conducted to evaluate our approach and the toolkit.

  3. Patterns and Trajectories in Williams Syndrome: The Case of Visual Orientation Discrimination

    ERIC Educational Resources Information Center

    Palomares, Melanie; Englund, Julia A.; Ahlers, Stephanie

    2011-01-01

    Williams Syndrome (WS) is a developmental disorder typified by deficits in visuospatial cognition. To understand the nature of this deficit, we characterized how people with WS perceive visual orientation, a fundamental ability related to object identification. We compared WS participants to typically developing children (3-6 years of age) and…

  4. Coding of spatial attention priorities and object features in the macaque lateral intraparietal cortex.

    PubMed

    Levichkina, Ekaterina; Saalmann, Yuri B; Vidyasagar, Trichur R

    2017-03-01

    Primate posterior parietal cortex (PPC) is known to be involved in controlling spatial attention. Neurons in one part of the PPC, the lateral intraparietal area (LIP), show enhanced responses to objects at attended locations. Although many are selective for object features, such as the orientation of a visual stimulus, it is not clear how LIP circuits integrate feature-selective information when providing attentional feedback about behaviorally relevant locations to the visual cortex. We studied the relationship between object feature and spatial attention properties of LIP cells in two macaques by measuring the cells' orientation selectivity and the degree of attentional enhancement while performing a delayed match-to-sample task. Monkeys had to match both the location and orientation of two visual gratings presented separately in time. We found a wide range in orientation selectivity and degree of attentional enhancement among LIP neurons. However, cells with significant attentional enhancement had much less orientation selectivity in their response than cells which showed no significant modulation by attention. Additionally, orientation-selective cells showed working memory activity for their preferred orientation, whereas cells showing attentional enhancement also synchronized with local neuronal activity. These results are consistent with models of selective attention incorporating two stages, where an initial feature-selective process guides a second stage of focal spatial attention. We suggest that LIP contributes to both stages, where the first stage involves orientation-selective LIP cells that support working memory of the relevant feature, and the second stage involves attention-enhanced LIP cells that synchronize to provide feedback on spatial priorities. © 2017 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of The Physiological Society and the American Physiological Society.

  5. Coherence of structural visual cues and pictorial gravity paves the way for interceptive actions.

    PubMed

    Zago, Myrka; La Scaleia, Barbara; Miller, William L; Lacquaniti, Francesco

    2011-09-20

    Dealing with upside-down objects is difficult and takes time. Among the cues that are critical for defining object orientation, the visible influence of gravity on the object's motion has received limited attention. Here, we manipulated the alignment of visible gravity and structural visual cues between each other and relative to the orientation of the observer and physical gravity. Participants pressed a button triggering a hitter to intercept a target accelerated by a virtual gravity. A factorial design assessed the effects of scene orientation (normal or inverted) and target gravity (normal or inverted). We found that interception was significantly more successful when scene direction was concordant with target gravity direction, irrespective of whether both were upright or inverted. This was so independent of the hitter type and when performance feedback to the participants was either available (Experiment 1) or unavailable (Experiment 2). These results show that the combined influence of visible gravity and structural visual cues can outweigh both physical gravity and viewer-centered cues, leading to rely instead on the congruence of the apparent physical forces acting on people and objects in the scene.

  6. Perceptual Discrimination of Basic Object Features Is Not Facilitated When Priming Stimuli Are Prevented From Reaching Awareness by Means of Visual Masking

    PubMed Central

    Peel, Hayden J.; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A.

    2018-01-01

    Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features. PMID:29725292

  7. Perceptual Discrimination of Basic Object Features Is Not Facilitated When Priming Stimuli Are Prevented From Reaching Awareness by Means of Visual Masking.

    PubMed

    Peel, Hayden J; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A

    2018-01-01

    Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features.

  8. A coarse-to-fine kernel matching approach for mean-shift based visual tracking

    NASA Astrophysics Data System (ADS)

    Liangfu, L.; Zuren, F.; Weidong, C.; Ming, J.

    2009-03-01

    Mean shift is an efficient pattern-matching algorithm. It is widely used in visual tracking because it does not need to search the whole image space. It employs a gradient optimization method to reduce feature-matching time and achieve rapid object localization, and uses the Bhattacharyya coefficient as the similarity measure between the object template and the candidate template. This paper presents a mean shift algorithm based on a coarse-to-fine search for the best kernel match, addressing object tracking under large inter-frame motion. If the object moves so far between two frames that the old and new object windows no longer overlap, traditional mean shift converges only to a local optimum by iterating within the old object window, so the true position cannot be recovered and tracking fails. Our proposed algorithm first uses a similarity measure function to roughly locate the moving object and then applies mean shift iterations to obtain an accurate local optimum, successfully tracking objects under large motion. Experimental results show good performance in accuracy and speed compared with the background-weighted histogram algorithm in the literature.
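
    For reference, the core mean-shift ingredients mentioned here, the Bhattacharyya similarity and the weighted shift of the window centre, can be sketched as follows. This is a generic illustration of standard kernel-based mean-shift tracking, not the authors' coarse-to-fine implementation, and the toy histograms are invented.

```python
import numpy as np

def bhattacharyya(p, q):
    """Similarity between target histogram p and candidate histogram q."""
    return np.sum(np.sqrt(p * q))

def mean_shift_step(patch_pixels, patch_bins, p, q):
    """One mean-shift update of the window centre.

    patch_pixels -- (n, 2) pixel coordinates inside the current window,
                    relative to the window centre
    patch_bins   -- (n,) histogram bin index of each pixel
    p, q         -- target and current-candidate histograms

    Pixels whose bin is under-represented in the candidate (q < p) get a
    larger weight, pulling the window towards the target's colours.
    """
    w = np.sqrt(p[patch_bins] / np.maximum(q[patch_bins], 1e-12))
    return (w[:, None] * patch_pixels).sum(axis=0) / w.sum()   # shift vector

# Toy example: 3 pixels in the window, 4-bin histograms.
pix = np.array([[-1.0, 0.0], [0.0, 1.0], [2.0, -1.0]])
bins = np.array([0, 1, 3])
p = np.array([0.4, 0.4, 0.1, 0.1])   # target model
q = np.array([0.1, 0.5, 0.2, 0.2])   # current candidate
print(bhattacharyya(p, q), mean_shift_step(pix, bins, p, q))
```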

  9. Visual Salience in the Change Detection Paradigm: The Special Role of Object Onset

    ERIC Educational Resources Information Center

    Cole, Geoff G.; Kentridge, Robert W.; Heywood, Charles A.

    2004-01-01

    The relative efficacy with which appearance of a new object orients visual attention was investigated. At issue is whether the visual system treats onset as being of particular importance or only 1 of a number of stimulus events equally likely to summon attention. Using the 1-shot change detection paradigm, the authors compared detectability of…

  10. Early Visual Cortex Dynamics during Top-Down Modulated Shifts of Feature-Selective Attention.

    PubMed

    Müller, Matthias M; Trautmann, Mireille; Keitel, Christian

    2016-04-01

    Shifting attention from one color to another color or from color to another feature dimension such as shape or orientation is imperative when searching for a certain object in a cluttered scene. Most attention models that emphasize feature-based selection implicitly assume that all shifts in feature-selective attention underlie identical temporal dynamics. Here, we recorded time courses of behavioral data and steady-state visual evoked potentials (SSVEPs), an objective electrophysiological measure of neural dynamics in early visual cortex to investigate temporal dynamics when participants shifted attention from color or orientation toward color or orientation, respectively. SSVEPs were elicited by four random dot kinematograms that flickered at different frequencies. Each random dot kinematogram was composed of dashes that uniquely combined two features from the dimensions color (red or blue) and orientation (slash or backslash). Participants were cued to attend to one feature (such as color or orientation) and respond to coherent motion targets of the to-be-attended feature. We found that shifts toward color occurred earlier after the shifting cue compared with shifts toward orientation, regardless of the original feature (i.e., color or orientation). This was paralleled in SSVEP amplitude modulations as well as in the time course of behavioral data. Overall, our results suggest different neural dynamics during shifts of attention from color and orientation and the respective shifting destinations, namely, either toward color or toward orientation.

  11. When apperceptive agnosia is explained by a deficit of primary visual processing.

    PubMed

    Serino, Andrea; Cecere, Roberto; Dundon, Neil; Bertini, Caterina; Sanchez-Castaneda, Cristina; Làdavas, Elisabetta

    2014-03-01

    Visual agnosia is a deficit in shape perception, affecting figure, object, face and letter recognition. Agnosia is usually attributed to lesions to high-order modules of the visual system, which combine visual cues to represent the shape of objects. However, most previously reported agnosia cases presented visual field (VF) defects and poor primary visual processing. The present case-study aims to verify whether form agnosia could be explained by a deficit in basic visual functions, rather than by a deficit in high-order shape recognition. Patient SDV suffered a bilateral lesion of the occipital cortex due to anoxia. When tested, he could navigate, interact with others, and was autonomous in daily life activities. However, he could not recognize objects from drawings and figures, read or recognize familiar faces. He was able to recognize objects by touch and people from their voice. Assessments of visual functions showed blindness at the centre of the VF, up to almost 5°, bilaterally, with better stimulus detection in the periphery. Colour and motion perception was preserved. Psychophysical experiments showed that SDV's visual recognition deficits were not explained by poor spatial acuity or by the crowding effect. Rather, a severe deficit in line orientation processing might be a key mechanism explaining SDV's agnosia. Line orientation processing is a basic function of primary visual cortex neurons, necessary for detecting "edges" of visual stimuli to build up a "primal sketch" for object recognition. We propose, therefore, that some forms of visual agnosia may be explained by deficits in basic visual functions due to widespread lesions of the primary visual areas, affecting primary levels of visual processing. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Contextual effects on perceived contrast: figure-ground assignment and orientation contrast.

    PubMed

    Self, Matthew W; Mookhoek, Aart; Tjalma, Nienke; Roelfsema, Pieter R

    2015-02-02

    Figure-ground segregation is an important step in the path leading to object recognition. The visual system segregates objects ('figures') in the visual scene from their backgrounds ('ground'). Electrophysiological studies in awake-behaving monkeys have demonstrated that neurons in early visual areas increase their firing rate when responding to a figure compared to responding to the background. We hypothesized that similar changes in neural firing would take place in early visual areas of the human visual system, leading to changes in the perception of low-level visual features. In this study, we investigated whether contrast perception is affected by figure-ground assignment using stimuli similar to those in the electrophysiological studies in monkeys. We measured contrast discrimination thresholds and perceived contrast for Gabor probes placed on figures or the background and found that the perceived contrast of the probe was increased when it was placed on a figure. Furthermore, we tested how this effect compared with the well-known effect of orientation contrast on perceived contrast. We found that figure-ground assignment and orientation contrast produced changes in perceived contrast of a similar magnitude, and that they interacted. Our results demonstrate that figure-ground assignment influences perceived contrast, consistent with an effect of figure-ground assignment on activity in early visual areas of the human visual system. © 2015 ARVO.

  13. Visual recognition and inference using dynamic overcomplete sparse learning.

    PubMed

    Murray, Joseph F; Kreutz-Delgado, Kenneth

    2007-09-01

    We present a hierarchical architecture and learning algorithm for visual recognition and other visual inference tasks such as imagination, reconstruction of occluded images, and expectation-driven segmentation. Using properties of biological vision for guidance, we posit a stochastic generative world model and from it develop a simplified world model (SWM) based on a tractable variational approximation that is designed to enforce sparse coding. Recent developments in computational methods for learning overcomplete representations (Lewicki & Sejnowski, 2000; Teh, Welling, Osindero, & Hinton, 2003) suggest that overcompleteness can be useful for visual tasks, and we use an overcomplete dictionary learning algorithm (Kreutz-Delgado, et al., 2003) as a preprocessing stage to produce accurate, sparse codings of images. Inference is performed by constructing a dynamic multilayer network with feedforward, feedback, and lateral connections, which is trained to approximate the SWM. Learning is done with a variant of the back-propagation-through-time algorithm, which encourages convergence to desired states within a fixed number of iterations. Vision tasks require large networks, and to make learning efficient, we take advantage of the sparsity of each layer to update only a small subset of elements in a large weight matrix at each iteration. Experiments on a set of rotated objects demonstrate various types of visual inference and show that increasing the degree of overcompleteness improves recognition performance in difficult scenes with occluded objects in clutter.
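
    As a rough illustration of the sparse-coding preprocessing stage, the sketch below computes a sparse code for a signal under an overcomplete dictionary using ISTA. This is a generic stand-in chosen for brevity; the paper itself uses the dictionary-learning algorithm of Kreutz-Delgado et al. rather than this particular solver, and all names and values are invented.

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, iters=100):
    """Sparse code for signal x under overcomplete dictionary D via ISTA.

    D   -- (m, k) dictionary with k > m (overcomplete), columns ~ unit norm
    x   -- (m,) signal (e.g., an image patch)
    lam -- sparsity penalty

    Solves min_a 0.5*||x - D a||^2 + lam*||a||_1 by iterative soft-thresholding.
    """
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ a - x)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 64))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
a_true = np.zeros(64)
a_true[[3, 17]] = 1.0
x = D @ a_true
a = ista_sparse_code(D, x)
# The two largest coefficients typically land on the generating atoms (3 and 17).
print(sorted(np.argsort(np.abs(a))[-2:]))
```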

  14. Jini service to reconstruct tomographic data

    NASA Astrophysics Data System (ADS)

    Knoll, Peter; Mirzaei, S.; Koriska, K.; Koehn, H.

    2002-06-01

    A number of imaging systems rely on the reconstruction of a 3-dimensional model from its projections through the process of computed tomography (CT). In medical imaging, for example, magnetic resonance imaging (MRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT) acquire two-dimensional projections of a three-dimensional object. In order to calculate the 3-dimensional representation of the object, i.e. its voxel distribution, several reconstruction algorithms have been developed. Currently, two classes of reconstruction methods are mainly in use: filtered back projection (FBP) and iterative methods. Although the quality of iteratively reconstructed SPECT slices is better than that of FBP slices, such iterative algorithms are rarely used for clinical routine studies because of their low availability and increased reconstruction time. We used Jini and a self-developed iterative reconstruction algorithm to design and implement a Jini reconstruction service. With this service, the physician selects the patient study from a database and a Jini client automatically discovers the registered Jini reconstruction services in the department's Intranet. After downloading the proxy object of this Jini service, the SPECT acquisition data are reconstructed. The resulting transaxial slices are visualized using a Jini slice viewer, which can be used for various imaging modalities.
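
    The record does not specify the self-developed iterative algorithm, but a typical iterative SPECT reconstruction is the multiplicative MLEM update, sketched below for illustration. The toy dense system matrix stands in for a real projector, and the dimensions are invented.

```python
import numpy as np

def mlem(A, proj, iters=20):
    """Maximum-likelihood EM reconstruction for emission tomography (minimal sketch).

    A    -- (n_measurements, n_voxels) system matrix: A[i, j] is the
            contribution of voxel j to projection bin i
    proj -- (n_measurements,) measured projection data

    Returns the estimated voxel activity distribution.
    """
    x = np.ones(A.shape[1])                    # uniform initial image
    sens = A.sum(axis=0)                       # sensitivity of each voxel
    for _ in range(iters):
        expected = A @ x                       # forward-project the current estimate
        ratio = proj / np.maximum(expected, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)   # multiplicative update
    return x

# Toy example: 3 voxels seen by 4 detector bins; noiseless data.
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0],
              [0.3, 0.3, 0.3]])
true_x = np.array([2.0, 1.0, 3.0])
print(mlem(A, A @ true_x, iters=200))          # approaches [2, 1, 3]
```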

  15. Parameterized hardware description as object oriented hardware model implementation

    NASA Astrophysics Data System (ADS)

    Drabik, Pawel K.

    2010-09-01

    The paper introduces a novel model for the design, visualization, and management of complex, highly adaptive hardware systems. The model establishes a component-oriented environment for both hardware modules and software applications, and it builds on parameterized hardware description research. The establishment of a stable link between hardware and software, the purpose of the designed and realized work, is presented. A novel programming framework model for the environment, named Graphic-Functional-Components, is presented. The purpose of the paper is to present object-oriented hardware modeling with the mentioned features. A possible model implementation in FPGA chips and its management by object-oriented software in Java is described.

  16. The Synaptic and Morphological Basis of Orientation Selectivity in a Polyaxonal Amacrine Cell of the Rabbit Retina.

    PubMed

    Murphy-Baum, Benjamin L; Taylor, W Rowland

    2015-09-30

    Much of the computational power of the retina derives from the activity of amacrine cells, a large and diverse group of GABAergic and glycinergic inhibitory interneurons. Here, we identify an ON-type orientation-selective, wide-field, polyaxonal amacrine cell (PAC) in the rabbit retina and demonstrate how its orientation selectivity arises from the structure of the dendritic arbor and the pattern of excitatory and inhibitory inputs. Excitation from ON bipolar cells and inhibition arising from the OFF pathway converge to generate a quasi-linear integration of visual signals in the receptive field center. This serves to suppress responses to high spatial frequencies, thereby improving sensitivity to larger objects and enhancing orientation selectivity. Inhibition also regulates the magnitude and time course of excitatory inputs to this PAC through serial inhibitory connections onto the presynaptic terminals of ON bipolar cells. This presynaptic inhibition is driven by graded potentials within local microcircuits, similar in extent to the size of single bipolar cell receptive fields. Additional presynaptic inhibition is generated by spiking amacrine cells on a larger spatial scale covering several hundred microns. The orientation selectivity of this PAC may be a substrate for the inhibition that mediates orientation selectivity in some types of ganglion cells. Significance statement: The retina comprises numerous excitatory and inhibitory circuits that encode specific features in the visual scene, such as orientation, contrast, or motion. Here, we identify a wide-field inhibitory neuron that responds to visual stimuli of a particular orientation, a feature selectivity that is primarily due to the elongated shape of the dendritic arbor. Integration of convergent excitatory and inhibitory inputs from the ON and OFF visual pathways suppress responses to small objects and fine textures, thus enhancing selectivity for larger objects. Feedback inhibition regulates the strength and speed of excitation on both local and wide-field spatial scales. This study demonstrates how different synaptic inputs are regulated to tune a neuron to respond to specific features in the visual scene. Copyright © 2015 the authors 0270-6474/15/3513336-15$15.00/0.

  17. "Commentary": Object and Spatial Visualization in Geosciences

    ERIC Educational Resources Information Center

    Kastens, Kim

    2010-01-01

    Cognitive science research shows that the brain has two systems for processing visual information, one specialized for spatial information such as position, orientation, and trajectory, and the other specialized for information used to identify objects, such as color, shape and texture. Some individuals seem to be more facile with the spatial…

  18. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

  19. Three-quarter views are subjectively good because object orientation is uncertain.

    PubMed

    Niimi, Ryosuke; Yokosawa, Kazuhiko

    2009-04-01

    Because the objects that surround us are three-dimensional, their appearance and our visual perception of them change depending on an object's orientation relative to a viewpoint. One of the most remarkable effects of object orientation is that viewers prefer three-quarter views over others, such as front and back, but the exact source of this preference has not been firmly established. We show that object orientation perception of the three-quarter view is relatively imprecise and that this impreciseness is related to preference for this view. Human vision is largely insensitive to variations among different three-quarter views (e.g., 45 degrees vs. 50 degrees); therefore, the three-quarter view is perceived as if it corresponds to a wide range of orientations. In other words, it functions as the typical representation of the object.

  20. From Flashes to Edges to Objects: Recovery of Local Edge Fragments Initiates Spatiotemporal Boundary Formation

    PubMed Central

    Erlikhman, Gennady; Kellman, Philip J.

    2016-01-01

    Spatiotemporal boundary formation (SBF) is the perception of illusory boundaries, global form, and global motion from spatially and temporally sparse transformations of texture elements (Shipley and Kellman, 1993a, 1994; Erlikhman and Kellman, 2015). It has been theorized that the visual system uses positions and times of element transformations to extract local oriented edge fragments, which then connect by known interpolation processes to produce larger contours and shapes in SBF. To test this theory, we created a novel display consisting of a sawtooth arrangement of elements that disappeared and reappeared sequentially. Although apparent motion along the sawtooth would be expected, with appropriate spacing and timing, the resulting percept was of a larger, moving, illusory bar. This display approximates the minimal conditions for visual perception of an oriented edge fragment from spatiotemporal information and confirms that such events may be initiating conditions in SBF. Using converging objective and subjective methods, experiments showed that edge formation in these displays was subject to a temporal integration constraint of ~80 ms between element disappearances. The experiments provide clear support for models of SBF that begin with extraction of local edge fragments, and they identify minimal conditions required for this process. We conjecture that these results reveal a link between spatiotemporal object perception and basic visual filtering. Motion energy filters have usually been studied with orientation given spatially by luminance contrast. When orientation is not given in static frames, these same motion energy filters serve as spatiotemporal edge filters, yielding local orientation from discrete element transformations over time. As numerous filters of different characteristic orientations and scales may respond to any simple SBF stimulus, we discuss the aperture and ambiguity problems that accompany this conjecture and how they might be resolved by the visual system. PMID:27445886

  1. Visual spatial cue use for guiding orientation in two-to-three-year-old children

    PubMed Central

    van den Brink, Danielle; Janzen, Gabriele

    2013-01-01

    In spatial development representations of the environment and the use of spatial cues change over time. To date, the influence of individual differences in skills relevant for orientation and navigation has not received much attention. The current study investigated orientation abilities on the basis of visual spatial cues in 2–3-year-old children, and assessed factors that possibly influence spatial task performance. Thirty-month and 35-month-olds performed an on-screen Virtual Reality (VR) orientation task searching for an animated target in the presence of visual self-movement cues and landmark information. Results show that, in contrast to 30-month-old children, 35-month-olds were successful in using visual spatial cues for maintaining orientation. Neither age group benefited from landmarks present in the environment, suggesting that successful task performance relied on the use of optic flow cues, rather than object-to-object relations. Analysis of individual differences revealed that 2-year-olds who were relatively more independent in comparison to their peers, as measured by the daily living skills scale of the parental questionnaire Vineland-Screener were most successful at the orientation task. These results support previous findings indicating that the use of various spatial cues gradually improves during early childhood. Our data show that a developmental transition in spatial cue use can be witnessed within a relatively short period of 5 months only. Furthermore, this study indicates that rather than chronological age, individual differences may play a role in successful use of visual cues for spatial updating in an orientation task. Future studies are necessary to assess the exact nature of these individual differences. PMID:24368903

  2. Visual spatial cue use for guiding orientation in two-to-three-year-old children.

    PubMed

    van den Brink, Danielle; Janzen, Gabriele

    2013-01-01

    In spatial development representations of the environment and the use of spatial cues change over time. To date, the influence of individual differences in skills relevant for orientation and navigation has not received much attention. The current study investigated orientation abilities on the basis of visual spatial cues in 2-3-year-old children, and assessed factors that possibly influence spatial task performance. Thirty-month and 35-month-olds performed an on-screen Virtual Reality (VR) orientation task searching for an animated target in the presence of visual self-movement cues and landmark information. Results show that, in contrast to 30-month-old children, 35-month-olds were successful in using visual spatial cues for maintaining orientation. Neither age group benefited from landmarks present in the environment, suggesting that successful task performance relied on the use of optic flow cues, rather than object-to-object relations. Analysis of individual differences revealed that 2-year-olds who were relatively more independent in comparison to their peers, as measured by the daily living skills scale of the parental questionnaire Vineland-Screener were most successful at the orientation task. These results support previous findings indicating that the use of various spatial cues gradually improves during early childhood. Our data show that a developmental transition in spatial cue use can be witnessed within a relatively short period of 5 months only. Furthermore, this study indicates that rather than chronological age, individual differences may play a role in successful use of visual cues for spatial updating in an orientation task. Future studies are necessary to assess the exact nature of these individual differences.

  3. Automatic extraction and visualization of object-oriented software design metrics

    NASA Astrophysics Data System (ADS)

    Lakshminarayana, Anuradha; Newman, Timothy S.; Li, Wei; Talburt, John

    2000-02-01

    Software visualization is a graphical representation of software characteristics and behavior. Certain modes of software visualization can be useful in isolating problems and identifying unanticipated behavior. In this paper we present a new approach to aid understanding of object-oriented software through 3D visualization of software metrics that can be extracted from the design phase of software development. The focus of the paper is a metric extraction method and a new collection of glyphs for multi-dimensional metric visualization. Our approach utilizes the extensibility interface of a popular CASE tool to access and automatically extract the metrics from Unified Modeling Language class diagrams. Following the extraction of the design metrics, 3D visualizations of these metrics are generated for each class in the design, utilizing intuitively meaningful 3D glyphs that are representative of the ensemble of metrics. Extraction and visualization of design metrics can aid software developers in the early study and understanding of design complexity.
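
    A minimal sketch of the extraction step: given a simple in-memory representation of a class parsed from a UML diagram, compute a few per-class metrics, each of which could drive one dimension of a 3D glyph (e.g., height, width, colour). The class and metric names are illustrative, not those used by the authors.

```python
from dataclasses import dataclass, field

@dataclass
class ClassNode:
    """Minimal stand-in for a class parsed from a UML class diagram."""
    name: str
    attributes: list = field(default_factory=list)
    methods: list = field(default_factory=list)
    associations: list = field(default_factory=list)   # names of related classes

def design_metrics(cls: ClassNode) -> dict:
    """Three simple design metrics; each could map to one glyph dimension."""
    return {
        "num_attributes": len(cls.attributes),
        "num_methods": len(cls.methods),
        "coupling": len(set(cls.associations)),
    }

order = ClassNode("Order",
                  attributes=["id", "date", "total"],
                  methods=["addItem", "cancel"],
                  associations=["Customer", "LineItem", "LineItem"])
print(design_metrics(order))   # {'num_attributes': 3, 'num_methods': 2, 'coupling': 2}
```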

  4. The Benefit of Surface Uniformity for Encoding Boundary Features in Visual Working Memory

    ERIC Educational Resources Information Center

    Kim, Sung-Ho; Kim, Jung-Oh

    2011-01-01

    Using a change detection paradigm, the present study examined an object-based encoding benefit in visual working memory (VWM) for two boundary features (two orientations in Experiments 1-2 and two shapes in Experiments 3-4) assigned to a single object. Participants remembered more boundary features when they were conjoined into a single object of…

  5. Updating of visual orientation in a gravity-based reference frame.

    PubMed

    Niehof, Nynke; Tramper, Julian J; Doeller, Christian F; Medendorp, W Pieter

    2017-10-01

    The brain can use multiple reference frames to code line orientation, including head-, object-, and gravity-centered references. If these frames change orientation, their representations must be updated to keep register with actual line orientation. We tested this internal updating during head rotation in roll, exploiting the rod-and-frame effect: The illusory tilt of a vertical line surrounded by a tilted visual frame. If line orientation is stored relative to gravity, these distortions should also affect the updating process. Alternatively, if coding is head- or frame-centered, updating errors should be related to the changes in their orientation. Ten subjects were instructed to memorize the orientation of a briefly flashed line, surrounded by a tilted visual frame, then rotate their head, and subsequently judge the orientation of a second line relative to the memorized first while the frame was upright. Results showed that updating errors were mostly related to the amount of subjective distortion of gravity at both the initial and final head orientation, rather than to the amount of intervening head rotation. In some subjects, a smaller part of the updating error was also related to the change of visual frame orientation. We conclude that the brain relies primarily on a gravity-based reference to remember line orientation during head roll.

  6. Location-coding account versus affordance-activation account in handle-to-hand correspondence effects: Evidence of Simon-like effects based on the coding of action direction.

    PubMed

    Pellicano, Antonello; Koch, Iring; Binkofski, Ferdinand

    2017-09-01

    An increasing number of studies have shown a close link between perception and action, which is supposed to be responsible for the automatic activation of actions compatible with objects' properties, such as the orientation of their graspable parts. It has been observed that left and right hand responses to objects (e.g., cups) are faster and more accurate if the handle orientation corresponds to the response location than when it does not. Two alternative explanations have been proposed for this handle-to-hand correspondence effect: location coding and affordance activation. The aim of the present study was to provide disambiguating evidence on the origin of this effect by employing object sets for which the visually salient portion was separated from, and opposite to, the graspable one, and vice versa. Seven experiments were conducted employing both single objects and object pairs as visual stimuli to enhance the contextual information about objects' graspability and usability. Notwithstanding these manipulations intended to favor affordance activation, results fully supported the location-coding account, displaying significant Simon-like effects that involved the orientation of the visually salient portion of the object stimulus and the location of the response. Crucially, we provided evidence of Simon-like effects based on higher-level cognitive, iconic representations of action directions rather than based on lower-level spatial coding of the pure position of protruding portions of the visual stimuli. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. Automatic Synthesis of UML Designs from Requirements in an Iterative Process

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Whittle, Jon; Clancy, Daniel (Technical Monitor)

    2001-01-01

    The Unified Modeling Language (UML) is gaining wide popularity for the design of object-oriented systems. UML combines various object-oriented graphical design notations under one common framework. A major factor for the broad acceptance of UML is that it can be conveniently used in a highly iterative, Use Case (or scenario-based) process (although the process is not a part of UML). Here, the (pre-) requirements for the software are specified rather informally as Use Cases and a set of scenarios. A scenario can be seen as an individual trace of a software artifact. Besides first sketches of a class diagram to illustrate the static system breakdown, scenarios are a favorite way of communication with the customer, because scenarios describe concrete interactions between entities and are thus easy to understand. Scenarios with a high level of detail are often expressed as sequence diagrams. Later in the design and implementation stage (elaboration and implementation phases), a design of the system's behavior is often developed as a set of statecharts. From there (and the full-fledged class diagram), actual code development is started. Current commercial UML tools support this phase by providing code generators for class diagrams and statecharts. In practice, it can be observed that the transition from requirements to design to code is a highly iterative process. In this talk, a set of algorithms is presented which perform reasonable synthesis and transformations between different UML notations (sequence diagrams, Object Constraint Language (OCL) constraints, statecharts). More specifically, we will discuss the following transformations: Statechart synthesis, introduction of hierarchy, consistency of modifications, and "design-debugging".
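    To make the notion of synthesis between notations concrete, the sketch below folds event scenarios into a prefix-tree acceptor, i.e. a flat, first-cut statechart. This is only an assumed, simplified illustration; the algorithms discussed in the talk (hierarchy introduction, consistency of modifications, design debugging) go well beyond it, and the scenario content is hypothetical.

      def synthesize(scenarios):
          """Fold event scenarios into a prefix-tree acceptor: a flat, first-cut statechart."""
          transitions = {}        # (state, event) -> next state
          state_count = 1         # state 0 is the initial state
          for scenario in scenarios:
              state = 0
              for event in scenario:
                  key = (state, event)
                  if key not in transitions:
                      transitions[key] = state_count
                      state_count += 1
                  state = transitions[key]
          return transitions

      scenarios = [
          ["insertCard", "enterPin", "withdraw", "ejectCard"],
          ["insertCard", "enterPin", "checkBalance", "ejectCard"],
      ]
      for (state, event), target in sorted(synthesize(scenarios).items()):
          print(f"S{state} --{event}--> S{target}")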

  8. Combining local and global limitations of visual search.

    PubMed

    Põder, Endel

    2017-04-01

    There are different opinions about the roles of local interactions and central processing capacity in visual search. This study attempts to clarify the problem using a new version of relevant set cueing. A central precue indicates two symmetrical segments (that may contain a target object) within a circular array of objects presented briefly around the fixation point. The number of objects in the relevant segments, and density of objects in the array were varied independently. Three types of search experiments were run: (a) search for a simple visual feature (color, size, and orientation); (b) conjunctions of simple features; and (c) spatial configuration of simple features (rotated Ts). For spatial configuration stimuli, the results were consistent with a fixed global processing capacity and standard crowding zones. For simple features and their conjunctions, the results were different, dependent on the features involved. While color search exhibits virtually no capacity limits or crowding, search for an orientation target was limited by both. Results for conjunctions of features can be partly explained by the results from the respective features. This study shows that visual search is limited by both local interference and global capacity, and the limitations are different for different visual features.

  9. Tradeoff between noise reduction and inartificial visualization in a model-based iterative reconstruction algorithm on coronary computed tomography angiography.

    PubMed

    Hirata, Kenichiro; Utsunomiya, Daisuke; Kidoh, Masafumi; Funama, Yoshinori; Oda, Seitaro; Yuki, Hideaki; Nagayama, Yasunori; Iyama, Yuji; Nakaura, Takeshi; Sakabe, Daisuke; Tsujita, Kenichi; Yamashita, Yasuyuki

    2018-05-01

    We aimed to evaluate the image quality performance of coronary CT angiography (CTA) under the different settings of forward-projected model-based iterative reconstruction solutions (FIRST). Thirty patients undergoing coronary CTA were included. Each image was reconstructed using filtered back projection (FBP), adaptive iterative dose reduction 3D (AIDR-3D), and 2 model-based iterative reconstructions including FIRST-body and FIRST-cardiac sharp (CS). CT number and noise were measured in the coronary vessels and plaque. Subjective image-quality scores were obtained for noise and structure visibility. In the objective image analysis, FIRST-body produced the significantly highest contrast-to-noise ratio. Regarding subjective image quality, FIRST-CS had the highest score for structure visibility, although the image noise score was inferior to that of FIRST-body. In conclusion, FIRST provides significant improvements in objective and subjective image quality compared with FBP and AIDR-3D. FIRST-body effectively reduces image noise, but the structure visibility with FIRST-CS was superior to FIRST-body.
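    For reference, the contrast-to-noise ratio used in such an objective analysis is typically computed along the following lines. The region-of-interest choices and CT numbers below are hypothetical; the exact definition used in the study may differ.

      import numpy as np

      def contrast_to_noise_ratio(vessel_roi, background_roi):
          """Generic CNR: (mean vessel CT number - mean background CT number) / background noise SD."""
          vessel = np.asarray(vessel_roi, dtype=float)
          background = np.asarray(background_roi, dtype=float)
          return (vessel.mean() - background.mean()) / background.std(ddof=1)

      # Hypothetical CT numbers (HU) sampled from a contrast-enhanced lumen and perivascular fat.
      rng = np.random.default_rng(0)
      lumen = rng.normal(450, 25, size=200)
      fat = rng.normal(-80, 20, size=200)
      print(f"CNR = {contrast_to_noise_ratio(lumen, fat):.1f}")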

  10. Object tracking based on harmony search: comparative study

    NASA Astrophysics Data System (ADS)

    Gao, Ming-Liang; He, Xiao-Hai; Luo, Dai-Sheng; Yu, Yan-Mei

    2012-10-01

    Visual tracking can be treated as an optimization problem. A new meta-heuristic optimal algorithm, Harmony Search (HS), was first applied to perform visual tracking by Fourie et al. As the authors point out, many subjects are still required in ongoing research. Our work is a continuation of Fourie's study, with four prominent improved variations of HS, namely Improved Harmony Search (IHS), Global-best Harmony Search (GHS), Self-adaptive Harmony Search (SHS) and Differential Harmony Search (DHS) adopted into the tracking system. Their performances are tested and analyzed on multiple challenging video sequences. Experimental results show that IHS is best, with DHS ranking second among the four improved trackers when the iteration number is small. However, the differences between all four reduced gradually, along with the increasing number of iterations.
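    A minimal sketch of how basic Harmony Search can drive template tracking: here only the (y, x) position of a template on a synthetic frame is optimized by minimizing the sum of squared differences. The parameter values and the cost function are illustrative, not those of the trackers compared in the paper.

      import numpy as np

      rng = np.random.default_rng(1)

      # Synthetic frame with a bright square "object"; the template is the exact crop of it,
      # so the true top-left corner of the target is (60, 70).
      frame = rng.normal(0.0, 0.1, (120, 120))
      frame[60:80, 70:90] += 1.0
      template = frame[60:80, 70:90].copy()
      h, w = template.shape

      def cost(pos):
          """Sum of squared differences between the template and the candidate patch."""
          y, x = int(pos[0]), int(pos[1])
          patch = frame[y:y + h, x:x + w]
          return float(((patch - template) ** 2).sum())

      lo = np.array([0.0, 0.0])
      hi = np.array([frame.shape[0] - h, frame.shape[1] - w], dtype=float)

      # Basic Harmony Search over the (y, x) position of the template.
      HMS, HMCR, PAR, BW, ITERS = 20, 0.9, 0.3, 3.0, 2000
      memory = rng.uniform(lo, hi, size=(HMS, 2))
      scores = np.array([cost(p) for p in memory])

      for _ in range(ITERS):
          new = np.empty(2)
          for d in range(2):
              if rng.random() < HMCR:                   # draw from harmony memory
                  new[d] = memory[rng.integers(HMS), d]
                  if rng.random() < PAR:                # pitch adjustment
                      new[d] += rng.uniform(-BW, BW)
              else:                                     # random consideration
                  new[d] = rng.uniform(lo[d], hi[d])
          new = np.clip(new, lo, hi)
          c = cost(new)
          worst = int(np.argmax(scores))
          if c < scores[worst]:                         # replace the worst harmony
              memory[worst], scores[worst] = new, c

      best = memory[int(np.argmin(scores))]
      print("best cost:", round(float(scores.min()), 2), "at top-left corner", best.round())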

  11. Concentration of Swiss Elite Orienteers.

    ERIC Educational Resources Information Center

    Seiler, Roland; Wetzel, Jorg

    1997-01-01

    A visual discrimination task was used to measure concentration among 43 members of Swiss national orienteering teams. Subjects were above average in the number of target objects dealt with and in duration of continuous concentration. For females only, ranking in orienteering performance was related to quality of concentration (ratio of correct to…

  12. Flow Visualization of Aircraft in Flight by Means of Background Oriented Schlieren Using Celestial Objects

    NASA Technical Reports Server (NTRS)

    Hill, Michael A.; Haering, Edward A., Jr.

    2017-01-01

    The Background Oriented Schlieren using Celestial Objects series of flights was undertaken in the spring of 2016 at National Aeronautics and Space Administration Armstrong Flight Research Center to further develop and improve a flow visualization technique which can be performed from the ground upon flying aircraft. Improved hardware and imaging techniques from previous schlieren tests were investigated. A United States Air Force T-38C and NASA B200 King Air aircraft were imaged eclipsing the sun at ranges varying from 2 to 6 nautical miles, at subsonic and supersonic speeds.

  13. The Relationship between Visual Attention and Visual Working Memory Encoding: A Dissociation between Covert and Overt Orienting

    PubMed Central

    Tas, A. Caglar; Luck, Steven J.; Hollingworth, Andrew

    2016-01-01

    There is substantial debate over whether visual working memory (VWM) and visual attention constitute a single system for the selection of task-relevant perceptual information or whether they are distinct systems that can be dissociated when their representational demands diverge. In the present study, we focused on the relationship between visual attention and the encoding of objects into visual working memory (VWM). Participants performed a color change-detection task. During the retention interval, a secondary object, irrelevant to the memory task, was presented. Participants were instructed either to execute an overt shift of gaze to this object (Experiments 1–3) or to attend it covertly (Experiments 4 and 5). Our goal was to determine whether these overt and covert shifts of attention disrupted the information held in VWM. We hypothesized that saccades, which typically introduce a memorial demand to bridge perceptual disruption, would lead to automatic encoding of the secondary object. However, purely covert shifts of attention, which introduce no such demand, would not result in automatic memory encoding. The results supported these predictions. Saccades to the secondary object produced substantial interference with VWM performance, but covert shifts of attention to this object produced no interference with VWM performance. These results challenge prevailing theories that consider attention and VWM to reflect a common mechanism. In addition, they indicate that the relationship between attention and VWM is dependent on the memorial demands of the orienting behavior. PMID:26854532

  14. Automatic estimation of retinal nerve fiber bundle orientation in SD-OCT images using a structure-oriented smoothing filter

    NASA Astrophysics Data System (ADS)

    Ghafaryasl, Babak; Baart, Robert; de Boer, Johannes F.; Vermeer, Koenraad A.; van Vliet, Lucas J.

    2017-02-01

    Optical coherence tomography (OCT) yields high-resolution, three-dimensional images of the retina. A better understanding of retinal nerve fiber bundle (RNFB) trajectories in combination with visual field data may be used for future diagnosis and monitoring of glaucoma. However, manual tracing of these bundles is a tedious task. In this work, we present an automatic technique to estimate the orientation of RNFBs from volumetric OCT scans. Our method consists of several steps, starting from automatic segmentation of the RNFL. Then, a stack of en face images around the posterior nerve fiber layer interface was extracted. The image showing the best visibility of RNFB trajectories was selected for further processing. After denoising the selected en face image, a semblance structure-oriented filter was applied to probe the strength of local linear structure in a discrete set of orientations creating an orientation space. Gaussian filtering along the orientation axis in this space is used to find the dominant orientation. Next, a confidence map was created to supplement the estimated orientation. This confidence map was used as pixel weight in normalized convolution to regularize the semblance filter response after which a new orientation estimate can be obtained. Finally, after several iterations an orientation field corresponding to the strongest local orientation was obtained. The RNFB orientations of six macular scans from three subjects were estimated. For all scans, visual inspection shows a good agreement between the estimated orientation fields and the RNFB trajectories in the en face images. Additionally, a good correlation between the orientation fields of two scans of the same subject was observed. Our method was also applied to a larger field of view around the macula. Manual tracing of the RNFB trajectories shows a good agreement with the automatically obtained streamlines obtained by fiber tracking.
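    The semblance orientation-space filtering described above is more elaborate than can be reproduced here; the sketch below uses the simpler structure-tensor estimate, which likewise yields a per-pixel dominant orientation together with a coherence map that can serve as a confidence weight. All parameter values and the synthetic test pattern are assumptions.

      import numpy as np
      from scipy.ndimage import gaussian_filter, sobel

      def local_orientation(image, sigma=3.0):
          """Structure-tensor estimate of the orientation along local linear structures
          (radians, modulo pi), plus a coherence map usable as a confidence weight."""
          gx = sobel(image, axis=1, mode="reflect")
          gy = sobel(image, axis=0, mode="reflect")
          jxx = gaussian_filter(gx * gx, sigma)
          jyy = gaussian_filter(gy * gy, sigma)
          jxy = gaussian_filter(gx * gy, sigma)
          grad_ori = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)      # dominant gradient orientation
          line_ori = (grad_ori + np.pi / 2) % np.pi              # linear structures run perpendicular to it
          coherence = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2) / (jxx + jyy + 1e-12)
          return line_ori, coherence

      # Synthetic "en face" image with stripe lines running at 30 degrees.
      rows, cols = np.mgrid[0:128, 0:128]
      alpha = np.deg2rad(30.0)
      stripes = np.sin(0.3 * (-cols * np.sin(alpha) + rows * np.cos(alpha)))
      ori, coh = local_orientation(stripes)
      print("median estimated orientation (deg):", np.rad2deg(np.median(ori[coh > 0.5])).round(1))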

  15. Tree growth visualization

    Treesearch

    L. Linsen; B.J. Karis; E.G. McPherson; B. Hamann

    2005-01-01

    In computer graphics, models describing the fractal branching structure of trees typically exploit the modularity of tree structures. The models are based on local production rules, which are applied iteratively and simultaneously to create a complex branching system. The objective is to generate three-dimensional scenes of often many realistic-looking and non-...
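    The iterative local production rules mentioned above are the essence of L-systems; a generic bracketed L-system can be expanded as follows (the rule set is illustrative, not the specific model used in the paper).

      # Generic bracketed L-system: production rules are applied to every symbol
      # simultaneously, and each iteration refines the branching structure.

      def expand(axiom, rules, iterations):
          s = axiom
          for _ in range(iterations):
              s = "".join(rules.get(ch, ch) for ch in s)
          return s

      rules = {"F": "FF", "X": "F[+X][-X]FX"}   # F: grow, +/-: turn, [ ]: push/pop a branch
      derivation = expand("X", rules, 3)
      print(len(derivation), "symbols, starting with:", derivation[:40])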

  16. Effect of tDCS on task relevant and irrelevant perceptual learning of complex objects.

    PubMed

    Van Meel, Chayenne; Daniels, Nicky; de Beeck, Hans Op; Baeck, Annelies

    2016-01-01

    During perceptual learning the visual representations in the brain are altered, but the causal role of these changes has not yet been fully characterized. We used transcranial direct current stimulation (tDCS) to investigate the role of higher visual regions in lateral occipital cortex (LO) in perceptual learning with complex objects. We also investigated whether object learning is dependent on the relevance of the objects for the learning task. Participants were trained in two tasks: object recognition using a backward masking paradigm and an orientation judgment task. During both tasks, an object with a red line on top of it was presented in each trial. The crucial difference between both tasks was the relevance of the object: the object was relevant for the object recognition task, but not for the orientation judgment task. During training, half of the participants received anodal tDCS stimulation targeted at the lateral occipital cortex (LO). Afterwards, participants were tested on how well they recognized the trained objects, the irrelevant objects presented during the orientation judgment task, and a set of completely new objects. Participants stimulated with tDCS during training showed larger improvements of performance compared to participants in the sham condition. No learning effect was found for the objects presented during the orientation judgment task. To conclude, this study suggests a causal role of LO in relevant object learning, but given the rather low spatial resolution of tDCS, more research on the specificity of this effect is needed. Further, mere exposure is not sufficient to train object recognition in our paradigm.

  17. Visualization: A Tool for Enhancing Students' Concept Images of Basic Object-Oriented Concepts

    ERIC Educational Resources Information Center

    Cetin, Ibrahim

    2013-01-01

    The purpose of this study was twofold: to investigate students' concept images about class, object, and their relationship and to help them enhance their learning of these notions with a visualization tool. Fifty-six second-year university students participated in the study. To investigate his/her concept images, the researcher developed a survey…

  18. Multiple components of surround modulation in primary visual cortex: multiple neural circuits with multiple functions?

    PubMed Central

    Nurminen, Lauri; Angelucci, Alessandra

    2014-01-01

    The responses of neurons in primary visual cortex (V1) to stimulation of their receptive field (RF) are modulated by stimuli in the RF surround. This modulation is suppressive when the stimuli in the RF and surround are of similar orientation, but less suppressive or facilitatory when they are cross-oriented. Similarly, in human vision surround stimuli selectively suppress the perceived contrast of a central stimulus. Although the properties of surround modulation have been thoroughly characterized in many species, cortical areas and sensory modalities, its role in perception remains unknown. Here we argue that surround modulation in V1 consists of multiple components having different spatio-temporal and tuning properties, generated by different neural circuits and serving different visual functions. One component arises from LGN afferents, is fast, untuned for orientation, and spatially restricted to the surround region nearest to the RF (the near-surround); its function is to normalize V1 cell responses to local contrast. Intra-V1 horizontal connections contribute a slower, narrowly orientation-tuned component to near-surround modulation, whose function is to increase the coding efficiency of natural images in a manner that leads to the extraction of object boundaries. The third component is generated by top-down feedback connections to V1, is fast, broadly orientation-tuned, and extends into the far-surround; its function is to enhance the salience of behaviorally relevant visual features. Far- and near-surround modulation, thus, act as parallel mechanisms: the former quickly detects and guides saccades/attention to salient visual scene locations, the latter segments object boundaries in the scene. PMID:25204770

  19. Representation of Gravity-Aligned Scene Structure in Ventral Pathway Visual Cortex.

    PubMed

    Vaziri, Siavash; Connor, Charles E

    2016-03-21

    The ventral visual pathway in humans and non-human primates is known to represent object information, including shape and identity [1]. Here, we show the ventral pathway also represents scene structure aligned with the gravitational reference frame in which objects move and interact. We analyzed shape tuning of recently described macaque monkey ventral pathway neurons that prefer scene-like stimuli to objects [2]. Individual neurons did not respond to a single shape class, but to a variety of scene elements that are typically aligned with gravity: large planes in the orientation range of ground surfaces under natural viewing conditions, planes in the orientation range of ceilings, and extended convex and concave edges in the orientation range of wall/floor/ceiling junctions. For a given neuron, these elements tended to share a common alignment in eye-centered coordinates. Thus, each neuron integrated information about multiple gravity-aligned structures as they would be seen from a specific eye and head orientation. This eclectic coding strategy provides only ambiguous information about individual structures but explicit information about the environmental reference frame and the orientation of gravity in egocentric coordinates. In the ventral pathway, this could support perceiving and/or predicting physical events involving objects subject to gravity, recognizing object attributes like animacy based on movement not caused by gravity, and/or stabilizing perception of the world against changes in head orientation [3-5]. Our results, like the recent discovery of object weight representation [6], imply that the ventral pathway is involved not just in recognition, but also in physical understanding of objects and scenes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Theoretical investigation of confocal microscopy using an elliptically polarized cylindrical vector laser beam: Visualization of quantum emitters near interfaces

    NASA Astrophysics Data System (ADS)

    Boichenko, Stepan

    2018-04-01

    We theoretically study laser-scanning confocal fluorescence microscopy using elliptically polarized cylindrical vector excitation light as a tool for visualization of arbitrarily oriented single quantum dipole emitters located (1) near planar surfaces enhancing fluorescence, (2) in a thin supported polymer film, (3) in a freestanding polymer film, and (4) in a dielectric planar microcavity. It is shown analytically that by using a tightly focused azimuthally polarized beam, it is possible to exclude completely the orientational dependence of the image intensity maximum of a quantum emitter that absorbs light as a pair of incoherent independent linear dipoles. For linear dipole quantum emitters, the orientational independence degree higher than 0.9 can normally be achieved (this quantity equal to 1 corresponds to completely excluded orientational dependence) if the collection efficiency of the microscope objective and the emitter's total quantum yield are not strongly orientationally dependent. Thus, the visualization of arbitrarily oriented single quantum emitters by means of the studied technique can be performed quite efficiently.

  1. Crowding by a single bar: probing pattern recognition mechanisms in the visual periphery.

    PubMed

    Põder, Endel

    2014-11-06

    Whereas visual crowding does not greatly affect the detection of the presence of simple visual features, it heavily inhibits combining them into recognizable objects. Still, crowding effects have rarely been directly related to general pattern recognition mechanisms. In this study, pattern recognition mechanisms in visual periphery were probed using a single crowding feature. Observers had to identify the orientation of a rotated T presented briefly in a peripheral location. Adjacent to the target, a single bar was presented. The bar was either horizontal or vertical and located in a random direction from the target. It appears that such a crowding bar has very strong and regular effects on the identification of the target orientation. The observer's responses are determined by approximate relative positions of basic visual features; exact image-based similarity to the target is not important. A version of the "standard model" of object recognition with second-order features explains the main regularities of the data. © 2014 ARVO.

  2. Gravity in the Brain as a Reference for Space and Time Perception.

    PubMed

    Lacquaniti, Francesco; Bosco, Gianfranco; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka

    2015-01-01

    Moving and interacting with the environment require a reference for orientation and a scale for calibration in space and time. There is a wide variety of environmental clues and calibrated frames at different locales, but the reference of gravity is ubiquitous on Earth. The pull of gravity on static objects provides a plummet which, together with the horizontal plane, defines a three-dimensional Cartesian frame for visual images. On the other hand, the gravitational acceleration of falling objects can provide a time-stamp on events, because the motion duration of an object accelerated by gravity over a given path is fixed. Indeed, since ancient times, man has been using plumb bobs for spatial surveying, and water clocks or pendulum clocks for time keeping. Here we review behavioral evidence in favor of the hypothesis that the brain is endowed with mechanisms that exploit the presence of gravity to estimate the spatial orientation and the passage of time. Several visual and non-visual (vestibular, haptic, visceral) cues are merged to estimate the orientation of the visual vertical. However, the relative weight of each cue is not fixed, but depends on the specific task. Next, we show that an internal model of the effects of gravity is combined with multisensory signals to time the interception of falling objects, to time the passage through spatial landmarks during virtual navigation, to assess the duration of a gravitational motion, and to judge the naturalness of periodic motion under gravity.

  3. Visual completion from 2D cross-sections: Implications for visual theory and STEM education and practice.

    PubMed

    Gagnier, Kristin Michod; Shipley, Thomas F

    2016-01-01

    Accurately inferring three-dimensional (3D) structure from only a cross-section through that structure is not possible. However, many observers seem to be unaware of this fact. We present evidence for a 3D amodal completion process that may explain this phenomenon and provide new insights into how the perceptual system processes 3D structures. Across four experiments, observers viewed cross-sections of common objects and reported whether regions visible on the surface extended into the object. If they reported that the region extended, they were asked to indicate the orientation of extension or that the 3D shape was unknowable from the cross-section. Across Experiments 1, 2, and 3, participants frequently inferred 3D forms from surface views, showing a specific prior to report that regions in the cross-section extend straight back into the object, with little variance in orientation. In Experiment 3, we examined whether 3D visual inferences made from cross-sections are similar to other cases of amodal completion by examining how the inferences were influenced by observers' knowledge of the objects. Finally, in Experiment 4, we demonstrate that these systematic visual inferences are unlikely to result from demand characteristics or response biases. We argue that these 3D visual inferences have been largely unrecognized by the perception community, and have implications for models of 3D visual completion and science education.

  4. Teaching Object Permanence: An Action Research Study

    ERIC Educational Resources Information Center

    Bruce, Susan M.; Vargas, Claudia

    2013-01-01

    "Object permanence," also known as "object concept" in the field of visual impairment, is one of the most important early developmental milestones. The achievement of object permanence is associated with the onset of representational thought and language. Object permanence is important to orientation, including the recognition of landmarks.…

  5. A Study of the Development of Students' Visualizations of Program State during an Elementary Object-Oriented Programming Course

    ERIC Educational Resources Information Center

    Sajaniemi, Jorma; Kuittinen, Marja; Tikansalo, Taina

    2008-01-01

    Students' understanding of object-oriented (OO) program execution was studied by asking students to draw a picture of a program state at a specific moment. Students were given minimal instructions on what to include in their drawings in order to see what they considered to be central concepts and relationships in program execution. Three drawing…

  6. The relationship between visual attention and visual working memory encoding: A dissociation between covert and overt orienting.

    PubMed

    Tas, A Caglar; Luck, Steven J; Hollingworth, Andrew

    2016-08-01

    There is substantial debate over whether visual working memory (VWM) and visual attention constitute a single system for the selection of task-relevant perceptual information or whether they are distinct systems that can be dissociated when their representational demands diverge. In the present study, we focused on the relationship between visual attention and the encoding of objects into VWM. Participants performed a color change-detection task. During the retention interval, a secondary object, irrelevant to the memory task, was presented. Participants were instructed either to execute an overt shift of gaze to this object (Experiments 1-3) or to attend it covertly (Experiments 4 and 5). Our goal was to determine whether these overt and covert shifts of attention disrupted the information held in VWM. We hypothesized that saccades, which typically introduce a memorial demand to bridge perceptual disruption, would lead to automatic encoding of the secondary object. However, purely covert shifts of attention, which introduce no such demand, would not result in automatic memory encoding. The results supported these predictions. Saccades to the secondary object produced substantial interference with VWM performance, but covert shifts of attention to this object produced no interference with VWM performance. These results challenge prevailing theories that consider attention and VWM to reflect a common mechanism. In addition, they indicate that the relationship between attention and VWM is dependent on the memorial demands of the orienting behavior. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  7. Estimated spectrum adaptive postfilter and the iterative prepost filtering algorithms

    NASA Technical Reports Server (NTRS)

    Linares, Irving (Inventor)

    2004-01-01

    The invention presents the Estimated Spectrum Adaptive Postfilter (ESAP) and the Iterative Prepost Filter (IPF) algorithms. These algorithms model a number of image-adaptive post-filtering and pre-post filtering methods. They are designed to minimize Discrete Cosine Transform (DCT) blocking distortion caused when images are highly compressed with the Joint Photographic Expert Group (JPEG) standard. The ESAP and the IPF techniques of the present invention minimize the mean square error (MSE) to improve the objective and subjective quality of low-bit-rate JPEG gray-scale images while simultaneously enhancing perceptual visual quality with respect to baseline JPEG images.
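    The objective quality measures referenced above (MSE and the derived PSNR) can be computed as below; this snippet only makes the optimization target concrete and does not reproduce the ESAP or IPF filters themselves. The image data are synthetic stand-ins.

      import numpy as np

      def mse(a, b):
          """Mean squared error between two images (the objective the patent minimizes)."""
          a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
          return float(np.mean((a - b) ** 2))

      def psnr(a, b, peak=255.0):
          """Peak signal-to-noise ratio, the usual objective-quality companion of MSE."""
          return 10.0 * np.log10(peak ** 2 / mse(a, b))

      rng = np.random.default_rng(0)
      reference = rng.integers(0, 256, (64, 64)).astype(float)
      degraded = np.clip(reference + rng.normal(0, 10, (64, 64)), 0, 255)   # stand-in for JPEG distortion
      print(f"MSE = {mse(reference, degraded):.1f}, PSNR = {psnr(reference, degraded):.1f} dB")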

  8. Coding the presence of visual objects in a recurrent neural network of visual cortex.

    PubMed

    Zwickel, Timm; Wachtler, Thomas; Eckhorn, Reinhard

    2007-01-01

    Before we can recognize a visual object, our visual system has to segregate it from its background. This requires a fast mechanism for establishing the presence and location of objects independently of their identity. Recently, border-ownership neurons were recorded in monkey visual cortex which might be involved in this task [Zhou, H., Friedmann, H., von der Heydt, R., 2000. Coding of border ownership in monkey visual cortex. J. Neurosci. 20 (17), 6594-6611]. In order to explain the basic mechanisms required for fast coding of object presence, we have developed a neural network model of visual cortex consisting of three stages. Feed-forward and lateral connections support coding of Gestalt properties, including similarity, good continuation, and convexity. Neurons of the highest area respond to the presence of an object and encode its position, invariant of its form. Feedback connections to the lowest area facilitate orientation detectors activated by contours belonging to potential objects, and thus generate the experimentally observed border-ownership property. This feedback control acts fast and significantly improves the figure-ground segregation required for the consecutive task of object recognition.

  9. A case of complex regional pain syndrome with agnosia for object orientation.

    PubMed

    Robinson, Gail; Cohen, Helen; Goebel, Andreas

    2011-07-01

    This systematic investigation of the neurocognitive correlates of complex regional pain syndrome (CRPS) in a single case also reports agnosia for object orientation in the context of persistent CRPS. We report a patient (JW) with severe long-standing CRPS who had no difficulty identifying and naming line drawings of objects presented in 1 of 4 cardinal orientations. In contrast, he was extremely poor at reorienting these objects into the correct upright orientation and in judging whether an object was upright or not. Moreover, JW made orientation errors when copying drawings of objects, and he also showed features of mirror reversal in writing single words and reading single letters. The findings are discussed in relation to accounts of visual processing. Agnosia for object orientation is the term for impaired knowledge of an object's orientation despite good recognition and naming of the same misoriented object. This defect has previously only been reported in patients with major structural brain lesions. The neuroanatomical correlates are discussed. The patient had no structural brain lesion, raising the possibility that nonstructural reorganisation of cortical networks may be responsible for his deficits. Other patients with CRPS may have related neurocognitive defects. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.

  10. Three-quarter view preference for three-dimensional objects in 8-month-old infants.

    PubMed

    Yamashita, Wakayo; Niimi, Ryosuke; Kanazawa, So; Yamaguchi, Masami K; Yokosawa, Kazuhiko

    2014-04-04

    This study examined infants' visual perception of three-dimensional common objects. It has been reported that human adults perceive object images in a view-dependent manner: three-quarter views are often preferred to other views, and the sensitivity to object orientation is lower for three-quarter views than for other views. We tested whether such characteristics were observed in 6- to 8-month-old infants by measuring their preferential looking behavior. In Experiment 1 we examined 190- to 240-day-olds' sensitivity to orientation change and in Experiment 2 we examined these infants' preferential looking for the three-quarter view. The 240-day-old infants showed a pattern of results similar to adults for some objects, while the 190-day-old infants did not. The 240-day-old infants' perception of object view is (partly) similar to that of adults. These results suggest that human visual perception of three-dimensional objects develops at 6 to 8 months of age.

  11. A single-rate context-dependent learning process underlies rapid adaptation to familiar object dynamics.

    PubMed

    Ingram, James N; Howard, Ian S; Flanagan, J Randall; Wolpert, Daniel M

    2011-09-01

    Motor learning has been extensively studied using dynamic (force-field) perturbations. These induce movement errors that result in adaptive changes to the motor commands. Several state-space models have been developed to explain how trial-by-trial errors drive the progressive adaptation observed in such studies. These models have been applied to adaptation involving novel dynamics, which typically occurs over tens to hundreds of trials, and which appears to be mediated by a dual-rate adaptation process. In contrast, when manipulating objects with familiar dynamics, subjects adapt rapidly within a few trials. Here, we apply state-space models to familiar dynamics, asking whether adaptation is mediated by a single-rate or dual-rate process. Previously, we reported a task in which subjects rotate an object with known dynamics. By presenting the object at different visual orientations, adaptation was shown to be context-specific, with limited generalization to novel orientations. Here we show that a multiple-context state-space model, with a generalization function tuned to visual object orientation, can reproduce the time-course of adaptation and de-adaptation as well as the observed context-dependent behavior. In contrast to the dual-rate process associated with novel dynamics, we show that a single-rate process mediates adaptation to familiar object dynamics. The model predicts that during exposure to the object across multiple orientations, there will be a degree of independence for adaptation and de-adaptation within each context, and that the states associated with all contexts will slowly de-adapt during exposure in one particular context. We confirm these predictions in two new experiments. Results of the current study thus highlight similarities and differences in the processes engaged during exposure to novel versus familiar dynamics. In both cases, adaptation is mediated by multiple context-specific representations. In the case of familiar object dynamics, however, the representations can be engaged based on visual context, and are updated by a single-rate process.
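    The trial-by-trial state-space framework referred to here takes a simple generic form, x(n+1) = A*x(n) + B*e(n), with one state for a single-rate process and two states (slow and fast) for a dual-rate process. The sketch below simulates both; the retention and learning parameters are illustrative, not the values fitted in the study.

      import numpy as np

      def simulate(n_trials, perturbation, retention, learning):
          """Generic state-space adaptation: x(n+1) = A*x(n) + B*e(n), with e(n) the
          trial error (perturbation minus the summed adaptive states)."""
          retention = np.atleast_1d(retention).astype(float)   # A, one entry per process
          learning = np.atleast_1d(learning).astype(float)     # B
          x = np.zeros_like(retention)
          net = []
          for _ in range(n_trials):
              error = perturbation - x.sum()
              x = retention * x + learning * error
              net.append(x.sum())
          return np.array(net)

      single = simulate(40, 1.0, retention=0.95, learning=0.5)                 # one fast process
      dual = simulate(40, 1.0, retention=[0.99, 0.6], learning=[0.05, 0.4])    # slow + fast processes
      print("net adaptation after trials 1, 5, 40")
      print("  single-rate:", single[[0, 4, 39]].round(2))
      print("  dual-rate:  ", dual[[0, 4, 39]].round(2))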

  12. Registering myocardial fiber orientations with heart geometry using iterative closest points algorithms

    NASA Astrophysics Data System (ADS)

    Deng, Dongdong; Jiao, Peifeng; Shou, Guofa; Xia, Ling

    2009-10-01

    Myocardial electrical excitation propagation is anisotropic, with the most rapid spread of current along the direction of the long axis of the fiber. Fiber orientation is also an important determinant of myocardial mechanics. So myocardial fiber orientations are very important to heart modeling and simulation. Accurately construction of myocardial fiber orientations, however, is still a challenge. The purpose of this paper is to construct a heart geometrical model with myocardial fiber orientations based on CT and 3D laser scanned pictures. The iterative closest points (ICP) algorithms were used to register the fiber orientations with the heart geometry.
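    A minimal point-to-point ICP iteration of the kind used for such registration is sketched below on synthetic 3D points (nearest-neighbour matching followed by an SVD/Kabsch alignment step); this is a generic illustration, not the authors' implementation.

      import numpy as np
      from scipy.spatial import cKDTree

      def icp(source, target, iterations=30):
          """Rigid point-to-point ICP: nearest-neighbour matching plus SVD (Kabsch) alignment."""
          src = source.copy()
          tree = cKDTree(target)
          R_total, t_total = np.eye(3), np.zeros(3)
          for _ in range(iterations):
              _, idx = tree.query(src)                    # closest target point for each source point
              matched = target[idx]
              src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
              H = (src - src_c).T @ (matched - tgt_c)     # cross-covariance of the matched pairs
              U, _, Vt = np.linalg.svd(H)
              D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
              R = Vt.T @ D @ U.T                          # best-fit rotation (Kabsch)
              t = tgt_c - R @ src_c
              src = src @ R.T + t
              R_total, t_total = R @ R_total, R @ t_total + t
          return R_total, t_total

      # Synthetic check: rotate and translate a random cloud, then recover the transform.
      rng = np.random.default_rng(0)
      cloud = rng.normal(size=(500, 3))
      angle = np.deg2rad(10)
      R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                         [np.sin(angle),  np.cos(angle), 0.0],
                         [0.0, 0.0, 1.0]])
      shift = np.array([0.2, -0.1, 0.05])
      R_est, t_est = icp(cloud, cloud @ R_true.T + shift)
      rot_err = np.degrees(np.arccos(np.clip((np.trace(R_est @ R_true.T) - 1) / 2, -1, 1)))
      print(f"rotation error: {rot_err:.3f} deg, translation error: {np.linalg.norm(t_est - shift):.4f}")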

  13. Looking into the water with oblique head tilting: revision of the aerial binocular imaging of underwater objects.

    PubMed

    Horváth, Gábor; Buchta, Krisztián; Varjú, Dezsö

    2003-06-01

    It is a well-known phenomenon that when we look into the water with two aerial eyes, both the apparent position and the apparent shape of underwater objects are different from the real ones because of refraction at the water surface. Earlier studies of the refraction-distorted structure of the underwater binocular visual field of aerial observers were restricted to either vertically or horizontally oriented eyes. We investigate a generalized version of this problem: We calculate the position of the binocular image point of an underwater object point viewed by two arbitrarily positioned aerial eyes, including oblique orientations of the eyes relative to the flat water surface. Assuming that binocular image fusion is performed by appropriate vergent eye movements to bring the object's image onto the foveas, the structure of the underwater binocular visual field is computed and visualized in different ways as a function of the relative positions of the eyes. We show that a revision of certain earlier treatments of the aerial imaging of underwater objects is necessary. We analyze and correct some widespread erroneous or incomplete representations of this classical geometric optical problem that occur in different textbooks. Improving the theory of aerial binocular imaging of underwater objects, we demonstrate that the structure of the underwater binocular visual field of aerial observers distorted by refraction is more complex than has been thought previously.
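    The underlying refraction geometry can be made concrete with a small numerical sketch: for one eye, the surface crossing point of the light path is found from Fermat's principle, and the apparent direction of the underwater object is the direction of the refracted ray at the eye; the binocular image point discussed in the paper is then the intersection of two such rays, one per eye. The scene dimensions below are arbitrary assumptions, not values from the study.

      import numpy as np
      from scipy.optimize import minimize_scalar

      N_AIR, N_WATER = 1.0, 1.33

      def surface_crossing(eye, obj):
          """Where the light path from an underwater object to an aerial eye crosses the flat
          surface (z = 0), found by minimizing the optical path length (Fermat's principle).
          Works in the vertical plane containing both points; eye = (x, z > 0), obj = (x, z < 0)."""
          ex, ez = eye
          ox, oz = obj
          def optical_path(sx):
              return N_AIR * np.hypot(sx - ex, ez) + N_WATER * np.hypot(sx - ox, oz)
          return minimize_scalar(optical_path, bounds=(min(ex, ox), max(ex, ox)), method="bounded").x

      eye = (0.0, 1.5)    # eye 1.5 m above the water surface
      obj = (1.0, -1.0)   # object 1 m deep, 1 m of horizontal offset
      sx = surface_crossing(eye, obj)
      apparent = np.degrees(np.arctan2(sx - eye[0], eye[1]))              # direction of the refracted ray
      true_dir = np.degrees(np.arctan2(obj[0] - eye[0], eye[1] - obj[1])) # direction of the object itself
      print(f"crossing at x = {sx:.3f} m; apparent {apparent:.1f} deg vs true {true_dir:.1f} deg from vertical")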

  14. Role of orientation reference selection in motion sickness, supplement 2S

    NASA Technical Reports Server (NTRS)

    Peterka, Robert J.; Black, F. Owen

    1987-01-01

    Previous experiments with moving platform posturography have shown that different people have varying abilities to resolve conflicts among vestibular, visual, and proprioceptive sensory signals. The conceptual basis of the present proposal hinges on the similarities between the space motion sickness problem and the sensory orientation reference selection problems associated with benign paroxysmal positional vertigo (BPPV) syndrome. These similarities include both etiology related to abnormal vertical canal-otolith function, and motion sickness initiating events provoked by pitch and roll head movements. The objectives are to explore and quantify the orientation reference selection abilities of subjects and the relation of this selection to motion sickness in humans. The overall objectives are to determine: if motion sickness susceptibility is related to sensory orientation reference selection abilities of subjects; if abnormal vertical canal-otolith function is the source of abnormal posture control strategies and if it can be quantified by vestibular and oculomotor reflex measurements, and if it can be quantified by vestibular and oculomotor reflex measurements; and quantifiable measures of perception of vestibular and visual motion cues can be related to motion sickness susceptibility and to orientation reference selection ability.

  15. Feature-based and object-based attention orientation during short-term memory maintenance.

    PubMed

    Ku, Yixuan

    2015-12-01

    Top-down attention biases the short-term memory (STM) processing at multiple stages. Orienting attention during the maintenance period of STM by a retrospective cue (retro-cue) strengthens the representation of the cued item and improves the subsequent STM performance. In a recent article, Backer et al. (Backer KC, Binns MA, Alain C. J Neurosci 35: 1307-1318, 2015) extended these findings from the visual to the auditory domain and combined electroencephalography to dissociate neural mechanisms underlying feature-based and object-based attention orientation. Both event-related potentials and neural oscillations explained the behavioral benefits of retro-cues and favored the theory that feature-based and object-based attention orientation were independent. Copyright © 2015 the American Physiological Society.

  16. Role of orientation reference selection in motion sickness

    NASA Technical Reports Server (NTRS)

    Peterka, Robert J.; Black, F. Owen

    1987-01-01

    The objectives of this proposal were developed to further explore and quantify the orientation reference selection abilities of subjects and the relation, if any, between motion sickness and orientation reference selection. The overall objectives of this proposal are to determine (1) if motion sickness susceptibility is related to sensory orientation reference selection abilities of subjects, (2) if abnormal vertical canal-otolith function is the source of these abnormal posture control strategies and if it can be quantified by vestibular and oculomotor reflex measurements, and (3) if quantifiable measures of perception of vestibular and visual motion cues can be related to motion sickness susceptibility and to orientation reference selection ability demonstrated by tests which systematically control the sensory information available for orientation.

  17. An adaptive, object oriented strategy for base calling in DNA sequence analysis.

    PubMed Central

    Giddings, M C; Brumley, R L; Haker, M; Smith, L M

    1993-01-01

    An algorithm has been developed for the determination of nucleotide sequence from data produced in fluorescence-based automated DNA sequencing instruments employing the four-color strategy. This algorithm takes advantage of object oriented programming techniques for modularity and extensibility. The algorithm is adaptive in that data sets from a wide variety of instruments and sequencing conditions can be used with good results. Confidence values are provided on the base calls as an estimate of accuracy. The algorithm iteratively employs confidence determinations from several different modules, each of which examines a different feature of the data for accurate peak identification. Modules within this system can be added or removed for increased performance or for application to a different task. In comparisons with commercial software, the algorithm performed well. PMID:8233787

  18. Top-down contextual knowledge guides visual attention in infancy.

    PubMed

    Tummeltshammer, Kristen; Amso, Dima

    2017-10-26

    The visual context in which an object or face resides can provide useful top-down information for guiding attention orienting, object recognition, and visual search. Although infants have demonstrated sensitivity to covariation in spatial arrays, it is presently unclear whether they can use rapidly acquired contextual knowledge to guide attention during visual search. In this eye-tracking experiment, 6- and 10-month-old infants searched for a target face hidden among colorful distracter shapes. Targets appeared in Old or New visual contexts, depending on whether the visual search arrays (defined by the spatial configuration, shape and color of component items in the search display) were repeated or newly generated throughout the experiment. Targets in Old contexts appeared in the same location within the same configuration, such that context covaried with target location. Both 6- and 10-month-olds successfully distinguished between Old and New contexts, exhibiting faster search times, fewer looks at distracters, and more anticipation of targets when contexts repeated. This initial demonstration of contextual cueing effects in infants indicates that they can use top-down information to facilitate orienting during memory-guided visual search. © 2017 John Wiley & Sons Ltd.

  19. Separate Capacities for Storing Different Features in Visual Working Memory

    ERIC Educational Resources Information Center

    Wang, Benchi; Cao, Xiaohua; Theeuwes, Jan; Olivers, Christian N. L.; Wang, Zhiguo

    2017-01-01

    Recent empirical and theoretical work suggests that visual features such as color and orientation can be stored or retrieved independently in visual working memory (VWM), even in cases when they belong to the same object. Yet it remains unclear whether different feature dimensions have their own capacity limits, or whether they compete for shared…

  20. Value associations of irrelevant stimuli modify rapid visual orienting.

    PubMed

    Rutherford, Helena J V; O'Brien, Jennifer L; Raymond, Jane E

    2010-08-01

    In familiar environments, goal-directed visual behavior is often performed in the presence of objects with strong, but task-irrelevant, reward or punishment associations that are acquired through prior, unrelated experience. In a two-phase experiment, we asked whether such stimuli could affect speeded visual orienting in a classic visual orienting paradigm. First, participants learned to associate faces with monetary gains, losses, or no outcomes. These faces then served as brief, peripheral, uninformative cues in an explicitly unrewarded, unpunished, speeded, target localization task. Cues preceded targets by either 100 or 1,500 msec and appeared at either the same or a different location. Regardless of interval, reward-associated cues slowed responding at cued locations, as compared with equally familiar punishment-associated or no-value cues, and had no effect when targets were presented at uncued locations. This localized effect of reward-associated cues is consistent with adaptive models of inhibition of return and suggests rapid, low-level effects of motivation on visual processing.

  1. Estimation of 3D shape from image orientations.

    PubMed

    Fleming, Roland W; Holtmann-Rice, Daniel; Bülthoff, Heinrich H

    2011-12-20

    One of the main functions of vision is to estimate the 3D shape of objects in our environment. Many different visual cues, such as stereopsis, motion parallax, and shading, are thought to be involved. One important cue that remains poorly understood comes from surface texture markings. When a textured surface is slanted in 3D relative to the observer, the surface patterns appear compressed in the retinal image, providing potentially important information about 3D shape. What is not known, however, is how the brain actually measures this information from the retinal image. Here, we explain how the key information could be extracted by populations of cells tuned to different orientations and spatial frequencies, like those found in the primary visual cortex. To test this theory, we created stimuli that selectively stimulate such cell populations, by "smearing" (filtering) images of 2D random noise into specific oriented patterns. We find that the resulting patterns appear vividly 3D, and that increasing the strength of the orientation signals progressively increases the sense of 3D shape, even though the filtering we apply is physically inconsistent with what would occur with a real object. This finding suggests we have isolated key mechanisms used by the brain to estimate shape from texture. Crucially, we also find that adapting the visual system's orientation detectors to orthogonal patterns causes unoriented random noise to look like a specific 3D shape. Together these findings demonstrate a crucial role of orientation detectors in the perception of 3D shape.
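    The "smeared" noise stimuli described above can be generated roughly as follows, by keeping only the Fourier energy of white noise near one frequency orientation; the bandwidth and other parameters are assumptions, and the actual stimulus generation in the study may have differed.

      import numpy as np

      def oriented_noise(size=256, orientation_deg=45.0, bandwidth_deg=15.0, seed=0):
          """Keep only the Fourier energy of white noise whose frequency vector lies near one
          orientation; the image is then 'smeared' along the perpendicular direction."""
          rng = np.random.default_rng(seed)
          noise = rng.standard_normal((size, size))
          fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size), indexing="ij")
          angle = np.arctan2(fy, fx)
          target = np.deg2rad(orientation_deg)
          # Angular distance folded into [0, pi/2]: orientation is defined modulo 180 degrees.
          d = np.abs((angle - target + np.pi / 2) % np.pi - np.pi / 2)
          mask = np.exp(-0.5 * (d / np.deg2rad(bandwidth_deg)) ** 2)
          filtered = np.real(np.fft.ifft2(np.fft.fft2(noise) * mask))
          return filtered / filtered.std()

      pattern = oriented_noise(orientation_deg=45.0)
      print(pattern.shape, "std =", round(float(pattern.std()), 2))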

  2. Pregnenolone sulphate enhances spatial orientation and object discrimination in adult male rats: evidence from a behavioural and electrophysiological study.

    PubMed

    Plescia, Fulvio; Sardo, Pierangelo; Rizzo, Valerio; Cacace, Silvana; Marino, Rosa Anna Maria; Brancato, Anna; Ferraro, Giuseppe; Carletti, Fabio; Cannizzaro, Carla

    2014-01-01

    Neurosteroids can alter neuronal excitability interacting with specific neurotransmitter receptors, thus affecting several functions such as cognition and emotionality. In this study we investigated, in adult male rats, the effects of the acute administration of pregnenolone-sulfate (PREGS) (10mg/kg, s.c.) on cognitive processes using the Can test, a non aversive spatial/visual task which allows the assessment of both spatial orientation-acquisition and object discrimination in a simple and in a complex version of the visual task. Electrophysiological recordings were also performed in vivo, after acute PREGS systemic administration in order to investigate on the neuronal activation in the hippocampus and the perirhinal cortex. Our results indicate that, PREGS induces an improvement in spatial orientation-acquisition and in object discrimination in the simple and in the complex visual task; the behavioural responses were also confirmed by electrophysiological recordings showing a potentiation in the neuronal activity of the hippocampus and the perirhinal cortex. In conclusion, this study demonstrates that PREGS systemic administration in rats exerts cognitive enhancing properties which involve both the acquisition and utilization of spatial information, and object discrimination memory, and also correlates the behavioural potentiation observed to an increase in the neuronal firing of discrete cerebral areas critical for spatial learning and object recognition. This provides further evidence in support of the role of PREGS in exerting a protective and enhancing role on human memory. Copyright © 2013. Published by Elsevier B.V.

  3. Image processing strategies based on saliency segmentation for object recognition under simulated prosthetic vision.

    PubMed

    Li, Heng; Su, Xiaofan; Wang, Jing; Kan, Han; Han, Tingting; Zeng, Yajie; Chai, Xinyu

    2018-01-01

    Current retinal prostheses can only generate low-resolution visual percepts constituted of limited phosphenes which are elicited by an electrode array and with uncontrollable color and restricted grayscale. Under this visual perception, prosthetic recipients can just complete some simple visual tasks, but more complex tasks like face identification/object recognition are extremely difficult. Therefore, it is necessary to investigate and apply image processing strategies for optimizing the visual perception of the recipients. This study focuses on recognition of the object of interest employing simulated prosthetic vision. We used a saliency segmentation method based on a biologically plausible graph-based visual saliency model and a grabCut-based self-adaptive-iterative optimization framework to automatically extract foreground objects. Based on this, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were further utilized to enhance the extracted foreground objects. i) Verification by psychophysical experiments showed that, under simulated prosthetic vision, both strategies had marked advantages over Direct Pixelization in terms of recognition accuracy and efficiency. ii) We also found that recognition performance under the two strategies was tied to the segmentation results and was affected positively by the paired-interrelated objects in the scene. The use of the saliency segmentation method and image processing strategies can automatically extract and enhance foreground objects, and significantly improve object recognition performance for recipients implanted with a high-density implant. Copyright © 2017 Elsevier B.V. All rights reserved.
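    A minimal sketch of the grabCut-plus-pixelization idea is shown below, using OpenCV's generic grabCut initialization from a rectangle rather than the saliency-initialized, self-adaptive iterative framework described in the paper; the file name, rectangle, and phosphene grid size are placeholders.

      import numpy as np
      import cv2

      def foreground_phosphenes(image_bgr, rect, grid=(32, 32), grabcut_iters=5):
          """Extract a foreground object with grabCut, then downsample it to a coarse
          'phosphene' grid as a crude stand-in for simulated prosthetic vision."""
          mask = np.zeros(image_bgr.shape[:2], np.uint8)
          bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
          cv2.grabCut(image_bgr, mask, rect, bgd, fgd, grabcut_iters, cv2.GC_INIT_WITH_RECT)
          fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
          gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY) * fg          # keep foreground pixels only
          return cv2.resize(gray, grid, interpolation=cv2.INTER_AREA)      # coarse phosphene map

      # Hypothetical usage; the file name and the initial rectangle (x, y, w, h) are placeholders.
      # image = cv2.imread("scene.png")
      # phosphenes = foreground_phosphenes(image, rect=(50, 40, 200, 180))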

  4. Visual orientation by the crown-of-thorns starfish ( Acanthaster planci)

    NASA Astrophysics Data System (ADS)

    Petie, Ronald; Hall, Michael R.; Hyldahl, Mia; Garm, Anders

    2016-12-01

    Photoreception in echinoderms has been known for over 200 years, but their visual capabilities remain poorly understood. As has been reported for some asteroids, the crown-of-thorns starfish ( Acanthaster planci) possess a seemingly advanced eye at the tip of each of its 7-23 arms. With such an array of eyes, the starfish can integrate a wide field of view of its surroundings. We hypothesise that, at close range, orientation and directional movements of the crown-of-thorns starfish are visually guided. In this study, the eyes and vision of A. planci were examined by means of light microscopy, electron microscopy, underwater goniometry, electroretinograms and behavioural experiments in the animals' natural habitat. We found that only animals with intact vision could orient to a nearby coral reef, whereas blinded animals, with olfaction intact, walked in random directions. The eye had peak sensitivity in the blue part (470 nm) of the visual spectrum and a narrow, horizontal visual field of approximately 100° wide and 30° high. With approximately 250 ommatidia in each adult compound eye and average interommatidial angles of 8°, crown-of-thorns starfish have the highest spatial resolution of any starfish studied to date. In addition, they have the slowest vision of all animals examined thus far, with a flicker fusion frequency of only 0.6-0.7 Hz. This may be adaptive as fast vision is not required for the detection of stationary objects such as reefs. In short, the eyes seem optimised for detecting large, dark, stationary objects contrasted against an ocean blue background. Our results show that the visual sense of the crown-of-thorns starfish is much more elaborate than has been thus far appreciated and is essential for orientation and localisation of suitable habitats.

  5. Electrical Capacitance Tomography Measurement of the Migration of Ice Frontal Surface in Freezing Soil

    NASA Astrophysics Data System (ADS)

    Liu, J.; Suo, X. M.; Zhou, S. S.; Meng, S. Q.; Chen, S. S.; Mu, H. P.

    2016-12-01

    The tracking of the migration of the ice frontal surface is crucial for the understanding of the underlying physical mechanisms in freezing soil. Owing to its distinct advantages, including non-invasive sensing, high safety, low cost and high data acquisition speed, electrical capacitance tomography (ECT) is considered to be a promising visualization measurement method. In this paper, the ECT method is used to visualize the migration of the ice frontal surface in freezing soil. With the main motivation of improving imaging quality, a loss function with multiple regularizers that incorporate the prior information related to the imaging objects is proposed to cast the ECT image reconstruction task into an optimization problem. An iteration scheme that integrates the superiority of the split Bregman iteration (SBI) method is developed for searching for the optimal solution of the proposed loss function. An unclosed electrodes sensor is designed for satisfying the requirements of practical measurements. An experimental system of one-dimensional freezing in frozen soil is constructed, and the ice frontal surface migration in the freezing process of the wet soil sample containing five percent of moisture is measured. The visualization measurement results validate the feasibility and effectiveness of the ECT visualization method.
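    As a simplified stand-in for the reconstruction step (the paper's multi-regularizer loss and split Bregman iteration are not reproduced here), the sketch below solves a toy linearized ECT problem by gradient descent on a least-squares term plus a smoothness penalty; the sensitivity matrix and permittivity profile are synthetic.

      import numpy as np

      def reconstruct(S, y, lam=0.1, iterations=500):
          """Minimize ||S g - y||^2 + lam * ||D g||^2 by plain gradient descent, where D is a
          finite-difference smoothness prior. A deliberately simplified stand-in for the
          multi-regularizer loss and split Bregman iteration used in the paper."""
          n = S.shape[1]
          D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]            # 1-D first-difference operator
          step = 1.0 / (np.linalg.norm(S, 2) ** 2 + lam * np.linalg.norm(D, 2) ** 2)
          g = np.zeros(n)
          for _ in range(iterations):
              grad = S.T @ (S @ g - y) + lam * (D.T @ (D @ g))
              g -= step * grad
          return g

      # Synthetic test: a random sensitivity matrix and a piecewise-constant permittivity profile.
      rng = np.random.default_rng(0)
      S = rng.normal(size=(40, 80))               # 40 capacitance measurements, 80 pixels
      truth = np.zeros(80)
      truth[30:55] = 1.0                          # region of changed permittivity ("frozen" zone)
      y = S @ truth + rng.normal(0, 0.01, 40)
      g = reconstruct(S, y)
      print("relative reconstruction error:", round(float(np.linalg.norm(g - truth) / np.linalg.norm(truth)), 3))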

  6. Supralinear and Supramodal Integration of Visual and Tactile Signals in Rats: Psychophysics and Neuronal Mechanisms.

    PubMed

    Nikbakht, Nader; Tafreshiha, Azadeh; Zoccolan, Davide; Diamond, Mathew E

    2018-02-07

    To better understand how object recognition can be triggered independently of the sensory channel through which information is acquired, we devised a task in which rats judged the orientation of a raised, black and white grating. They learned to recognize two categories of orientation: 0° ± 45° ("horizontal") and 90° ± 45° ("vertical"). Each trial required a visual (V), a tactile (T), or a visual-tactile (VT) discrimination; VT performance was better than that predicted by optimal linear combination of V and T signals, indicating synergy between sensory channels. We examined posterior parietal cortex (PPC) and uncovered key neuronal correlates of the behavioral findings: PPC carried both graded information about object orientation and categorical information about the rat's upcoming choice; single neurons exhibited identical responses under the three modality conditions. Finally, a linear classifier of neuronal population firing replicated the behavioral findings. Taken together, these findings suggest that PPC is involved in the supramodal processing of shape. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  7. Spatial and Feature-Based Attention in a Layered Cortical Microcircuit Model

    PubMed Central

    Wagatsuma, Nobuhiko; Potjans, Tobias C.; Diesmann, Markus; Sakai, Ko; Fukai, Tomoki

    2013-01-01

    Directing attention to the spatial location or the distinguishing feature of a visual object modulates neuronal responses in the visual cortex and the stimulus discriminability of subjects. However, the spatial and feature-based modes of attention differently influence visual processing by changing the tuning properties of neurons. Intriguingly, neurons' tuning curves are modulated similarly across different visual areas under both these modes of attention. Here, we explored the mechanism underlying the effects of these two modes of visual attention on the orientation selectivity of visual cortical neurons. To do this, we developed a layered microcircuit model. This model describes multiple orientation-specific microcircuits sharing their receptive fields and consisting of layers 2/3, 4, 5, and 6. These microcircuits represent a functional grouping of cortical neurons and mutually interact via lateral inhibition and excitatory connections between groups with similar selectivity. The individual microcircuits receive bottom-up visual stimuli and top-down attention in different layers. A crucial assumption of the model is that feature-based attention activates orientation-specific microcircuits for the relevant feature selectively, whereas spatial attention activates all microcircuits homogeneously, irrespective of their orientation selectivity. Consequently, our model simultaneously accounts for the multiplicative scaling of neuronal responses in spatial attention and the additive modulations of orientation tuning curves in feature-based attention, which have been observed widely in various visual cortical areas. Simulations of the model predict contrasting differences between excitatory and inhibitory neurons in the two modes of attentional modulations. Furthermore, the model replicates the modulation of the psychophysical discriminability of visual stimuli in the presence of external noise. Our layered model with a biologically suggested laminar structure describes the basic circuit mechanism underlying the attention-mode specific modulations of neuronal responses and visual perception. PMID:24324628
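
    A schematic way to write the two modulation patterns the model reproduces (an illustrative parameterization, not the model's actual equations) is:

```latex
% f(\theta): baseline orientation tuning curve; g: gain factor; a: additive offset.
\[
  r_{\mathrm{spatial}}(\theta) \;=\; g\, f(\theta)
  \quad \text{(multiplicative scaling under spatial attention)},
\]
\[
  r_{\mathrm{feature}}(\theta) \;=\; f(\theta) + a
  \quad \text{(additive shift of the tuning curve under feature-based attention)}.
\]
```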

  8. Microsoft Repository Version 2 and the Open Information Model.

    ERIC Educational Resources Information Center

    Bernstein, Philip A.; Bergstraesser, Thomas; Carlson, Jason; Pal, Shankar; Sanders, Paul; Shutt, David

    1999-01-01

    Describes the programming interface and implementation of the repository engine and the Open Information Model for Microsoft Repository, an object-oriented meta-data management facility that ships in Microsoft Visual Studio and Microsoft SQL Server. Discusses Microsoft's component object model, object manipulation, queries, and information…

  9. Visual tracking using neuromorphic asynchronous event-based cameras.

    PubMed

    Ni, Zhenjiang; Ieng, Sio-Hoi; Posch, Christoph; Régnier, Stéphane; Benosman, Ryad

    2015-04-01

    This letter presents a novel, computationally efficient and robust pattern tracking method based on time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous, temporal events. These events encode temporal contrast and intensity locally in space and time. We show that the sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometries, similarities, and affine distortions and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time that is currently underexploited by most artificial vision systems, the method we present is able to solve ambiguous cases of object occlusion that classical frame-based techniques handle poorly.
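
    The letter's estimator is not reproduced here, but the per-event principle can be sketched as follows: each event incrementally corrects the current transformation estimate using its distance to the nearest transformed model point. The sketch below only updates a translation and uses hypothetical inputs; the actual method also handles rotation, scaling and affine terms.

```python
# Illustrative event-driven tracker: each incoming event nudges the estimated
# translation of a 2-D point model toward the event location (a gradient-style
# update on the nearest-model-point distance).
import numpy as np

def track_events(events, model_pts, lr=0.05):
    """events: iterable of (x, y, t) tuples; model_pts: (N, 2) array."""
    translation = np.zeros(2)
    for x, y, _t in events:
        e = np.array([x, y], dtype=float)
        transformed = model_pts + translation        # current model pose
        i = np.argmin(np.sum((transformed - e) ** 2, axis=1))
        residual = e - transformed[i]                # mismatch for this event
        translation += lr * residual                 # incremental correction
    return translation
```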

  10. An insect-inspired model for visual binding II: functional analysis and visual attention.

    PubMed

    Northcutt, Brandon D; Higgins, Charles M

    2017-04-01

    We have developed a neural network model capable of performing visual binding inspired by neuronal circuitry in the optic glomeruli of flies: a brain area that lies just downstream of the optic lobes where early visual processing is performed. This visual binding model is able to detect objects in dynamic image sequences and bind together their respective characteristic visual features-such as color, motion, and orientation-by taking advantage of their common temporal fluctuations. Visual binding is represented in the form of an inhibitory weight matrix which learns over time which features originate from a given visual object. In the present work, we show that information represented implicitly in this weight matrix can be used to explicitly count the number of objects present in the visual image, to enumerate their specific visual characteristics, and even to create an enhanced image in which one particular object is emphasized over others, thus implementing a simple form of visual attention. Further, we present a detailed analysis which reveals the function and theoretical limitations of the visual binding network and in this context describe a novel network learning rule which is optimized for visual binding.
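
    As a rough illustration of the idea (not the network's learning rule), feature channels can be grouped by their shared temporal fluctuations and objects counted as connected components of the resulting binding graph; the correlation threshold below is an arbitrary assumption.

```python
# Sketch: group feature channels by common temporal fluctuations and count
# objects as connected components of the resulting binding graph. This mimics
# what the inhibitory weight matrix encodes implicitly; it is not the model's
# actual learning rule.
import numpy as np

def bind_and_count(signals, threshold=0.7):
    """signals: (n_channels, n_timesteps) array of feature-channel activity."""
    corr = np.corrcoef(signals)                  # pairwise temporal correlation
    adj = corr > threshold                       # binding graph (boolean)
    n = adj.shape[0]
    labels = -np.ones(n, dtype=int)
    n_objects = 0
    for seed in range(n):                        # simple connected components
        if labels[seed] >= 0:
            continue
        stack = [seed]
        labels[seed] = n_objects
        while stack:
            u = stack.pop()
            for v in range(n):
                if adj[u, v] and labels[v] < 0:
                    labels[v] = n_objects
                    stack.append(v)
        n_objects += 1
    return n_objects, labels
```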

  11. Overview of EVE - the event visualization environment of ROOT

    NASA Astrophysics Data System (ADS)

    Tadel, Matevž

    2010-04-01

    EVE is a high-level visualization library using ROOT's data-processing, GUI and OpenGL interfaces. It is designed as a framework for object management offering hierarchical data organization, object interaction and visualization via GUI and OpenGL representations. Automatic creation of 2D projected views is also supported. On the other hand, it can serve as an event visualization toolkit satisfying most HEP requirements: visualization of geometry, simulated and reconstructed data such as hits, clusters, tracks and calorimeter information. Special classes are available for visualization of raw-data. Object-interaction layer allows for easy selection and highlighting of objects and their derived representations (projections) across several views (3D, Rho-Z, R-Phi). Object-specific tooltips are provided in both GUI and GL views. The visual-configuration layer of EVE is built around a data-base of template objects that can be applied to specific instances of visualization objects to ensure consistent object presentation. The data-base can be retrieved from a file, edited during the framework operation and stored to file. EVE prototype was developed within the ALICE collaboration and has been included into ROOT in December 2007. Since then all EVE components have reached maturity. EVE is used as the base of AliEve visualization framework in ALICE, Firework physics-oriented event-display in CMS, and as the visualization engine of FairRoot in FAIR.

  12. An object oriented fully 3D tomography visual toolkit.

    PubMed

    Agostinelli, S; Paoli, G

    2001-04-01

    In this paper we present a modern object-oriented Component Object Model (COM) C++ toolkit dedicated to fully 3D cone-beam tomography. The toolkit allows the display and visual manipulation of analytical phantoms, projection sets and volumetric data through a standard Windows graphical user interface. Data input/output is performed using proprietary file formats, but import/export of industry-standard file formats, including raw binary, Windows bitmap and AVI, ACR/NEMA DICOM 3 and NCSA HDF, is available. At the time of writing, built-in data manipulators include a basic phantom ray-tracer and a Matrox Genesis frame-grabbing facility. A COM plug-in interface is provided for user-defined custom backprojector algorithms: a simple Feldkamp ActiveX control, including source code, is provided as an example; our fast Feldkamp plug-in is also available.

  13. Three-Dimensional Geometry of Collagenous Tissues by Second Harmonic Polarimetry.

    PubMed

    Reiser, Karen; Stoller, Patrick; Knoesen, André

    2017-06-01

    Collagen is a biological macromolecule capable of second harmonic generation, allowing label-free detection in tissues; in addition, molecular orientation can be determined from the polarization dependence of the second harmonic signal. Previously we reported that in-plane orientation of collagen fibrils could be determined by modulating the polarization angle of the laser during scanning. We have now extended this method so that out-of-plane orientation angles can be determined at the same time, allowing visualization of the 3-dimensional structure of collagenous tissues. This approach offers advantages compared with other methods for determining out-of-plane orientation. First, the orientation angles are directly calculated from the polarimetry data obtained in a single scan, while other reported methods require data from multiple scans, use of iterative optimization methods, application of fitting algorithms, or extensive post-optical processing. Second, our method does not require highly specialized instrumentation, and thus can be adapted for use in almost any nonlinear optical microscopy setup. It is suitable for both basic and clinical applications. We present three-dimensional images of structurally complex collagenous tissues that illustrate the power of such 3-dimensional analyses to reveal the architecture of biological structures.

  14. Three-Dimensional Geometry of Collagenous Tissues by Second Harmonic Polarimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiser, Karen; Stoller, Patrick; Knoesen, André

    Collagen is a biological macromolecule capable of second harmonic generation, allowing label-free detection in tissues; in addition, molecular orientation can be determined from the polarization dependence of the second harmonic signal. Previously we reported that in-plane orientation of collagen fibrils could be determined by modulating the polarization angle of the laser during scanning. We have now extended this method so that out-of-plane orientation angles can be determined at the same time, allowing visualization of the 3-dimensional structure of collagenous tissues. This approach offers advantages compared with other methods for determining out-of-plane orientation. First, the orientation angles are directly calculated from the polarimetry data obtained in a single scan, while other reported methods require data from multiple scans, use of iterative optimization methods, application of fitting algorithms, or extensive post-optical processing. Second, our method does not require highly specialized instrumentation, and thus can be adapted for use in almost any nonlinear optical microscopy setup. It is suitable for both basic and clinical applications. We present three-dimensional images of structurally complex collagenous tissues that illustrate the power of such 3-dimensional analyses to reveal the architecture of biological structures.

  15. Three-Dimensional Geometry of Collagenous Tissues by Second Harmonic Polarimetry

    DOE PAGES

    Reiser, Karen; Stoller, Patrick; Knoesen, André

    2017-06-01

    Collagen is a biological macromolecule capable of second harmonic generation, allowing label-free detection in tissues; in addition, molecular orientation can be determined from the polarization dependence of the second harmonic signal. Previously we reported that in-plane orientation of collagen fibrils could be determined by modulating the polarization angle of the laser during scanning. We have now extended this method so that out-of-plane orientation angles can be determined at the same time, allowing visualization of the 3-dimensional structure of collagenous tissues. This approach offers advantages compared with other methods for determining out-of-plane orientation. First, the orientation angles are directly calculated from the polarimetry data obtained in a single scan, while other reported methods require data from multiple scans, use of iterative optimization methods, application of fitting algorithms, or extensive post-optical processing. Second, our method does not require highly specialized instrumentation, and thus can be adapted for use in almost any nonlinear optical microscopy setup. It is suitable for both basic and clinical applications. We present three-dimensional images of structurally complex collagenous tissues that illustrate the power of such 3-dimensional analyses to reveal the architecture of biological structures.

  16. A stopping criterion to halt iterations at the Richardson-Lucy deconvolution of radiographic images

    NASA Astrophysics Data System (ADS)

    Almeida, G. L.; Silvani, M. I.; Souza, E. S.; Lopes, R. T.

    2015-07-01

    Radiographic images, like any experimentally acquired ones, are affected by spoiling agents that degrade their final quality. The degradation caused by agents of a systematic character can be reduced by some kind of treatment, such as an iterative deconvolution. This approach requires two parameters, namely the system resolution and the best number of iterations, in order to achieve the best final image. This work proposes a novel procedure to estimate the best number of iterations, which replaces cumbersome visual inspection by a comparison of numbers. These numbers are deduced from the image histograms, taking into account the global difference G between them for two subsequent iterations. The developed algorithm, including a Richardson-Lucy deconvolution procedure, has been embodied in a Fortran program capable of plotting the first derivative of G as the processing progresses and of stopping it automatically when this derivative, within the data dispersion, reaches zero. The radiograph of a specially chosen object, acquired with thermal neutrons from the Argonauta research reactor at the Instituto de Engenharia Nuclear - CNEN, Rio de Janeiro, Brazil, has undergone this treatment with fair results.
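
    A minimal sketch of the idea is given below: a standard Richardson-Lucy update combined with a stopping rule based on the change of a global histogram difference G between consecutive iterates. The exact definition of G and of the derivative test in the Fortran program is not given in the abstract, so the choices below (summed absolute bin differences, a relative tolerance) are assumptions.

```python
# Richardson-Lucy deconvolution with a histogram-based stopping rule.
# G is taken here as the summed absolute difference between the grey-level
# histograms of two consecutive iterates; iteration stops once G levels off.
import numpy as np
from scipy.signal import fftconvolve

def rl_deconvolve(image, psf, max_iter=200, tol=1e-4, bins=256):
    """image, psf: 2-D arrays; returns (estimate, iterations used)."""
    psf = psf / psf.sum()
    psf_m = psf[::-1, ::-1]                      # mirrored PSF
    est = np.full_like(image, image.mean(), dtype=float)
    lo, hi = image.min(), image.max()
    prev_hist = np.histogram(est, bins=bins, range=(lo, hi))[0]
    prev_G = None
    for it in range(max_iter):
        conv = np.clip(fftconvolve(est, psf, mode="same"), 1e-12, None)
        est = est * fftconvolve(image / conv, psf_m, mode="same")  # RL update
        hist = np.histogram(est, bins=bins, range=(lo, hi))[0]
        G = np.abs(hist - prev_hist).sum()       # global histogram difference
        if prev_G is not None and abs(G - prev_G) < tol * max(prev_G, 1.0):
            break                                # 1st derivative of G ~ 0
        prev_hist, prev_G = hist, G
    return est, it + 1
```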

  17. Task-set inertia and memory-consolidation bottleneck in dual tasks.

    PubMed

    Koch, Iring; Rumiati, Raffaella I

    2006-11-01

    Three dual-task experiments examined the influence of processing a briefly presented visual object for deferred verbal report on performance in an unrelated auditory-manual reaction time (RT) task. RT was increased at short stimulus-onset asynchronies (SOAs) relative to long SOAs, showing that memory consolidation processes can produce a functional processing bottleneck in dual-task performance. In addition, the experiments manipulated the spatial compatibility of the orientation of the visual object and the side of the speeded manual response. This cross-task compatibility produced relative RT benefits only when the instruction for the visual task emphasized overlap at the level of response codes across the task sets (Experiment 1). However, once the effective task set was in place, it continued to produce cross-task compatibility effects even in single-task situations ("ignore" trials in Experiment 2) and when instructions for the visual task did not explicitly require spatial coding of object orientation (Experiment 3). Taken together, the data suggest a considerable degree of task-set inertia in dual-task performance, which is also reinforced by finding costs of switching task sequences (e.g., AC --> BC vs. BC --> BC) in Experiment 3.

  18. Obligatory encoding of task-irrelevant features depletes working memory resources.

    PubMed

    Marshall, Louise; Bays, Paul M

    2013-02-18

    Selective attention is often considered the "gateway" to visual working memory (VWM). However, the extent to which we can voluntarily control which of an object's features enter memory remains subject to debate. Recent research has converged on the concept of VWM as a limited commodity distributed between elements of a visual scene. Consequently, as memory load increases, the fidelity with which each visual feature is stored decreases. Here we used changes in recall precision to probe whether task-irrelevant features were encoded into VWM when individuals were asked to store specific feature dimensions. Recall precision for both color and orientation was significantly enhanced when task-irrelevant features were removed, but knowledge of which features would be probed provided no advantage over having to memorize both features of all items. Next, we assessed the effect an interpolated orientation-or color-matching task had on the resolution with which orientations in a memory array were stored. We found that the presence of orientation information in the second array disrupted memory of the first array. The cost to recall precision was identical whether the interfering features had to be remembered, attended to, or could be ignored. Therefore, it appears that storing, or merely attending to, one feature of an object is sufficient to promote automatic encoding of all its features, depleting VWM resources. However, the precision cost was abolished when the match task preceded the memory array. So, while encoding is automatic, maintenance is voluntary, allowing resources to be reallocated to store new visual information.

  19. Object-oriented programming for the biosciences.

    PubMed

    Wiechert, W; Joksch, B; Wittig, R; Hartbrich, A; Höner, T; Möllney, M

    1995-10-01

    The development of software systems for the biosciences is always closely connected to experimental practice. Programs must be able to handle the inherent complexity and heterogeneous structure of biological systems in combination with the measuring equipment. Moreover, a high degree of flexibility is required to treat rapidly changing experimental conditions. Object-oriented methodology seems to be well suited for this purpose. It enables an evolutionary approach to software development that still maintains a high degree of modularity. This paper presents experience with object-oriented technology gathered during several years of programming in the fields of bioprocess development and metabolic engineering. It concentrates on the aspects of experimental support, data analysis, interaction and visualization. Several examples are presented and discussed in the general context of the experimental cycle of knowledge acquisition, thus pointing out the benefits and problems of object-oriented technology in the specific application field of the biosciences. Finally, some strategies for future development are described.

  20. Temporal resolution of orientation-defined texture segregation: a VEP study.

    PubMed

    Lachapelle, Julie; McKerral, Michelle; Jauffret, Colin; Bach, Michael

    2008-09-01

    Orientation is one of the visual dimensions that subserve figure-ground discrimination. A spatial gradient in orientation leads to "texture segregation", which is thought to reflect concurrent parallel processing across the visual field, without scanning. In the visual-evoked potential (VEP), a component can be isolated which is related to texture segregation ("tsVEP"). Our objective was to evaluate the temporal frequency dependence of the tsVEP in order to compare the processing speed of low-level features (e.g., orientation, using the VEP, here denoted llVEP) with that of texture segregation, because of a recent controversy in the literature in that regard. Visual-evoked potentials (VEPs) were recorded in seven normal adults. Oriented line segments of 0.1 degrees x 0.8 degrees at 100% contrast were presented in four different arrangements: either oriented in parallel for two homogeneous stimuli (from which the llVEP was obtained) or with a 90 degrees orientation gradient for two textured ones (from which the texture VEP was obtained). The orientation texture condition was presented at eight different temporal frequencies ranging from 7.5 to 45 Hz. Fourier analysis was used to isolate low-level components at the pattern-change frequency and texture-segregation components at half that frequency. For all subjects, the high-cutoff frequency was lower for the tsVEP than for the llVEP, on average 12 Hz vs. 17 Hz (P = 0.017). The results suggest that the processing of feature gradients to extract texture segregation requires additional processing time, resulting in a lower fusion frequency.
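
    The spectral-analysis step can be illustrated with a short sketch that extracts the amplitude at the pattern-change frequency (llVEP) and at half that frequency (tsVEP) from a steady-state VEP trace; windowing and averaging details are simplified here and the function name is hypothetical.

```python
# Isolate the low-level component at the pattern-change frequency F and the
# texture-segregation component at F/2 from a steady-state VEP trace.
import numpy as np

def vep_components(trace, fs, pattern_freq):
    """trace: 1-D EEG array; fs: sampling rate (Hz); pattern_freq: F (Hz)."""
    n = len(trace)
    spectrum = np.fft.rfft(trace * np.hanning(n))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amp = 2.0 * np.abs(spectrum) / n
    ll_amp = amp[np.argmin(np.abs(freqs - pattern_freq))]        # llVEP at F
    ts_amp = amp[np.argmin(np.abs(freqs - pattern_freq / 2.0))]  # tsVEP at F/2
    return ll_amp, ts_amp
```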

  1. Coordinate Transformations in Object Recognition

    ERIC Educational Resources Information Center

    Graf, Markus

    2006-01-01

    A basic problem of visual perception is how human beings recognize objects after spatial transformations. Three central classes of findings have to be accounted for: (a) Recognition performance varies systematically with orientation, size, and position; (b) recognition latencies are sequentially additive, suggesting analogue transformation…

  2. The Effects of Context and Attention on Spiking Activity in Human Early Visual Cortex.

    PubMed

    Self, Matthew W; Peters, Judith C; Possel, Jessy K; Reithler, Joel; Goebel, Rainer; Ris, Peterjan; Jeurissen, Danique; Reddy, Leila; Claus, Steven; Baayen, Johannes C; Roelfsema, Pieter R

    2016-03-01

    Here we report the first quantitative analysis of spiking activity in human early visual cortex. We recorded multi-unit activity from two electrodes in area V2/V3 of a human patient implanted with depth electrodes as part of her treatment for epilepsy. We observed well-localized multi-unit receptive fields with tunings for contrast, orientation, spatial frequency, and size, similar to those reported in the macaque. We also observed pronounced gamma oscillations in the local-field potential that could be used to estimate the underlying spiking response properties. Spiking responses were modulated by visual context and attention. We observed orientation-tuned surround suppression: responses were suppressed by image regions with a uniform orientation and enhanced by orientation contrast. Additionally, responses were enhanced on regions that perceptually segregated from the background, indicating that neurons in the human visual cortex are sensitive to figure-ground structure. Spiking responses were also modulated by object-based attention. When the patient mentally traced a curve through the neurons' receptive fields, the accompanying shift of attention enhanced neuronal activity. These results demonstrate that the tuning properties of cells in the human early visual cortex are similar to those in the macaque and that responses can be modulated by both contextual factors and behavioral relevance. Our results, therefore, imply that the macaque visual system is an excellent model for the human visual cortex.

  3. The Effects of Context and Attention on Spiking Activity in Human Early Visual Cortex

    PubMed Central

    Reithler, Joel; Goebel, Rainer; Ris, Peterjan; Jeurissen, Danique; Reddy, Leila; Claus, Steven; Baayen, Johannes C.; Roelfsema, Pieter R.

    2016-01-01

    Here we report the first quantitative analysis of spiking activity in human early visual cortex. We recorded multi-unit activity from two electrodes in area V2/V3 of a human patient implanted with depth electrodes as part of her treatment for epilepsy. We observed well-localized multi-unit receptive fields with tunings for contrast, orientation, spatial frequency, and size, similar to those reported in the macaque. We also observed pronounced gamma oscillations in the local-field potential that could be used to estimate the underlying spiking response properties. Spiking responses were modulated by visual context and attention. We observed orientation-tuned surround suppression: responses were suppressed by image regions with a uniform orientation and enhanced by orientation contrast. Additionally, responses were enhanced on regions that perceptually segregated from the background, indicating that neurons in the human visual cortex are sensitive to figure-ground structure. Spiking responses were also modulated by object-based attention. When the patient mentally traced a curve through the neurons’ receptive fields, the accompanying shift of attention enhanced neuronal activity. These results demonstrate that the tuning properties of cells in the human early visual cortex are similar to those in the macaque and that responses can be modulated by both contextual factors and behavioral relevance. Our results, therefore, imply that the macaque visual system is an excellent model for the human visual cortex. PMID:27015604

  4. SoundView: an auditory guidance system based on environment understanding for the visually impaired people.

    PubMed

    Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao

    2009-01-01

    Without visual information, blind people face various hardships with shopping, reading, finding objects, and so on. We therefore developed a portable auditory guidance system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit and an earphone, working with built-in customizable auditory coding algorithms. Employing environment understanding techniques, SoundView processes the images from the camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals for the blind user through an earphone. The user is able to recognize the type, motion state and location of objects of interest with the help of SoundView. Compared with other visual assistance techniques, SoundView is object-oriented and has the advantages of low cost, small size, light weight, low power consumption and easy customization.
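
    The customizable auditory coding scheme itself is not described in the abstract; purely as an illustration of the concept, a detected object's horizontal position could be rendered as a stereo level difference, as in the toy sketch below (all parameters are assumptions).

```python
# Toy illustration of object-to-stereo mapping: the horizontal position of a
# detected object (-1 = far left, +1 = far right of the camera view) sets the
# left/right level balance of a short tone. Not the actual SoundView coding.
import numpy as np

def object_to_stereo(position, freq=880.0, dur=0.3, fs=16000):
    t = np.arange(int(dur * fs)) / fs
    tone = np.sin(2.0 * np.pi * freq * t)
    pan = (position + 1.0) / 2.0                 # 0 = left, 1 = right
    left = np.sqrt(1.0 - pan) * tone             # constant-power panning
    right = np.sqrt(pan) * tone
    return np.stack([left, right], axis=1)       # (samples, 2) stereo buffer
```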

  5. Numerical optimization of actuator trajectories for ITER hybrid scenario profile evolution

    NASA Astrophysics Data System (ADS)

    van Dongen, J.; Felici, F.; Hogeweij, G. M. D.; Geelen, P.; Maljaars, E.

    2014-12-01

    Optimal actuator trajectories for an ITER hybrid scenario ramp-up are computed using a numerical optimization method. For both L-mode and H-mode scenarios, the time trajectory of plasma current, EC heating and current drive distribution is determined that minimizes a chosen cost function while satisfying constraints. The cost function is formulated to reflect two desired properties of the plasma q profile at the end of the ramp-up. The first objective is to maximize the ITG turbulence threshold by maximizing the volume-averaged s/q ratio. The second objective is to achieve a stationary q profile by having a flat loop voltage profile. Actuator and physics-derived constraints are included, imposing limits on plasma current, ramp rates, internal inductance and the q profile. This numerical method uses the fast control-oriented plasma profile evolution code RAPTOR, which is successfully benchmarked against more complete CRONOS simulations for L-mode and H-mode ITER hybrid scenarios. It is shown that the optimized trajectories computed using RAPTOR also result in an improved ramp-up scenario for CRONOS simulations using the same input trajectories. Furthermore, the optimal trajectories are shown to vary depending on the precise timing of the L-H transition.
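
    RAPTOR and CRONOS are not reproduced here, but the structure of the optimization can be sketched with a toy surrogate: actuator waveforms are parameterized by a few time knots, and a scalar cost trades off a large volume-averaged s/q against a non-flat loop-voltage proxy, subject to simple bounds. Everything in the surrogate model below is a placeholder, not plasma physics.

```python
# Toy version of the trajectory-optimization setup: plasma current and EC
# power waveforms are parameterized by values at N_KNOTS time knots, and a
# surrogate cost rewards a large mean s/q and a flat loop-voltage proxy.
import numpy as np
from scipy.optimize import minimize

N_KNOTS = 5

def surrogate_plasma(knots):
    """Stand-in model: map actuator knots to (mean s/q, loop-voltage spread)."""
    ip, ec = knots[:N_KNOTS], knots[N_KNOTS:]
    s_over_q = 0.5 + 0.1 * ec.mean() - 0.02 * np.abs(np.diff(ip)).sum()
    vloop_spread = 0.05 * np.abs(np.diff(ec)).sum() + 0.01 / (ip[-1] + 0.1)
    return s_over_q, vloop_spread

def cost(knots):
    s_over_q, vloop_spread = surrogate_plasma(knots)
    return -s_over_q + 10.0 * vloop_spread       # maximize s/q, flatten V_loop

x0 = np.concatenate([np.linspace(0.3, 1.0, N_KNOTS),   # Ip ramp [arb. units]
                     np.full(N_KNOTS, 0.5)])           # EC power [arb. units]
bounds = [(0.1, 1.0)] * N_KNOTS + [(0.0, 1.0)] * N_KNOTS
# A realistic setup would add ramp-rate and physics constraints as well.
res = minimize(cost, x0, bounds=bounds, method="L-BFGS-B")
```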

  6. JVIEW Visualization for Virtual Airspace Modeling and Simulation

    DTIC Science & Technology

    2009-04-01

    Table-of-contents fragments recovered from the report: 4.2.2 Translucency; 4.3 Translucency Used to Display Multiple Visualization Elements; Figure 26 - Textual Labels Feature. Development of the JView API has been done by Jason Moore and other AFRL/RISF staff and support personnel. JView relies on concrete Object Oriented Design.

  7. AztecOO user guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heroux, Michael Allen

    2004-07-01

    The Trilinos™ Project is an effort to facilitate the design, development, integration and ongoing support of mathematical software libraries. AztecOO™ is a package within Trilinos that enables the use of the Aztec solver library [19] with Epetra™ [13] objects. AztecOO provides access to Aztec preconditioners and solvers by implementing the Aztec 'matrix-free' interface using Epetra. While Aztec is written in C and procedure-oriented, AztecOO is written in C++ and is object-oriented. In addition to providing access to Aztec capabilities, AztecOO also provides some significant new functionality. In particular, it provides an extensible status-testing capability that allows the expression of sophisticated stopping criteria, as is needed in production use of iterative solvers. AztecOO also provides mechanisms for using Ifpack [2], ML [20] and AztecOO itself as preconditioners.

  8. The Role of Visual Cues in Microgravity Spatial Orientation

    NASA Technical Reports Server (NTRS)

    Oman, Charles M.; Howard, Ian P.; Smith, Theodore; Beall, Andrew C.; Natapoff, Alan; Zacher, James E.; Jenkin, Heather L.

    2003-01-01

    In weightlessness, astronauts must rely on vision to remain spatially oriented. Although gravitational down cues are missing, most astronauts maintain a subjective vertical -a subjective sense of which way is up. This is evidenced by anecdotal reports of crewmembers feeling upside down (inversion illusions) or feeling that a floor has become a ceiling and vice versa (visual reorientation illusions). Instability in the subjective vertical direction can trigger disorientation and space motion sickness. On Neurolab, a virtual environment display system was used to conduct five interrelated experiments, which quantified: (a) how the direction of each person's subjective vertical depends on the orientation of the surrounding visual environment, (b) whether rolling the virtual visual environment produces stronger illusions of circular self-motion (circular vection) and more visual reorientation illusions than on Earth, (c) whether a virtual scene moving past the subject produces a stronger linear self-motion illusion (linear vection), and (d) whether deliberate manipulation of the subjective vertical changes a crewmember's interpretation of shading or the ability to recognize objects. None of the crew's subjective vertical indications became more independent of environmental cues in weightlessness. Three who were either strongly dependent on or independent of stationary visual cues in preflight tests remained so inflight. One other became more visually dependent inflight, but recovered postflight. Susceptibility to illusions of circular self-motion increased in flight. The time to the onset of linear self-motion illusions decreased and the illusion magnitude significantly increased for most subjects while free floating in weightlessness. These decreased toward one-G levels when the subject 'stood up' in weightlessness by wearing constant force springs. For several subjects, changing the relative direction of the subjective vertical in weightlessness-either by body rotation or by simply cognitively initiating a visual reorientation-altered the illusion of convexity produced when viewing a flat, shaded disc. It changed at least one person's ability to recognize previously presented two-dimensional shapes. Overall, results show that most astronauts become more dependent on dynamic visual motion cues and some become responsive to stationary orientation cues. The direction of the subjective vertical is labile in the absence of gravity. This can interfere with the ability to properly interpret shading, or to recognize complex objects in different orientations.

  9. Two eyes for two purposes: in situ evidence for asymmetric vision in the cockeyed squids Histioteuthis heteropsis and Stigmatoteuthis dofleini

    PubMed Central

    Robison, Bruce H.

    2017-01-01

    The light environment of the mesopelagic realm of the ocean changes with both depth and viewer orientation, and this has probably driven the high diversity of visual adaptations found among its inhabitants. The mesopelagic ‘cockeyed’ squids of family Histioteuthidae have unusual eyes, as the left and right eyes are dimorphic in size, shape and sometimes lens pigmentation. This dimorphism may be an adaptation to the two different sources of light in the mesopelagic realm, with the large eye oriented upward to view objects silhouetted against the dim, downwelling sunlight and the small eye oriented slightly downward to view bioluminescent point sources. We used in situ video footage from remotely operated vehicles in the Monterey Submarine Canyon to observe the orientation behaviour of 152 Histioteuthis heteropsis and nine Stigmatoteuthis dofleini. We found evidence for upward orientation in the large eye and slightly downward orientation in the small eye, which was facilitated by a tail-up oblique body orientation. We also found that 65% of adult H. heteropsis (n = 69) had yellow pigmentation in the lens of the larger left eye, which may be used to break the counterillumination camouflage of their prey. Finally, we used visual modelling to show that the visual returns provided by increasing eye size are much higher for an upward-oriented eye than for a downward-oriented eye, which may explain the development of this unique visual strategy. This article is part of the themed issue ‘Vision in dim light’. PMID:28193814

  10. Attention to memory: orienting attention to sound object representations.

    PubMed

    Backer, Kristina C; Alain, Claude

    2014-01-01

    Despite a growing acceptance that attention and memory interact, and that attention can be focused on an active internal mental representation (i.e., reflective attention), there has been a paucity of work focusing on reflective attention to 'sound objects' (i.e., mental representations of actual sound sources in the environment). Further research on the dynamic interactions between auditory attention and memory, as well as its degree of neuroplasticity, is important for understanding how sound objects are represented, maintained, and accessed in the brain. This knowledge can then guide the development of training programs to help individuals with attention and memory problems. This review article focuses on attention to memory with an emphasis on behavioral and neuroimaging studies that have begun to explore the mechanisms that mediate reflective attentional orienting in vision and more recently, in audition. Reflective attention refers to situations in which attention is oriented toward internal representations rather than focused on external stimuli. We propose four general principles underlying attention to short-term memory. Furthermore, we suggest that mechanisms involved in orienting attention to visual object representations may also apply for orienting attention to sound object representations.

  11. The application of the unified modeling language in object-oriented analysis of healthcare information systems.

    PubMed

    Aggarwal, Vinod

    2002-10-01

    This paper concerns itself with the beneficial effects of the Unified Modeling Language (UML), a nonproprietary object modeling standard, in specifying, visualizing, constructing, documenting, and communicating the model of a healthcare information system from the user's perspective. The author outlines the process of object-oriented analysis (OOA) using the UML and illustrates this with healthcare examples to demonstrate the practicality of application of the UML by healthcare personnel to real-world information system problems. The UML will accelerate advanced uses of object-orientation such as reuse technology, resulting in significantly higher software productivity. The UML is also applicable in the context of a component paradigm that promises to enhance the capabilities of healthcare information systems and simplify their management and maintenance.

  12. Crowding by Invisible Flankers

    PubMed Central

    Ho, Cristy; Cheung, Sing-Hang

    2011-01-01

    Background Human object recognition degrades sharply as the target object moves from central vision into peripheral vision. In particular, one's ability to recognize a peripheral target is severely impaired by the presence of flanking objects, a phenomenon known as visual crowding. Recent studies on how visual awareness of flanker existence influences crowding had shown mixed results. More importantly, it is not known whether conscious awareness of the existence of both the target and flankers are necessary for crowding to occur. Methodology/Principal Findings Here we show that crowding persists even when people are completely unaware of the flankers, which are rendered invisible through the continuous flash suppression technique. Contrast threshold for identifying the orientation of a grating pattern was elevated in the flanked condition, even when the subjects reported that they were unaware of the perceptually suppressed flankers. Moreover, we find that orientation-specific adaptation is attenuated by flankers even when both the target and flankers are invisible. Conclusions These findings complement the suggested correlation between crowding and visual awareness. What's more, our results demonstrate that conscious awareness and attention are not prerequisite for crowding. PMID:22194919

  13. Object oriented classification of high resolution data for inventory of horticultural crops

    NASA Astrophysics Data System (ADS)

    Hebbar, R.; Ravishankar, H. M.; Trivedi, S.; Subramoniam, S. R.; Uday, R.; Dadhwal, V. K.

    2014-11-01

    High resolution satellite images are associated with large variance and thus, per-pixel classifiers often result in poor accuracy, especially in the delineation of horticultural crops. In this context, object oriented techniques are powerful and promising methods for classification. In the present study, a semi-automatic object oriented feature extraction model has been used for delineation of horticultural fruit and plantation crops using Erdas Objective Imagine. Multi-resolution data from Resourcesat LISS-IV and Cartosat-1 have been used as source data in the feature extraction model. Spectral and textural information along with NDVI were used as inputs for generation of Spectral Feature Probability (SFP) layers using sample training pixels. The SFP layers were then converted into raster objects using threshold and clump functions, resulting in a pixel probability layer. A set of raster and vector operators was employed in the subsequent steps for generating the thematic layer in vector format. This semi-automatic feature extraction model was employed for classification of major fruit and plantation crops, viz., mango, banana, citrus, coffee and coconut, grown under different agro-climatic conditions. In general, a classification accuracy of about 75-80 per cent was achieved for these crops using object based classification alone, and the same was further improved using minimal visual editing of misclassified areas. A comparison of on-screen visual interpretation with the object oriented approach showed good agreement. It was observed that old and mature plantations were classified more accurately, while young and recently planted ones (3 years or less) showed poor classification accuracy due to mixed spectral signatures, wider spacing and poor stands of plantations. The results indicated the potential use of the object oriented approach for classification of high resolution data for delineation of horticultural fruit and plantation crops. The present methodology is applicable at local levels; future development is focused on up-scaling the methodology to generate fruit and plantation crop maps at regional and national levels, which is important for the creation of a database for overall horticultural crop development.
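
    The threshold-and-clump step can be sketched generically (this is not the Erdas Objective Imagine implementation; the probability threshold and minimum clump size are illustrative):

```python
# Sketch of the threshold-and-clump step: a spectral feature probability (SFP)
# raster is thresholded, contiguous pixels are clumped into raster objects, and
# small clumps are discarded. Vectorization to polygons is omitted here.
import numpy as np
from scipy import ndimage

def threshold_and_clump(sfp, prob_threshold=0.6, min_pixels=25):
    """sfp: 2-D array of per-pixel class probabilities in [0, 1]."""
    mask = sfp >= prob_threshold                  # pixel probability layer
    labels, n = ndimage.label(mask)               # clump into raster objects
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.where(sizes >= min_pixels)[0] + 1)
    return np.where(keep, labels, 0)              # labeled object raster
```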

  14. The continuous end-state comfort effect: weighted integration of multiple biases.

    PubMed

    Herbort, Oliver; Butz, Martin V

    2012-05-01

    The grasp orientation when grasping an object is frequently aligned in anticipation of the intended rotation of the object (end-state comfort effect). We analyzed grasp orientation selection in a continuous task to determine the mechanisms underlying the end-state comfort effect. Participants had to grasp a box by a circular handle-which allowed for arbitrary grasp orientations-and then had to rotate the box by various angles. Experiments 1 and 2 revealed both that the rotation's direction considerably determined grasp orientations and that end-postures varied considerably. Experiments 3 and 4 further showed that visual stimuli and initial arm postures biased grasp orientations if the intended rotation could be easily achieved. The data show that end-state comfort but also other factors determine grasp orientation selection. A simple mechanism that integrates multiple weighted biases can account for the data.

  15. Visual agnosia and focal brain injury.

    PubMed

    Martinaud, O

    Visual agnosia encompasses all disorders of visual recognition within a selective visual modality not due to an impairment of elementary visual processing or other cognitive deficit. Based on a sequential dichotomy between the perceptual and memory systems, two different categories of visual object agnosia are usually considered: 'apperceptive agnosia' and 'associative agnosia'. Impaired visual recognition within a single category of stimuli is also reported in: (i) visual object agnosia of the ventral pathway, such as prosopagnosia (for faces), pure alexia (for words), or topographagnosia (for landmarks); (ii) visual spatial agnosia of the dorsal pathway, such as cerebral akinetopsia (for movement), or orientation agnosia (for the placement of objects in space). Focal brain injuries provide a unique opportunity to better understand regional brain function, particularly with the use of effective statistical approaches such as voxel-based lesion-symptom mapping (VLSM). The aim of the present work was twofold: (i) to review the various agnosia categories according to the traditional visual dual-pathway model; and (ii) to better assess the anatomical network underlying visual recognition through lesion-mapping studies correlating neuroanatomical and clinical outcomes. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  16. From optics to attention: visual perception in barn owls.

    PubMed

    Harmening, Wolf M; Wagner, Hermann

    2011-11-01

    Barn owls are nocturnal predators which have evolved specific sensory and morphological adaptations to a life in dim light. Here, some of the most fundamental properties of spatial vision in barn owls are reviewed. The eye with its tubular shape is rigidly integrated in the skull so that eye movements are very much restricted. The eyes are oriented frontally, allowing for a large binocular overlap. Accommodation, but not pupil dilation, is coupled between the two eyes. The retina is rod dominated and lacks a visible fovea. Retinal ganglion cells form a marked region of highest density that extends to a horizontally oriented visual streak. Behavioural visual acuity and contrast sensitivity are poor, although the optical quality of the ocular media is excellent. A low f-number allows high image quality at low light levels. Vernier acuity was found to be a hyperacute percept. Owls have global stereopsis with hyperacute stereo acuity thresholds. Neurons of the visual Wulst are sensitive to binocular disparities. Orientation based saliency was demonstrated in a visual-search experiment, and higher cognitive abilities were shown when the owl's were able to use illusory contours for object discrimination.

  17. Semi-Supervised Tensor-Based Graph Embedding Learning and Its Application to Visual Discriminant Tracking.

    PubMed

    Hu, Weiming; Gao, Jin; Xing, Junliang; Zhang, Chao; Maybank, Stephen

    2017-01-01

    An appearance model adaptable to changes in object appearance is critical in visual object tracking. In this paper, we treat an image patch as a two-order tensor which preserves the original image structure. We design two graphs for characterizing the intrinsic local geometrical structure of the tensor samples of the object and the background. Graph embedding is used to reduce the dimensions of the tensors while preserving the structure of the graphs. Then, a discriminant embedding space is constructed. We prove two propositions for finding the transformation matrices which are used to map the original tensor samples to the tensor-based graph embedding space. In order to encode more discriminant information in the embedding space, we propose a transfer-learning- based semi-supervised strategy to iteratively adjust the embedding space into which discriminative information obtained from earlier times is transferred. We apply the proposed semi-supervised tensor-based graph embedding learning algorithm to visual tracking. The new tracking algorithm captures an object's appearance characteristics during tracking and uses a particle filter to estimate the optimal object state. Experimental results on the CVPR 2013 benchmark dataset demonstrate the effectiveness of the proposed tracking algorithm.
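
    A stripped-down, vector (non-tensor) version of the discriminant graph-embedding idea is sketched below: within-class and between-class similarity graphs are built from object and background samples, and the embedding directions solve a generalized eigenproblem between the two graph Laplacians. The paper's tensor formulation, transfer-learning update and particle filter are omitted, and all parameters are assumptions.

```python
# Simplified discriminant graph embedding on vectorized patches: an intrinsic
# graph W links same-class samples (object or background), a penalty graph B
# links samples across classes, and the projection maximizes between-class
# separation relative to within-class structure.
import numpy as np
from scipy.linalg import eigh

def graph_embedding(X, y, n_dims=10, sigma=1.0):
    """X: (n_samples, n_features) patches; y: 0/1 labels (background/object)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    heat = np.exp(-d2 / (2.0 * sigma ** 2))       # heat-kernel similarities
    same = (y[:, None] == y[None, :]).astype(float)
    W = heat * same                               # intrinsic (within-class) graph
    B = heat * (1.0 - same)                       # penalty (between-class) graph
    Lw = np.diag(W.sum(1)) - W                    # graph Laplacians
    Lb = np.diag(B.sum(1)) - B
    evals, evecs = eigh(X.T @ Lb @ X,
                        X.T @ Lw @ X + 1e-6 * np.eye(X.shape[1]))
    return evecs[:, -n_dims:]                     # top generalized eigenvectors
```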

  18. Attention is required for maintenance of feature binding in visual working memory

    PubMed Central

    Heider, Maike; Husain, Masud

    2013-01-01

    Working memory and attention are intimately connected. However, understanding the relationship between the two is challenging. Currently, there is an important controversy about whether objects in working memory are maintained automatically or require resources that are also deployed for visual or auditory attention. Here we investigated the effects of loading attention resources on precision of visual working memory, specifically on correct maintenance of feature-bound objects, using a dual-task paradigm. Participants were presented with a memory array and were asked to remember either direction of motion of random dot kinematograms of different colour, or orientation of coloured bars. During the maintenance period, they performed a secondary visual or auditory task, with varying levels of load. Following a retention period, they adjusted a coloured probe to match either the motion direction or orientation of stimuli with the same colour in the memory array. This allowed us to examine the effects of an attention-demanding task performed during maintenance on precision of recall on the concurrent working memory task. Systematic increase in attention load during maintenance resulted in a significant decrease in overall working memory performance. Changes in overall performance were specifically accompanied by an increase in feature misbinding errors: erroneous reporting of nontarget motion or orientation. Thus in trials where attention resources were taxed, participants were more likely to respond with nontarget values rather than simply making random responses. Our findings suggest that resources used during attention-demanding visual or auditory tasks also contribute to maintaining feature-bound representations in visual working memory—but not necessarily other aspects of working memory. PMID:24266343

  19. Attention is required for maintenance of feature binding in visual working memory.

    PubMed

    Zokaei, Nahid; Heider, Maike; Husain, Masud

    2014-01-01

    Working memory and attention are intimately connected. However, understanding the relationship between the two is challenging. Currently, there is an important controversy about whether objects in working memory are maintained automatically or require resources that are also deployed for visual or auditory attention. Here we investigated the effects of loading attention resources on precision of visual working memory, specifically on correct maintenance of feature-bound objects, using a dual-task paradigm. Participants were presented with a memory array and were asked to remember either direction of motion of random dot kinematograms of different colour, or orientation of coloured bars. During the maintenance period, they performed a secondary visual or auditory task, with varying levels of load. Following a retention period, they adjusted a coloured probe to match either the motion direction or orientation of stimuli with the same colour in the memory array. This allowed us to examine the effects of an attention-demanding task performed during maintenance on precision of recall on the concurrent working memory task. Systematic increase in attention load during maintenance resulted in a significant decrease in overall working memory performance. Changes in overall performance were specifically accompanied by an increase in feature misbinding errors: erroneous reporting of nontarget motion or orientation. Thus in trials where attention resources were taxed, participants were more likely to respond with nontarget values rather than simply making random responses. Our findings suggest that resources used during attention-demanding visual or auditory tasks also contribute to maintaining feature-bound representations in visual working memory-but not necessarily other aspects of working memory.

  20. Differences in the effects of crowding on size perception and grip scaling in densely cluttered 3-D scenes.

    PubMed

    Chen, Juan; Sperandio, Irene; Goodale, Melvyn Alan

    2015-01-01

    Objects rarely appear in isolation in natural scenes. Although many studies have investigated how nearby objects influence perception in cluttered scenes (i.e., crowding), none has studied how nearby objects influence visually guided action. In Experiment 1, we found that participants could scale their grasp to the size of a crowded target even when they could not perceive its size, demonstrating for the first time that neurologically intact participants can use visual information that is not available to conscious report to scale their grasp to real objects in real scenes. In Experiments 2 and 3, we found that changing the eccentricity of the display and the orientation of the flankers had no effect on grasping but strongly affected perception. The differential effects of eccentricity and flanker orientation on perception and grasping show that the known differences in retinotopy between the ventral and dorsal streams are reflected in the way in which people deal with targets in cluttered scenes. © The Author(s) 2014.

  1. Perceptual grouping enhances visual plasticity.

    PubMed

    Mastropasqua, Tommaso; Turatto, Massimo

    2013-01-01

    Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (an oriented grating), while two similar task-irrelevant stimuli were presented in adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity.

  2. Brain systems for visual perspective taking and action perception.

    PubMed

    Mazzarella, Elisabetta; Ramsey, Richard; Conson, Massimiliano; Hamilton, Antonia

    2013-01-01

    Taking another person's viewpoint and making sense of their actions are key processes that guide social behavior. Previous neuroimaging investigations have largely studied these processes separately. The current study used functional magnetic resonance imaging to examine how the brain incorporates another person's viewpoint and actions into visual perspective judgments. Participants made a left-right judgment about the location of a target object from their own (egocentric) or an actor's visual perspective (altercentric). Actor location varied around a table and the actor was either reaching or not reaching for the target object. Analyses examined brain regions engaged in the egocentric and altercentric tasks, brain regions where response magnitude tracked the orientation of the actor in the scene and brain regions sensitive to the action performed by the actor. The blood oxygen level-dependent (BOLD) response in dorsomedial prefrontal cortex (dmPFC) was sensitive to actor orientation in the altercentric task, whereas the response in right inferior frontal gyrus (IFG) was sensitive to actor orientation in the egocentric task. Thus, dmPFC and right IFG may play distinct but complementary roles in visual perspective taking (VPT). Observation of a reaching actor compared to a non-reaching actor yielded activation in lateral occipitotemporal cortex, regardless of task, showing that these regions are sensitive to body posture independent of social context. By considering how an observed actor's location and action influence the neural bases of visual perspective judgments, the current study supports the view that multiple neurocognitive "routes" operate during VPT.

  3. Is the Theory of Mind deficit observed in visual paradigms in schizophrenia explained by an impaired attention toward gaze orientation?

    PubMed

    Roux, Paul; Forgeot d'Arc, Baudoin; Passerieux, Christine; Ramus, Franck

    2014-08-01

    Schizophrenia is associated with poor Theory of Mind (ToM), particularly in goal and belief attribution to others. It is also associated with abnormal gaze behaviors toward others: individuals with schizophrenia usually look less to others' face and gaze, which are crucial epistemic cues that contribute to correct mental states inferences. This study tests the hypothesis that impaired ToM in schizophrenia might be related to a deficit in visual attention toward gaze orientation. We adapted a previous non-verbal ToM paradigm consisting of animated cartoons allowing the assessment of goal and belief attribution. In the true and false belief conditions, an object was displaced while an agent was either looking at it or away, respectively. Eye movements were recorded to quantify visual attention to gaze orientation (proportion of time participants spent looking at the head of the agent while the target object changed locations). 29 patients with schizophrenia and 29 matched controls were tested. Compared to controls, patients looked significantly less at the agent's head and had lower performance in belief and goal attribution. Performance in belief and goal attribution significantly increased with the head looking percentage. When the head looking percentage was entered as a covariate, the group effect on belief and goal attribution performance was not significant anymore. Patients' deficit on this visual ToM paradigm is thus entirely explained by a decreased visual attention toward gaze. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. A Bayesian Account of Visual-Vestibular Interactions in the Rod-and-Frame Task.

    PubMed

    Alberts, Bart B G T; de Brouwer, Anouk J; Selen, Luc P J; Medendorp, W Pieter

    2016-01-01

    Panoramic visual cues, as generated by the objects in the environment, provide the brain with important information about gravity direction. To derive an optimal, i.e., Bayesian, estimate of gravity direction, the brain must combine panoramic information with gravity information detected by the vestibular system. Here, we examined the individual sensory contributions to this estimate psychometrically. We asked human subjects to judge the orientation (clockwise or counterclockwise relative to gravity) of a briefly flashed luminous rod, presented within an oriented square frame (rod-in-frame). Vestibular contributions were manipulated by tilting the subject's head, whereas visual contributions were manipulated by changing the viewing distance of the rod and frame. Results show a cyclical modulation of the frame-induced bias in perceived verticality across a 90° range of frame orientations. The magnitude of this bias decreased significantly with larger viewing distance, as if visual reliability was reduced. Biases increased significantly when the head was tilted, as if vestibular reliability was reduced. A Bayesian optimal integration model, with distinct vertical and horizontal panoramic weights, a gain factor to allow for visual reliability changes, and ocular counterroll in response to head tilt, provided a good fit to the data. We conclude that subjects flexibly weigh visual panoramic and vestibular information based on their orientation-dependent reliability, resulting in the observed verticality biases and the associated response variabilities.
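
    The reliability-weighted combination at the heart of such Bayesian models can be written schematically as follows (the paper's full model additionally includes separate vertical and horizontal panoramic weights, a visual gain factor and ocular counterroll):

```latex
% Reliability-weighted (Bayesian) combination of panoramic and vestibular
% estimates of gravity direction (schematic form only):
\[
  \hat{\theta}_{g} \;=\;
  \frac{\sigma_{\mathrm{pan}}^{-2}\,\theta_{\mathrm{pan}}
      + \sigma_{\mathrm{vest}}^{-2}\,\theta_{\mathrm{vest}}}
       {\sigma_{\mathrm{pan}}^{-2} + \sigma_{\mathrm{vest}}^{-2}},
  \qquad
  \sigma_{\hat{\theta}}^{2} \;=\;
  \left(\sigma_{\mathrm{pan}}^{-2} + \sigma_{\mathrm{vest}}^{-2}\right)^{-1}.
\]
```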

  5. Putative pyramidal neurons and interneurons in the monkey parietal cortex make different contributions to the performance of a visual grouping task.

    PubMed

    Yokoi, Isao; Komatsu, Hidehiko

    2010-09-01

    Visual grouping of discrete elements is an important function for object recognition. We recently conducted an experiment to study neural correlates of visual grouping. We recorded neuronal activities while monkeys performed a grouping detection task in which they discriminated visual patterns composed of discrete dots arranged in a cross and detected targets in which dots with the same contrast were aligned horizontally or vertically. We found that some neurons in the lateral bank of the intraparietal sulcus exhibit activity related to visual grouping. In the present study, we analyzed how different types of neurons contribute to visual grouping. We classified the recorded neurons as putative pyramidal neurons or putative interneurons, depending on the duration of their action potentials. We found that putative pyramidal neurons exhibited selectivity for the orientation of the target, and this selectivity was enhanced by attention to a particular target orientation. By contrast, putative interneurons responded more strongly to the target stimuli than to the nontargets, regardless of the orientation of the target. These results suggest that different classes of parietal neurons contribute differently to the grouping of discrete elements.

  6. Viewing Artworks: Contributions of Cognitive Control and Perceptual Facilitation to Aesthetic Experience

    ERIC Educational Resources Information Center

    Cupchik, Gerald C.; Vartanian, Oshin; Crawley, Adrian; Mikulis, David J.

    2009-01-01

    When we view visual images in everyday life, our perception is oriented toward object identification. In contrast, when viewing visual images "as artworks", we also tend to experience subjective reactions to their stylistic and structural properties. This experiment sought to determine how cognitive control and perceptual facilitation contribute…

  7. SSBRP Communication & Data System Development using the Unified Modeling Language (UML)

    NASA Technical Reports Server (NTRS)

    Windrem, May; Picinich, Lou; Givens, John J. (Technical Monitor)

    1998-01-01

    The Unified Modeling Language (UML) is the standard method for specifying, visualizing, and documenting the artifacts of an object-oriented system under development. UML is the unification of the object-oriented methods developed by Grady Booch and James Rumbaugh, and of the Use Case Model developed by Ivar Jacobson. This paper discusses the application of UML by the Communications and Data Systems (CDS) team to model the ground control and command of the Space Station Biological Research Project (SSBRP) User Operations Facility (UOF). UML is used to define the context of the system, the logical static structure, the life history of objects, and the interactions among objects.

  8. The Role of Local and Distal Landmarks in the Development of Object Location Memory

    ERIC Educational Resources Information Center

    Bullens, Jessie; Klugkist, Irene; Postma, Albert

    2011-01-01

    To locate objects in the environment, animals and humans use visual and nonvisual information. We were interested in children's ability to relocate an object on the basis of self-motion and local and distal color cues for orientation. Five- to 9-year-old children were tested on an object location memory task in which, between presentation and…

  9. RF control at SSCL — an object oriented design approach

    NASA Astrophysics Data System (ADS)

    Dohan, D. A.; Osberg, E.; Biggs, R.; Bossom, J.; Chillara, K.; Richter, R.; Wade, D.

    1994-12-01

    The Superconducting Super Collider (SSC) in Texas, the construction of which was stopped in 1994, would have represented a major challenge in accelerator research and development. This paper addresses the issues encountered in the parallel design and construction of the control systems for the RF equipment for the five accelerators comprising the SSC. An extensive analysis of the components of the RF control systems has been undertaken, based upon the Shlaer-Mellor object-oriented analysis and design (OOA/OOD) methodology. The RF subsystem components such as amplifiers, tubes, power supplies, PID loops, etc. were analyzed to produce OOA information, behavior and process models. Using these models, OOD was iteratively applied to develop a generic RF control system design. This paper describes the results of this analysis and the development of 'bridges' between the analysis objects, and the EPICS-based software and underlying VME-based hardware architectures. The application of this approach to several of the SSCL RF control systems is discussed.

  10. Interactions between visual working memory representations.

    PubMed

    Bae, Gi-Yeul; Luck, Steven J

    2017-11-01

    We investigated whether the representations of different objects are maintained independently in working memory or interact with each other. Observers were shown two sequentially presented orientations and required to reproduce each orientation after a delay. The sequential presentation minimized perceptual interactions so that we could isolate interactions between memory representations per se. We found that similar orientations were repelled from each other whereas dissimilar orientations were attracted to each other. In addition, when one of the items was given greater attentional priority by means of a cue, the representation of the high-priority item was not influenced very much by the orientation of the low-priority item, but the representation of the low-priority item was strongly influenced by the orientation of the high-priority item. This indicates that attention modulates the interactions between working memory representations. In addition, errors in the reported orientations of the two objects were positively correlated under some conditions, suggesting that representations of distinct objects may become grouped together in memory. Together, these results demonstrate that working-memory representations are not independent but instead interact with each other in a manner that depends on attentional priority.

  11. Data Acquisition Visualization Development for the MAJORANA DEMONSTRATOR

    NASA Astrophysics Data System (ADS)

    Wendlandt, Laura; Howe, Mark; Wilkerson, John; Majorana Collaboration

    2013-10-01

    The MAJORANA Project is building an array of germanium detectors with very low backgrounds in order to search for neutrinoless double-beta decay, a rare process that, if detected, would give us information about neutrinos. This decay would prove that neutrinos are their own anti-particles, would show that lepton number is not conserved, and would help determine absolute neutrino mass. An object-oriented, data acquisition software program known as ORCA (Object-oriented Real-time Control and Acquisition) will be used to collect data from the array. This paper describes the implementation of computer visualizations for detector calibrations, as well as tools for more general computer modeling in ORCA. Specifically, it details software that converts a CAD file to OpenGL, which can be used in ORCA. This paper also contains information about using a barium-133 source to take measurements from various locations around the detector, to better understand how data varies with detector crystal orientation. Work made possible by National Science Foundation Award OCI-1155614.

  12. Taking a(c)count of eye movements: Multiple mechanisms underlie fixations during enumeration.

    PubMed

    Paul, Jacob M; Reeve, Robert A; Forte, Jason D

    2017-03-01

    We habitually move our eyes when we enumerate sets of objects. It remains unclear whether saccades are directed for numerosity processing as distinct from object-oriented visual processing (e.g., object saliency, scanning heuristics). Here we investigated the extent to which enumeration eye movements are contingent upon the location of objects in an array, and whether fixation patterns vary with enumeration demands. Twenty adults enumerated random dot arrays twice: first to report the set cardinality and second to judge the perceived number of subsets. We manipulated the spatial location of dots by presenting arrays at 0°, 90°, 180°, and 270° orientations. Participants required a similar time to enumerate the set or the perceived number of subsets in the same array. Fixation patterns were systematically shifted in the direction of array rotation, and distributed across similar locations when the same array was shown on multiple occasions. We modeled fixation patterns and dot saliency using a simple filtering model and show participants judged groups of dots in close proximity (2°-2.5° visual angle) as distinct subsets. Modeling results are consistent with the suggestion that enumeration involves visual grouping mechanisms based on object saliency, and specific enumeration demands affect spatial distribution of fixations. Our findings highlight the importance of set computation, rather than object processing per se, for models of numerosity processing.
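
    A rough sketch of the kind of "simple filtering model" described above: the dot array is smoothed with a Gaussian whose width is on the order of the reported grouping distance, and contiguous regions of the smoothed map are counted as perceived subsets. The field size, threshold, and dot coordinates are illustrative assumptions rather than the authors' parameters.

      import numpy as np
      from scipy import ndimage

      def perceived_subsets(dot_xy_deg, field_deg=10.0, px_per_deg=20, sigma_deg=1.0):
          # Render dots into an image, smooth with a Gaussian filter, then
          # count connected regions above a threshold as perceived subsets.
          size = int(field_deg * px_per_deg)
          img = np.zeros((size, size))
          for x, y in dot_xy_deg:
              img[int(y * px_per_deg), int(x * px_per_deg)] = 1.0
          sal = ndimage.gaussian_filter(img, sigma=sigma_deg * px_per_deg)
          _, n_subsets = ndimage.label(sal > 0.25 * sal.max())  # illustrative threshold
          return n_subsets

      dots = [(2.0, 2.0), (2.4, 2.1), (7.0, 7.5), (7.3, 7.2), (5.0, 1.0)]
      print(perceived_subsets(dots))  # nearby dots merge into a single subset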

  13. Effects of pure and hybrid iterative reconstruction algorithms on high-resolution computed tomography in the evaluation of interstitial lung disease.

    PubMed

    Katsura, Masaki; Sato, Jiro; Akahane, Masaaki; Mise, Yoko; Sumida, Kaoru; Abe, Osamu

    2017-08-01

    To compare image quality characteristics of high-resolution computed tomography (HRCT) in the evaluation of interstitial lung disease using three different reconstruction methods: model-based iterative reconstruction (MBIR), adaptive statistical iterative reconstruction (ASIR), and filtered back projection (FBP). Eighty-nine consecutive patients with interstitial lung disease underwent standard-of-care chest CT with 64-row multi-detector CT. HRCT images were reconstructed in 0.625-mm contiguous axial slices using FBP, ASIR, and MBIR. Two radiologists independently assessed the images in a blinded manner for subjective image noise, streak artifacts, and visualization of normal and pathologic structures. Objective image noise was measured in the lung parenchyma. Spatial resolution was assessed by measuring the modulation transfer function (MTF). MBIR offered significantly lower objective image noise (22.24±4.53, P<0.01 among all pairs, Student's t-test) compared with ASIR (39.76±7.41) and FBP (51.91±9.71). MTF (spatial resolution) was increased using MBIR compared with ASIR and FBP. MBIR showed improvements in visualization of normal and pathologic structures over ASIR and FBP, while ASIR was rated quite similarly to FBP. MBIR significantly improved subjective image noise (P<0.01 among all pairs, the sign test), and streak artifacts (P<0.01 each for MBIR vs. the other 2 image data sets). MBIR provides high-quality HRCT images for interstitial lung disease by reducing image noise and streak artifacts and improving spatial resolution compared with ASIR and FBP. Copyright © 2017 Elsevier B.V. All rights reserved.
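
    A small sketch of how the objective image noise reported above is commonly measured: the standard deviation of CT numbers inside a region of interest placed in homogeneous lung parenchyma, compared across reconstructions of the same slice. The arrays below are synthetic placeholders whose noise levels merely echo the magnitudes quoted in the abstract.

      import numpy as np

      def objective_noise(hu_image, roi):
          # Standard deviation of CT numbers (HU) inside a rectangular ROI.
          r0, r1, c0, c1 = roi
          return float(np.std(hu_image[r0:r1, c0:c1]))

      rng = np.random.default_rng(0)
      slices = {"FBP": rng.normal(-850, 52, (512, 512)),   # placeholder images
                "ASIR": rng.normal(-850, 40, (512, 512)),
                "MBIR": rng.normal(-850, 22, (512, 512))}
      for name, img in slices.items():
          print(name, round(objective_noise(img, (200, 264, 200, 264)), 1), "HU")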

  14. Interpretation of the function of the striate cortex

    NASA Astrophysics Data System (ADS)

    Garner, Bernardette M.; Paplinski, Andrew P.

    2000-04-01

    Biological neural networks do not require retraining every time objects move in the visual field. Conventional computer neural networks do not share this shift-invariance. The brain compensates for movements in the head, body, eyes and objects by allowing the sensory data to be tracked across the visual field. The neurons in the striate cortex respond to objects moving across the field of vision, as has been seen in many experiments. It is proposed that the neurons in the striate cortex allow the continuous angle changes needed to compensate for changes in the orientation of the head and eyes and for the motion of objects in the field of vision. It is hypothesized that the neurons in the striate cortex form a system that allows for translation, some rotation, and scaling of objects and provides continuity of objects as they move relative to other objects. The neurons in the striate cortex respond to features which are fundamental to sight, such as orientation of lines, direction of motion, color and contrast. The neurons that respond to these features are arranged on the cortex in a way that depends on the features they are responding to and on the area of the retina from which they receive their inputs.

  15. Feature-selective Attention in Frontoparietal Cortex: Multivoxel Codes Adjust to Prioritize Task-relevant Information.

    PubMed

    Jackson, Jade; Rich, Anina N; Williams, Mark A; Woolgar, Alexandra

    2017-02-01

    Human cognition is characterized by astounding flexibility, enabling us to select appropriate information according to the objectives of our current task. A circuit of frontal and parietal brain regions, often referred to as the frontoparietal attention network or multiple-demand (MD) regions, are believed to play a fundamental role in this flexibility. There is evidence that these regions dynamically adjust their responses to selectively process information that is currently relevant for behavior, as proposed by the "adaptive coding hypothesis" [Duncan, J. An adaptive coding model of neural function in prefrontal cortex. Nature Reviews Neuroscience, 2, 820-829, 2001]. Could this provide a neural mechanism for feature-selective attention, the process by which we preferentially process one feature of a stimulus over another? We used multivariate pattern analysis of fMRI data during a perceptually challenging categorization task to investigate whether the representation of visual object features in the MD regions flexibly adjusts according to task relevance. Participants were trained to categorize visually similar novel objects along two orthogonal stimulus dimensions (length/orientation) and performed short alternating blocks in which only one of these dimensions was relevant. We found that multivoxel patterns of activation in the MD regions encoded the task-relevant distinctions more strongly than the task-irrelevant distinctions: The MD regions discriminated between stimuli of different lengths when length was relevant and between the same objects according to orientation when orientation was relevant. The data suggest a flexible neural system that adjusts its representation of visual objects to preferentially encode stimulus features that are currently relevant for behavior, providing a neural mechanism for feature-selective attention.
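
    A compact sketch of the analysis logic (cross-validated decoding of each stimulus dimension from multivoxel patterns), not the authors' pipeline; the simulated data, voxel counts, and classifier choice are illustrative assumptions.

      import numpy as np
      from sklearn.svm import LinearSVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_trials, n_voxels = 120, 200

      # Simulated multivoxel patterns for two orthogonal stimulus dimensions
      # (length, orientation); only "length" carries signal here, standing in
      # for the dimension that is currently task relevant.
      length_labels = rng.integers(0, 2, n_trials)
      orient_labels = rng.integers(0, 2, n_trials)
      patterns = rng.normal(size=(n_trials, n_voxels))
      patterns[:, :20] += 0.5 * length_labels[:, None]

      for name, labels in [("length (relevant)", length_labels),
                           ("orientation (irrelevant)", orient_labels)]:
          acc = cross_val_score(LinearSVC(max_iter=5000), patterns, labels, cv=5).mean()
          print(f"{name}: decoding accuracy = {acc:.2f}")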

  16. Underwater binocular imaging of aerial objects versus the position of eyes relative to the flat water surface.

    PubMed

    Barta, András; Horváth, Gábor

    2003-12-01

    The apparent position, size, and shape of aerial objects viewed binocularly from water change as a result of the refraction of light at the water surface. Earlier studies of the refraction-distorted structure of the aerial binocular visual field of underwater observers were restricted to either vertically or horizontally oriented eyes. Here we calculate the position of the binocular image point of an aerial object point viewed by two arbitrarily positioned underwater eyes when the water surface is flat. Assuming that binocular image fusion is performed by appropriate vergent eye movements to bring the object's image onto the foveae, the structure of the aerial binocular visual field is computed and visualized as a function of the relative positions of the eyes. We also analyze two erroneous representations of the underwater imaging of aerial objects that have occurred in the literature. It is demonstrated that the structure of the aerial binocular visual field of underwater observers distorted by refraction is more complex than has been thought previously.
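
    The single-eye core of these calculations reduces to Snell's law at a flat air-water interface; a minimal sketch is given below (the paper's full binocular treatment with two arbitrarily positioned eyes is considerably more involved). The distances and refractive index are illustrative.

      import numpy as np
      from scipy.optimize import brentq

      N_WATER = 1.33  # refractive index of water relative to air

      def refraction_point(eye_depth, obj_height, obj_dist):
          # Horizontal offset (from the point above the eye) at which the ray
          # from an aerial object bends at the flat surface before reaching
          # the underwater eye: sin(theta_air) = n * sin(theta_water).
          def snell_mismatch(r):
              theta_w = np.arctan2(r, eye_depth)              # in water, from vertical
              theta_a = np.arctan2(obj_dist - r, obj_height)  # in air, from vertical
              return np.sin(theta_a) - N_WATER * np.sin(theta_w)
          return brentq(snell_mismatch, 0.0, obj_dist)

      r = refraction_point(eye_depth=1.0, obj_height=2.0, obj_dist=3.0)
      apparent_elevation = 90.0 - np.degrees(np.arctan2(r, 1.0))
      print(f"refraction at {r:.2f} m, apparent elevation {apparent_elevation:.1f} deg")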

  17. Visual stimuli induced by self-motion and object-motion modify odour-guided flight of male moths (Manduca sexta L.).

    PubMed

    Verspui, Remko; Gray, John R

    2009-10-01

    Animals rely on multimodal sensory integration for proper orientation within their environment. For example, odour-guided behaviours often require appropriate integration of concurrent visual cues. To gain a further understanding of mechanisms underlying sensory integration in odour-guided behaviour, our study examined the effects of visual stimuli induced by self-motion and object-motion on odour-guided flight in male M. sexta. By placing stationary objects (pillars) on either side of a female pheromone plume, moths produced self-induced visual motion during odour-guided flight. These flights showed a reduction in both ground and flight speeds and inter-turn interval when compared with flight tracks without stationary objects. Presentation of an approaching 20 cm disc, to simulate object-motion, resulted in interrupted odour-guided flight and changes in flight direction away from the pheromone source. Modifications of odour-guided flight behaviour in the presence of stationary objects suggest that visual information, in conjunction with olfactory cues, can be used to control the rate of counter-turning. We suggest that the behavioural responses to visual stimuli induced by object-motion indicate the presence of a neural circuit that relays visual information to initiate escape responses. These behavioural responses also suggest the presence of a sensory conflict requiring a trade-off between olfactory and visually driven behaviours. The mechanisms underlying olfactory and visual integration are discussed in the context of these behavioural responses.

  18. Effects of Using Alice and Scratch in an Introductory Programming Course for Corrective Instruction

    ERIC Educational Resources Information Center

    Chang, Chih-Kai

    2014-01-01

    Scratch, a visual programming language, was used in many studies in computer science education. Most of them reported positive results by integrating Scratch into K-12 computer courses. However, the object-oriented concept, one of the important computational thinking skills, is not represented well in Scratch. Alice, another visual programming…

  19. From Crib to Kindergarten: A Continuum of Needs of the Visually Impaired Preschooler.

    ERIC Educational Resources Information Center

    Harrell, Lois

    The paper focuses on the needs of visually impaired preschoolers in various developmental areas. The importance of attachment to a significant other for establishing trust is outlined and the fact that body awareness, object permanence, range of motion, spatial awareness and orientation must be logically and actively introduced is cited. Aspects…

  20. Programming Education with a Blocks-Based Visual Language for Mobile Application Development

    ERIC Educational Resources Information Center

    Mihci, Can; Ozdener, Nesrin

    2014-01-01

    The aim of this study is to assess the impact upon academic success of the use of a reference block-based visual programming tool, namely the MIT App Inventor for Android, as an educational instrument for teaching object-oriented GUI-application development (CS2) concepts to students; who have previously completed a fundamental programming course…

  1. An iterative reduced field-of-view reconstruction for periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI.

    PubMed

    Lin, Jyh-Miin; Patterson, Andrew J; Chang, Hing-Chiu; Gillard, Jonathan H; Graves, Martin J

    2015-10-01

    To propose a new reduced field-of-view (rFOV) strategy for iterative reconstructions in a clinical environment. Iterative reconstructions can incorporate regularization terms to improve the image quality of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI. However, the large amount of computation required for full-FOV iterative reconstructions has posed a huge computational challenge for clinical usage. By subdividing the entire problem into smaller rFOVs, the iterative reconstruction can be accelerated on a desktop with a single graphics processing unit (GPU). This rFOV strategy divides the iterative reconstruction into blocks, based on the block-diagonal dominant structure. A near real-time reconstruction system was developed for the clinical MR unit, and parallel computing was implemented using the object-oriented model. In addition, the Toeplitz method was implemented on the GPU to reduce the time required for full interpolation. Using the data acquired from the PROPELLER MRI, the reconstructed images were then saved in the Digital Imaging and Communications in Medicine (DICOM) format. The proposed rFOV reconstruction reduced the gridding time by 97%, as the total iteration time was 3 s even with multiple processes running. A phantom study showed that the structural similarity index for rFOV reconstruction was statistically superior to conventional density compensation (p < 0.001). An in vivo study validated the increased signal-to-noise ratio, which was over four times higher than with density compensation. The image sharpness index was improved by the regularized reconstruction. The rFOV strategy permits near real-time iterative reconstruction to improve the image quality of PROPELLER images. Substantial improvements in image quality metrics were validated in the experiments. The concept of rFOV reconstruction may potentially be applied to other kinds of iterative reconstructions to shorten reconstruction duration.
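
    A generic sketch of the kind of regularized iterative reconstruction being subdivided here, posed as Tikhonov-regularized least squares solved by gradient descent with the encoding operator left abstract; it is not the authors' GPU/Toeplitz implementation, and the toy matrix stands in for the PROPELLER forward model.

      import numpy as np

      def iterative_recon(A, AH, y, lam=0.01, n_iter=100, step=1.0):
          # Minimize ||A x - y||^2 + lam * ||x||^2 by gradient descent.
          # A / AH are the forward operator and its adjoint as callables
          # (for PROPELLER these would be a NUFFT and its adjoint).
          x = AH(y)                      # adjoint ("gridding-like") start image
          for _ in range(n_iter):
              grad = AH(A(x) - y) + lam * x
              x = x - step * grad
          return x

      # Toy example; the rFOV idea amounts to running such a solver
      # independently on sub-blocks of the full problem.
      rng = np.random.default_rng(1)
      M = rng.normal(size=(64, 32))
      x_true = rng.normal(size=32)
      y = M @ x_true + 0.05 * rng.normal(size=64)
      x_hat = iterative_recon(lambda v: M @ v, lambda v: M.T @ v, y,
                              step=1.0 / np.linalg.norm(M, 2) ** 2)
      print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))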

  2. Virtual Environments for People Who Are Visually Impaired Integrated into an Orientation and Mobility Program

    ERIC Educational Resources Information Center

    Lahav, Orly; Schloerb, David W.; Srinivasan, Mandayam A.

    2015-01-01

    Introduction: The BlindAid, a virtual system developed for orientation and mobility (O&M) training of people who are blind or have low vision, allows interaction with different virtual components (structures and objects) via auditory and haptic feedback. This research examined if and how the BlindAid that was integrated within an O&M…

  3. Perceiving environmental structure from optical motion

    NASA Technical Reports Server (NTRS)

    Lappin, Joseph S.

    1991-01-01

    Generally speaking, one of the most important sources of optical information about environmental structure is known to be the deforming optical patterns produced by the movements of the observer (pilot) or environmental objects. As an observer moves through a rigid environment, the projected optical patterns of environmental objects are systematically transformed according to their orientations and positions in 3D space relative to those of the observer. The detailed characteristics of these deforming optical patterns carry information about the 3D structure of the objects and about their locations and orientations relative to those of the observer. The specific geometrical properties of moving images that may constitute visually detected information about the shapes and locations of environmental objects is examined.

  4. Perceptual Grouping Enhances Visual Plasticity

    PubMed Central

    Mastropasqua, Tommaso; Turatto, Massimo

    2013-01-01

    Visual perceptual learning, a manifestation of neural plasticity, refers to improvements in performance on a visual task achieved by training. Attention is known to play an important role in perceptual learning, given that the observer's discriminative ability improves only for those stimulus features that are attended. However, the distribution of attention can be severely constrained by perceptual grouping, a process whereby the visual system organizes the initial retinal input into candidate objects. Taken together, these two pieces of evidence suggest the interesting possibility that perceptual grouping might also affect perceptual learning, either directly or via attentional mechanisms. To address this issue, we conducted two experiments. During the training phase, participants attended to the contrast of the task-relevant stimulus (oriented grating), while two similar task-irrelevant stimuli were presented in adjacent positions. One of the two flanking stimuli was perceptually grouped with the attended stimulus as a consequence of its similar orientation (Experiment 1) or because it was part of the same perceptual object (Experiment 2). A test phase followed the training phase at each location. Compared to the task-irrelevant no-grouping stimulus, orientation discrimination improved at the attended location. Critically, a perceptual learning effect equivalent to the one observed for the attended location also emerged for the task-irrelevant grouping stimulus, indicating that perceptual grouping induced a transfer of learning to the stimulus (or feature) being perceptually grouped with the task-relevant one. Our findings indicate that no voluntary effort to direct attention to the grouping stimulus or feature is necessary to enhance visual plasticity. PMID:23301100

  5. Role of orientation reference selection in motion sickness

    NASA Technical Reports Server (NTRS)

    Peterka, Robert J.; Black, F. Owen

    1988-01-01

    Previous experiments with moving platform posturography have shown that different people have varying abilities to resolve conflicts among vestibular, visual, and proprioceptive sensory signals used to control upright posture. In particular, there is one class of subjects with a vestibular disorder known as benign paroxysmal positional vertigo (BPPV) who often are particularly sensitive to inaccurate visual information. That is, they will use visual sensory information for the control of their posture even when that visual information is inaccurate and is in conflict with accurate proprioceptive and vestibular sensory signals. BPPV has been associated with disorders of both posterior semicircular canal function and possibly otolith function. The present proposal hopes to take advantage of the similarities between the space motion sickness problem and the sensory orientation reference selection problems associated with the BPPV syndrome. These similarities include both etiology related to abnormal vertical canal-otolith function, and motion sickness initiating events provoked by pitch and roll head movements. The objectives of this proposal are to explore and quantify the orientation reference selection abilities of subjects and the relation of this selection to motion sickness in humans.

  6. A Bayesian Account of Visual–Vestibular Interactions in the Rod-and-Frame Task

    PubMed Central

    de Brouwer, Anouk J.; Medendorp, W. Pieter

    2016-01-01

    Panoramic visual cues, as generated by the objects in the environment, provide the brain with important information about gravity direction. To derive an optimal, i.e., Bayesian, estimate of gravity direction, the brain must combine panoramic information with gravity information detected by the vestibular system. Here, we examined the individual sensory contributions to this estimate psychometrically. We asked human subjects to judge the orientation (clockwise or counterclockwise relative to gravity) of a briefly flashed luminous rod, presented within an oriented square frame (rod-in-frame). Vestibular contributions were manipulated by tilting the subject’s head, whereas visual contributions were manipulated by changing the viewing distance of the rod and frame. Results show a cyclical modulation of the frame-induced bias in perceived verticality across a 90° range of frame orientations. The magnitude of this bias decreased significantly with larger viewing distance, as if visual reliability was reduced. Biases increased significantly when the head was tilted, as if vestibular reliability was reduced. A Bayesian optimal integration model, with distinct vertical and horizontal panoramic weights, a gain factor to allow for visual reliability changes, and ocular counterroll in response to head tilt, provided a good fit to the data. We conclude that subjects flexibly weigh visual panoramic and vestibular information based on their orientation-dependent reliability, resulting in the observed verticality biases and the associated response variabilities. PMID:27844055

  7. Learning to Be (In)Variant: Combining Prior Knowledge and Experience to Infer Orientation Invariance in Object Recognition

    ERIC Educational Resources Information Center

    Austerweil, Joseph L.; Griffiths, Thomas L.; Palmer, Stephen E.

    2017-01-01

    How does the visual system recognize images of a novel object after a single observation despite possible variations in the viewpoint of that object relative to the observer? One possibility is comparing the image with a prototype for invariance over a relevant transformation set (e.g., translations and dilations). However, invariance over…

  8. Representations of Shape in Object Recognition and Long-Term Visual Memory

    DTIC Science & Technology

    1993-02-11

    The report's status notes address viewpoint-dependent features in object representation (Tarr and colleagues) and whether object-based, orientation-independent representations are sufficient for "basic-level" categorization (Biederman, 1987; Corballis, 1988). Cited reference: Biederman, I. (1987). Recognition-by-components: A theory of human image understanding. Psychological Review, 94, 115-147.

  9. Responses to Orientation Discontinuities in V1 and V2: Physiological Dissociations and Functional Implications

    PubMed Central

    Purpura, Keith P.; Victor, Jonathan D.

    2014-01-01

    Segmenting the visual image into objects is a crucial stage of visual processing. Object boundaries are typically associated with differences in luminance, but discontinuities in texture also play an important role. We showed previously that a subpopulation of neurons in V2 in anesthetized macaques responds to orientation discontinuities parallel to their receptive field orientation. Such single-cell responses could be a neurophysiological correlate of texture boundary detection. Neurons in V1, on the other hand, are known to have contextual response modulations such as iso-orientation surround suppression, which also produce responses to orientation discontinuities. Here, we use pseudorandom multiregion grating stimuli of two frame durations (20 and 40 ms) to probe and compare texture boundary responses in V1 and V2 in anesthetized macaque monkeys. In V1, responses to texture boundaries were observed for only the 40 ms frame duration and were independent of the orientation of the texture boundary. However, in transient V2 neurons, responses to such texture boundaries were robust for both frame durations and were stronger for boundaries parallel to the neuron's preferred orientation. The dependence of these processes on stimulus duration and orientation indicates that responses to texture boundaries in V2 arise independently of contextual modulations in V1. In addition, because the responses in transient V2 neurons are sensitive to the orientation of the texture boundary but those of V1 neurons are not, we suggest that V2 responses are the correlate of texture boundary detection, whereas contextual modulation in V1 serves other purposes, possibly related to orientation “pop-out.” PMID:24599456

  10. User-oriented evaluation of a medical image retrieval system for radiologists.

    PubMed

    Markonis, Dimitrios; Holzer, Markus; Baroz, Frederic; De Castaneda, Rafael Luis Ruiz; Boyer, Célia; Langs, Georg; Müller, Henning

    2015-10-01

    This article reports the user-oriented evaluation of a text- and content-based medical image retrieval system. User tests with radiologists using a search system for images in the medical literature are presented. The goal of the tests is to assess the usability of the system and to identify system and interface aspects that need improvement or would be useful additions. Another objective is to investigate the system's added value to radiology information retrieval. The study provides an insight into required specifications and potential shortcomings of medical image retrieval systems through a concrete methodology for conducting user tests. User tests with a working image retrieval system of images from the biomedical literature were performed in an iterative manner, where each iteration had the participants perform radiology information-seeking tasks and was followed by refinement of the system as well as of the user study design itself. During these tasks the interaction of the users with the system was monitored, usability aspects were measured, retrieval success rates were recorded, and feedback was collected through survey forms. In total, 16 radiologists participated in the user tests. The success rates in finding relevant information were on average 87% and 78% for image and case retrieval tasks, respectively. The average time for a successful search was below 3 min in both cases. Users quickly felt comfortable with the novel techniques and tools (after 5 to 15 min), such as content-based image retrieval and relevance feedback. User satisfaction measures show a very positive attitude toward the system's functionalities, while the user feedback helped identify the system's weak points. The participants proposed several potentially useful new functionalities, such as filtering by imaging modality and searching for articles using image examples. The iterative character of the evaluation helped to obtain diverse and detailed feedback on all system aspects. Radiologists became familiar with the functionalities quickly but had several comments on desired functionalities. The analysis of the results can potentially assist system refinement for future medical information retrieval systems. Moreover, the methodology presented, as well as the discussion of the limitations and challenges of such studies, can be useful for user-oriented medical image retrieval evaluation, as user-oriented evaluation of interactive systems is still only rarely performed. Such interactive evaluations can be limited in effort if done iteratively and can give many insights for developing better systems. Copyright © 2015. Published by Elsevier Ireland Ltd.

  11. Neural Architecture for Feature Binding in Visual Working Memory.

    PubMed

    Schneegans, Sebastian; Bays, Paul M

    2017-04-05

    Binding refers to the operation that groups different features together into objects. We propose a neural architecture for feature binding in visual working memory that employs populations of neurons with conjunction responses. We tested this model using cued recall tasks, in which subjects had to memorize object arrays composed of simple visual features (color, orientation, and location). After a brief delay, one feature of one item was given as a cue, and the observer had to report, on a continuous scale, one or two other features of the cued item. Binding failure in this task is associated with swap errors, in which observers report an item other than the one indicated by the cue. We observed that the probability of swapping two items strongly correlated with the items' similarity in the cue feature dimension, and found a strong correlation between swap errors occurring in spatial and nonspatial report. The neural model explains both swap errors and response variability as results of decoding noisy neural activity, and can account for the behavioral results in quantitative detail. We then used the model to compare alternative mechanisms for binding nonspatial features. We found the behavioral results fully consistent with a model in which nonspatial features are bound exclusively via their shared location, with no indication of direct binding between color and orientation. These results provide evidence for a special role of location in feature binding, and the model explains how this special role could be realized in the neural system. SIGNIFICANCE STATEMENT The problem of feature binding is of central importance in understanding the mechanisms of working memory. How do we remember not only that we saw a red and a round object, but that these features belong together to a single object rather than to different objects in our environment? Here we present evidence for a neural mechanism for feature binding in working memory, based on encoding of visual information by neurons that respond to the conjunction of features. We find clear evidence that nonspatial features are bound via space: we memorize directly where a color or an orientation appeared, but we memorize which color belonged with which orientation only indirectly by virtue of their shared location. Copyright © 2017 Schneegans and Bays.

  12. Early multisensory interactions affect the competition among multiple visual objects.

    PubMed

    Van der Burg, Erik; Talsma, Durk; Olivers, Christian N L; Hickey, Clayton; Theeuwes, Jan

    2011-04-01

    In dynamic cluttered environments, audition and vision may benefit from each other in determining what deserves further attention and what does not. We investigated the underlying neural mechanisms responsible for attentional guidance by audiovisual stimuli in such an environment. Event-related potentials (ERPs) were measured during visual search through dynamic displays consisting of line elements that randomly changed orientation. Search accuracy improved when a target orientation change was synchronized with an auditory signal as compared to when the auditory signal was absent or synchronized with a distractor orientation change. The ERP data show that behavioral benefits were related to an early multisensory interaction over left parieto-occipital cortex (50-60 ms post-stimulus onset), which was followed by an early positive modulation (80-100 ms) over occipital and temporal areas contralateral to the audiovisual event, an enhanced N2pc (210-250 ms), and a contralateral negative slow wave (CNSW). The early multisensory interaction was correlated with behavioral search benefits, indicating that participants with a strong multisensory interaction benefited the most from the synchronized auditory signal. We suggest that an auditory signal enhances the neural response to a synchronized visual event, which increases the chances of selection in a multiple object environment. Copyright © 2010 Elsevier Inc. All rights reserved.

  13. Peripersonal space representation develops independently from visual experience.

    PubMed

    Ricciardi, Emiliano; Menicagli, Dario; Leo, Andrea; Costantini, Marcello; Pietrini, Pietro; Sinigaglia, Corrado

    2017-12-15

    Our daily-life actions are typically driven by vision. When acting upon an object, we need to represent its visual features (e.g. shape, orientation, etc.) and to map them into our own peripersonal space. But what happens with people who have never had any visual experience? How can they map object features into their own peripersonal space? Do they do it differently from sighted agents? To tackle these questions, we carried out a series of behavioral experiments in sighted and congenitally blind subjects. We took advantage of a spatial alignment effect paradigm, which typically refers to a decrease of reaction times when subjects perform an action (e.g., a reach-to-grasp pantomime) congruent with that afforded by a presented object. To systematically examine peripersonal space mapping, we presented visual or auditory affording objects both within and outside subjects' reach. The results showed that sighted and congenitally blind subjects did not differ in mapping objects into their own peripersonal space. Strikingly, this mapping occurred also when objects were presented outside subjects' reach, but within the peripersonal space of another agent. This suggests that (the lack of) visual experience does not significantly affect the development of both one's own and others' peripersonal space representation.

  14. Adaptive management: Chapter 1

    USGS Publications Warehouse

    Allen, Craig R.; Garmestani, Ahjond S.; Allen, Craig R.; Garmestani, Ahjond S.

    2015-01-01

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.

  15. Adaptive management

    USGS Publications Warehouse

    Allen, Craig R.; Garmestani, Ahjond S.

    2015-01-01

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.

  16. Object-oriented design of medical imaging software.

    PubMed

    Ligier, Y; Ratib, O; Logean, M; Girard, C; Perrier, R; Scherrer, J R

    1994-01-01

    A special software package for interactive display and manipulation of medical images was developed at the University Hospital of Geneva, as part of a hospital wide Picture Archiving and Communication System (PACS). This software package, called Osiris, was especially designed to be easily usable and adaptable to the needs of noncomputer-oriented physicians. The Osiris software has been developed to allow the visualization of medical images obtained from any imaging modality. It provides generic manipulation tools, processing tools, and analysis tools more specific to clinical applications. This software, based on an object-oriented paradigm, is portable and extensible. Osiris is available on two different operating systems: the Unix X-11/OSF-Motif based workstations, and the Macintosh family.

  17. Can invertebrates see the e-vector of polarization as a separate modality of light?

    PubMed

    Labhart, Thomas

    2016-12-15

    The visual world is rich in linearly polarized light stimuli, which are hidden from the human eye. But many invertebrate species make use of polarized light as a source of valuable visual information. However, exploiting light polarization does not necessarily imply that the electric (e)-vector orientation of polarized light can be perceived as a separate modality of light. In this Review, I address the question of whether invertebrates can detect specific e-vector orientations in a manner similar to that of humans perceiving spectral stimuli as specific hues. To analyze e-vector orientation, the signals of at least three polarization-sensitive sensors (analyzer channels) with different e-vector tuning axes must be compared. The object-based, imaging polarization vision systems of cephalopods and crustaceans, as well as the water-surface detectors of flying backswimmers, use just two analyzer channels. Although this excludes the perception of specific e-vector orientations, a two-channel system does provide a coarse, categoric analysis of polarized light stimuli, comparable to the limited color sense of dichromatic, 'color-blind' humans. The celestial compass of insects employs three or more analyzer channels. However, that compass is multimodal, i.e. e-vector information merges with directional information from other celestial cues, such as the solar azimuth and the spectral gradient in the sky, masking e-vector information. It seems that invertebrate organisms take no interest in the polarization details of visual stimuli, but polarization vision grants more practical benefits, such as improved object detection and visual communication for cephalopods and crustaceans, compass readings to traveling insects, or the alert 'water below!' to water-seeking bugs. © 2016. Published by The Company of Biologists Ltd.

  18. Can invertebrates see the e-vector of polarization as a separate modality of light?

    PubMed Central

    2016-01-01

    The visual world is rich in linearly polarized light stimuli, which are hidden from the human eye. But many invertebrate species make use of polarized light as a source of valuable visual information. However, exploiting light polarization does not necessarily imply that the electric (e)-vector orientation of polarized light can be perceived as a separate modality of light. In this Review, I address the question of whether invertebrates can detect specific e-vector orientations in a manner similar to that of humans perceiving spectral stimuli as specific hues. To analyze e-vector orientation, the signals of at least three polarization-sensitive sensors (analyzer channels) with different e-vector tuning axes must be compared. The object-based, imaging polarization vision systems of cephalopods and crustaceans, as well as the water-surface detectors of flying backswimmers, use just two analyzer channels. Although this excludes the perception of specific e-vector orientations, a two-channel system does provide a coarse, categoric analysis of polarized light stimuli, comparable to the limited color sense of dichromatic, ‘color-blind’ humans. The celestial compass of insects employs three or more analyzer channels. However, that compass is multimodal, i.e. e-vector information merges with directional information from other celestial cues, such as the solar azimuth and the spectral gradient in the sky, masking e-vector information. It seems that invertebrate organisms take no interest in the polarization details of visual stimuli, but polarization vision grants more practical benefits, such as improved object detection and visual communication for cephalopods and crustaceans, compass readings to traveling insects, or the alert ‘water below!’ to water-seeking bugs. PMID:27974532

  19. Visual object recognition and tracking

    NASA Technical Reports Server (NTRS)

    Chang, Chu-Yin (Inventor); English, James D. (Inventor); Tardella, Neil M. (Inventor)

    2010-01-01

    This invention describes a method for identifying and tracking an object from two-dimensional data pictorially representing said object by an object-tracking system through processing said two-dimensional data using at least one tracker-identifier belonging to the object-tracking system for providing an output signal containing: a) a type of the object, and/or b) a position or an orientation of the object in three-dimensions, and/or c) an articulation or a shape change of said object in said three dimensions.

  20. The role of lateral occipitotemporal junction and area MT/V5 in the visual analysis of upper-limb postures.

    PubMed

    Peigneux, P; Salmon, E; van der Linden, M; Garraux, G; Aerts, J; Delfiore, G; Degueldre, C; Luxen, A; Orban, G; Franck, G

    2000-06-01

    Humans, like numerous other species, strongly rely on the observation of gestures of other individuals in their everyday life. It is hypothesized that the visual processing of human gestures is sustained by a specific functional architecture, even at an early prelexical cognitive stage, different from that required for the processing of other visual entities. In the present PET study, the neural basis of visual gesture analysis was investigated with functional neuroimaging of brain activity during naming and orientation tasks performed on pictures of either static gestures (upper-limb postures) or tridimensional objects. To prevent automatic object-related cerebral activation during the visual processing of postures, only intransitive postures were selected, i. e., symbolic or meaningless postures which do not imply the handling of objects. Conversely, only intransitive objects which cannot be handled were selected to prevent gesture-related activation during their visual processing. Results clearly demonstrate a significant functional segregation between the processing of static intransitive postures and the processing of intransitive tridimensional objects. Visual processing of objects elicited mainly occipital and fusiform gyrus activity, while visual processing of postures strongly activated the lateral occipitotemporal junction, encroaching upon area MT/V5, involved in motion analysis. These findings suggest that the lateral occipitotemporal junction, working in association with area MT/V5, plays a prominent role in the high-level perceptual analysis of gesture, namely the construction of its visual representation, available for subsequent recognition or imitation. Copyright 2000 Academic Press.

  1. Internal model of gravity influences configural body processing.

    PubMed

    Barra, Julien; Senot, Patrice; Auclair, Laurent

    2017-01-01

    Human bodies are processed by a configural processing mechanism. Evidence supporting this claim is the body inversion effect, in which inversion impairs recognition of bodies more than other objects. Biomechanical configuration, as well as both visual and embodied expertise, has been demonstrated to play an important role in this effect. Nevertheless, an important factor in the body inversion effect may also be gravity orientation, since gravity is one of the most fundamental constraints on our biology, behavior, and perception on Earth. The visual presentation of an inverted body in a typical body inversion paradigm turns the observed body upside down but also inverts the implicit direction of visual gravity in the scene. The orientation of visual gravity is then in conflict with the direction of actual gravity and may influence configural processing. To test this hypothesis, we dissociated the orientations of the body and of visual gravity by manipulating body posture. In a pretest we showed that it was possible to turn an avatar upside down (inversion relative to retinal coordinates) without inverting the orientation of visual gravity when the avatar stands on his/her hands. We compared the inversion effect in typical conditions (with gravity conflict when the avatar is upside down) to the inversion effect in conditions with no conflict between visual and physical gravity. The results of our experiment revealed that the inversion effect, as measured by both error rate and reaction time, was strongly reduced when there was no gravity conflict. Our results suggest that when an observed body is upside down (inversion relative to participants' retinal coordinates) but the orientation of visual gravity is not, configural processing of bodies might still be possible. In this paper, we discuss the implications of an internal model of gravity in the configural processing of observed bodies. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Novel Fourier-domain constraint for fast phase retrieval in coherent diffraction imaging.

    PubMed

    Latychevskaia, Tatiana; Longchamp, Jean-Nicolas; Fink, Hans-Werner

    2011-09-26

    Coherent diffraction imaging (CDI) for visualizing objects at atomic resolution has been recognized as a promising tool for imaging single molecules. Drawbacks of CDI are associated with the difficulty of numerical phase retrieval from experimental diffraction patterns, a fact that has stimulated the search for better numerical methods and alternative experimental techniques. Common phase retrieval methods are based on iterative procedures that propagate the complex-valued wave between the object and detector planes. Constraints are applied in both the object and the detector plane. While the detector-plane constraint employed in most phase retrieval methods requires the amplitude of the complex wave to be equal to the square root of the measured intensity, we propose a novel Fourier-domain constraint based on an analogy to holography. Our method yields a low-resolution reconstruction already in the first step, followed by a high-resolution reconstruction after further steps. In comparison to conventional schemes, this Fourier-domain constraint results in fast and reliable convergence of the iterative reconstruction process. © 2011 Optical Society of America
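
    For context, a bare-bones version of the conventional iterative scheme that the paper improves upon: an error-reduction loop alternating between the detector-plane modulus constraint and an object-plane support constraint. The proposed holography-inspired Fourier-domain constraint would replace the modulus-projection step; the support, object, and iteration count below are illustrative.

      import numpy as np

      def error_reduction(measured_intensity, support, n_iter=200, seed=0):
          # Conventional phase retrieval: alternate between the detector-plane
          # modulus constraint and an object-plane support/positivity constraint.
          rng = np.random.default_rng(seed)
          amplitude = np.sqrt(measured_intensity)
          field = amplitude * np.exp(2j * np.pi * rng.random(amplitude.shape))
          for _ in range(n_iter):
              obj = np.fft.ifft2(field)
              obj = np.where(support, obj.real.clip(min=0), 0.0)
              field = np.fft.fft2(obj)
              field = amplitude * np.exp(1j * np.angle(field))
          return obj

      support = np.zeros((64, 64), dtype=bool)
      support[24:40, 24:40] = True
      obj_true = support * np.random.default_rng(1).random((64, 64))
      recon = error_reduction(np.abs(np.fft.fft2(obj_true)) ** 2, support)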

  3. The Role of Attention in the Maintenance of Feature Bindings in Visual Short-term Memory

    ERIC Educational Resources Information Center

    Johnson, Jeffrey S.; Hollingworth, Andrew; Luck, Steven J.

    2008-01-01

    This study examined the role of attention in maintaining feature bindings in visual short-term memory. In a change-detection paradigm, participants attempted to detect changes in the colors and orientations of multiple objects; the changes consisted of new feature values in a feature-memory condition and changes in how existing feature values were…

  4. Visual saliency in MPEG-4 AVC video stream

    NASA Astrophysics Data System (ADS)

    Ammar, M.; Mitrea, M.; Hasnaoui, M.; Le Callet, P.

    2015-03-01

    Visual saliency maps have already proved their efficiency in a large variety of image/video communication applications, ranging from selective compression and channel coding to watermarking. Such saliency maps are generally based on different visual characteristics (such as color, intensity, orientation, and motion) computed from the pixel representation of the visual content. This paper summarizes and extends our previous work devoted to the definition of a saliency map extracted solely from the MPEG-4 AVC stream syntax elements. The MPEG-4 AVC saliency map thus defined is a fusion of a static and a dynamic map. The static saliency map is, in turn, a combination of intensity, color, and orientation feature maps. Beyond the particular way in which these elementary maps are computed, the fusion techniques used to combine them play a critical role in the final result and are the object of the present study. A total of 48 fusion formulas (6 for combining static features and, for each of them, 8 for combining static with dynamic features) are investigated. The performance of the resulting maps is evaluated on a public database organized at IRCCyN by computing two objective metrics: the Kullback-Leibler divergence and the area under the curve.
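
    As a rough illustration of the two ingredients discussed above, the sketch below shows one plausible fusion rule (a weighted mean of the static feature maps followed by a convex combination with the motion map) and the Kullback-Leibler metric used for evaluation; it is not one of the paper's 48 formulas, and all weights and map sizes are illustrative.

      import numpy as np

      def fuse_maps(intensity, color, orientation, motion,
                    w_static=(1/3, 1/3, 1/3), w_dyn=0.5):
          # Weighted mean of the static feature maps, then a convex
          # combination of the static map with the dynamic (motion) map.
          static = (w_static[0] * intensity + w_static[1] * color
                    + w_static[2] * orientation)
          return (1.0 - w_dyn) * static + w_dyn * motion

      def kl_divergence(saliency, fixation_density, eps=1e-12):
          # KL divergence between the ground-truth fixation density and the
          # predicted saliency map, both normalized to probability maps.
          p = fixation_density / (fixation_density.sum() + eps)
          q = saliency / (saliency.sum() + eps)
          return float(np.sum(p * np.log((p + eps) / (q + eps))))

      rng = np.random.default_rng(0)
      maps = [rng.random((36, 64)) for _ in range(4)]
      print(kl_divergence(fuse_maps(*maps), rng.random((36, 64))))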

  5. Application of Least-Squares Adjustment Technique to Geometric Camera Calibration and Photogrammetric Flow Visualization

    NASA Technical Reports Server (NTRS)

    Chen, Fang-Jenq

    1997-01-01

    Flow visualization produces data in the form of two-dimensional images. If the optical components of a camera system are perfect, the transformation equations between the two-dimensional image and the three-dimensional object space are linear and easy to solve. However, real camera lenses introduce nonlinear distortions that affect the accuracy of the transformation unless proper corrections are applied. An iterative least-squares adjustment algorithm is developed to solve the nonlinear transformation equations incorporating distortion corrections. Experimental applications demonstrate that a relative precision on the order of one part in 40,000 is achievable without tedious laboratory calibrations of the camera.
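
    A small sketch of the kind of iterative least-squares adjustment described: a pinhole model with one radial distortion term fitted to 2D-3D correspondences with nonlinear least squares. The model, the parameterization (no pose parameters), and the synthetic data are illustrative assumptions, not the paper's specific algorithm.

      import numpy as np
      from scipy.optimize import least_squares

      def project(params, pts3d):
          # Pinhole projection with one radial distortion term; the camera
          # frame is assumed aligned with the object frame for brevity.
          f, cx, cy, k1 = params
          xn, yn = pts3d[:, 0] / pts3d[:, 2], pts3d[:, 1] / pts3d[:, 2]
          d = 1.0 + k1 * (xn**2 + yn**2)
          return np.column_stack([f * xn * d + cx, f * yn * d + cy])

      def calibrate(pts3d, pts2d, init=(1000.0, 320.0, 240.0, 0.0)):
          # Iterative least-squares adjustment of the nonlinear camera model.
          resid = lambda p: (project(p, pts3d) - pts2d).ravel()
          return least_squares(resid, init).x

      rng = np.random.default_rng(2)
      pts3d = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(100, 3))
      true_params = np.array([1200.0, 310.0, 250.0, -0.15])
      pts2d = project(true_params, pts3d) + rng.normal(scale=0.3, size=(100, 2))
      print(calibrate(pts3d, pts2d))  # should land close to true_params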

  6. a Robust Method for Stereo Visual Odometry Based on Multiple Euclidean Distance Constraint and Ransac Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Q.; Tong, X.; Liu, S.; Lu, X.; Liu, S.; Chen, P.; Jin, Y.; Xie, H.

    2017-07-01

    Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps for robotic motion estimation, and it largely influences precision and robustness. In this work, we choose Oriented FAST and Rotated BRIEF (ORB) features, considering both accuracy and speed. For greater robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and the Brute Force (BF) matcher is used to find the correspondences between the two images for the space intersection. Then the EDC and RANSAC algorithms are carried out to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when feature points in the left image at the next epoch are matched with those in the current left image, EDC and RANSAC are performed again. Even after these steps, a few mismatched points occasionally remain, so RANSAC is applied a third time to eliminate the effects of those outliers on the estimation of the ego-motion parameters (interior and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset, and the results demonstrate its high robustness.
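
    A minimal OpenCV sketch of the matching-and-outlier-rejection stage described above (ORB detection, brute-force matching, a Euclidean-distance-style displacement check, then RANSAC); it is a generic illustration rather than the authors' pipeline, and the image file names and thresholds are placeholders.

      import cv2
      import numpy as np

      def match_and_filter(img_left, img_right, max_px_shift=80.0):
          # ORB features + brute-force Hamming matching with cross-check.
          orb = cv2.ORB_create(nfeatures=2000)
          k1, d1 = orb.detectAndCompute(img_left, None)
          k2, d2 = orb.detectAndCompute(img_right, None)
          matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
          p1 = np.float32([k1[m.queryIdx].pt for m in matches])
          p2 = np.float32([k2[m.trainIdx].pt for m in matches])

          # Euclidean-distance check: drop correspondences whose image
          # displacement exceeds a threshold (a stand-in for the EDC step).
          keep = np.linalg.norm(p1 - p2, axis=1) < max_px_shift
          p1, p2 = p1[keep], p2[keep]

          # RANSAC on the fundamental matrix rejects the remaining outliers.
          _, inliers = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 1.0, 0.999)
          mask = inliers.ravel().astype(bool)
          return p1[mask], p2[mask]

      left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder paths
      right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
      pts_l, pts_r = match_and_filter(left, right)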

  7. Two-stage perceptual learning to break visual crowding.

    PubMed

    Zhu, Ziyun; Fan, Zhenzhi; Fang, Fang

    2016-01-01

    When a target is presented with nearby flankers in the peripheral visual field, it becomes harder to identify, which is referred to as crowding. Crowding sets a fundamental limit of object recognition in peripheral vision, preventing us from fully appreciating cluttered visual scenes. We trained adult human subjects on a crowded orientation discrimination task and investigated whether crowding could be completely eliminated by training. We discovered a two-stage learning process with this training task. In the early stage, when the target and flankers were separated beyond a certain distance, subjects acquired a relatively general ability to break crowding, as evidenced by the fact that the breaking of crowding could transfer to another crowded orientation, even a crowded motion stimulus, although the transfer to the opposite visual hemi-field was weak. In the late stage, like many classical perceptual learning effects, subjects' performance gradually improved and showed specificity to the trained orientation. We also found that, when the target and flankers were spaced too finely, training could only reduce, rather than completely eliminate, the crowding effect. This two-stage learning process illustrates a learning strategy for our brain to deal with the notoriously difficult problem of identifying peripheral objects in clutter. The brain first learned to solve the "easy and general" part of the problem (i.e., improving the processing resolution and segmenting the target and flankers) and then tackled the "difficult and specific" part (i.e., refining the representation of the target).

  8. Visual interface for space and terrestrial analysis

    NASA Technical Reports Server (NTRS)

    Dombrowski, Edmund G.; Williams, Jason R.; George, Arthur A.; Heckathorn, Harry M.; Snyder, William A.

    1995-01-01

    The management of large geophysical and celestial data bases is now, more than ever, the most critical path to timely data analysis. With today's large volume data sets from multiple satellite missions, analysts face the task of defining useful data bases from which data and metadata (information about data) can be extracted readily in a meaningful way. Visualization, following an object-oriented design, is a fundamental method of organizing and handling data. Humans, by nature, easily accept pictorial representations of data. Therefore graphically oriented user interfaces are appealing, as long as they remain simple to produce and use. The Visual Interface for Space and Terrestrial Analysis (VISTA) system, currently under development at the Naval Research Laboratory's Backgrounds Data Center (BDC), has been designed with these goals in mind. Its graphical user interface (GUI) allows the user to perform queries, visualization, and analysis of atmospheric and celestial backgrounds data.

  9. Cross-sensory reference frame transfer in spatial memory: the case of proprioceptive learning.

    PubMed

    Avraamides, Marios N; Sarrou, Mikaella; Kelly, Jonathan W

    2014-04-01

    In three experiments, we investigated whether the information available to visual perception prior to encoding the locations of objects in a path through proprioception would influence the reference direction from which the spatial memory was formed. Participants walked a path whose orientation was misaligned to the walls of the enclosing room and to the square sheet that covered the path prior to learning (Exp. 1) and, in addition, to the intrinsic structure of a layout studied visually prior to walking the path and to the orientation of stripes drawn on the floor (Exps. 2 and 3). Despite the availability of prior visual information, participants constructed spatial memories that were aligned with the canonical axes of the path, as opposed to the reference directions primed by visual experience. The results are discussed in the context of previous studies documenting transfer of reference frames within and across perceptual modalities.

  10. The influence of object similarity and orientation on object-based cueing.

    PubMed

    Hein, Elisabeth; Blaschke, Stefan; Rolke, Bettina

    2017-01-01

    Responses to targets that appear at a noncued position within the same object (invalid-same) compared to a noncued position at an equidistant different object (invalid-different) tend to be faster and more accurate. These cueing effects have been taken as evidence that visual attention can be object based (Egly, Driver, & Rafal, Journal of Experimental Psychology: General, 123, 161-177, 1994). Recent findings, however, have shown that the object-based cueing effect is influenced by object orientation, suggesting that the cueing effect might be due to a more general facilitation of attentional shifts across the horizontal meridian (Al-Janabi & Greenberg, Attention, Perception, & Psychophysics, 1-17, 2016; Pilz, Roggeveen, Creighton, Bennet, & Sekuler, PLOS ONE, 7, e30693, 2012). The aim of this study was to investigate whether the object-based cueing effect is influenced by object similarity and orientation. According to the object-based attention account, objects that are less similar to each other should elicit stronger object-based cueing effects independent of object orientation, whereas the horizontal meridian theory would not predict any effect of object similarity. We manipulated object similarity by using a color (Exp. 1, Exp. 2A) or shape change (Exp. 2B) to distinguish two rectangles in a variation of the classic two-rectangle paradigm (Egly et al., 1994). We found that the object-based cueing effects were influenced by the orientation of the rectangles and strengthened by object dissimilarity. We suggest that object-based cueing effects are strongly affected by the facilitation of attention along the horizontal meridian, but that they also have an object-based attentional component, which is revealed when the dissimilarity between the presented objects is accentuated.

  11. Induced Stress, Artificial Environment, Simulated Tactical Operations Center Model

    DTIC Science & Technology

    1973-06-01

    oriented activities or, at best, the application of doctrinal concepts to command post exercises. Unlike mechanical skills, weapon’s...training model identified as APSTRAT, an acronym indicating aptitude and strategies, be considered as a point of reference. Several instructional...post providing visual and aural sensing tasks and training-objective-oriented performance tasks. Finally, he concludes that failure should be

  12. Ground-plane influences on size estimation in early visual processing.

    PubMed

    Champion, Rebecca A; Warren, Paul A

    2010-07-21

    Ground-planes have an important influence on the perception of 3D space (Gibson, 1950) and it has been shown that the assumption that a ground-plane is present in the scene plays a role in the perception of object distance (Bruno & Cutting, 1988). Here, we investigate whether this influence is exerted at an early stage of processing, to affect the rapid estimation of 3D size. Participants performed a visual search task in which they searched for a target object that was larger or smaller than distracter objects. Objects were presented against a background that contained either a frontoparallel or slanted 3D surface, defined by texture gradient cues. We measured the effect on search performance of target location within the scene (near vs. far) and how this was influenced by scene orientation (which, e.g., might be consistent with a ground or ceiling plane, etc.). In addition, we investigated how scene orientation interacted with texture gradient information (indicating surface slant), to determine how these separate cues to scene layout were combined. We found that the difference in target detection performance between targets at the front and rear of the simulated scene was maximal when the scene was consistent with a ground-plane - consistent with the use of an elevation cue to object distance. In addition, we found a significant increase in the size of this effect when texture gradient information (indicating surface slant) was present, but no interaction between texture gradient and scene orientation information. We conclude that scene orientation plays an important role in the estimation of 3D size at an early stage of processing, and suggest that elevation information is linearly combined with texture gradient information for the rapid estimation of 3D size. Copyright 2010 Elsevier Ltd. All rights reserved.

  13. Fear improves mental rotation of low-spatial-frequency visual representation.

    PubMed

    Borst, Grégoire

    2013-10-01

    Previous studies have demonstrated that the brief presentation of a fearful face improves not only low-level visual processing such as contrast and orientation sensitivity but also improves visuospatial processing. In the present study, we investigated whether fear improves mental rotation efficiency (i.e., the mental rotation rate) because of the effect of fear on the sensitivity of magnocellular neurons. We asked 2 groups of participants to perform a mental rotation task with either low-pass or high-pass filtered 3-dimensional objects. Following the presentation of a fearful face, participants mentally rotated objects faster compared with when a neutral face was presented but only for low-pass filtered objects. The results suggest that fear improves mental rotation efficiency by increasing sensitivity to motion-related visual information within the magnocellular pathway.

  14. Toward semantic-based retrieval of visual information: a model-based approach

    NASA Astrophysics Data System (ADS)

    Park, Youngchoon; Golshani, Forouzan; Panchanathan, Sethuraman

    2002-07-01

    This paper centers on the problem of automated visual content classification. To enable classification-based image or visual object retrieval, we propose a new image representation scheme called the visual context descriptor (VCD), a multidimensional vector in which each element represents the frequency of a unique visual property of an image or a region. VCD utilizes predetermined quality dimensions (i.e., types of features and quantization levels) and semantic model templates mined a priori. Not only observed visual cues but also contextually relevant visual features are proportionally incorporated in VCD. The contextual relevance of a visual cue to a semantic class is determined by correlation analysis of ground-truth samples. Such co-occurrence analysis of visual cues requires the transformation of a real-valued visual feature vector (e.g., color histogram, Gabor texture) into discrete events (e.g., terms in text). Good-features-to-track, the rule of thirds, iterative k-means clustering, and TSVQ are involved in transforming feature vectors into unified symbolic representations called visual terms. Similarity-based visual cue frequency estimation is also proposed and used to ensure the correctness of model learning and matching, since sparse sample data would otherwise make frequency estimates of visual cues unstable. The proposed method naturally allows the integration of heterogeneous visual, temporal, or spatial cues in a single classification or matching framework, and can easily be integrated into a semantic knowledge base such as a thesaurus or ontology. Robust semantic visual model template creation and object-based image retrieval are demonstrated based on the proposed content description scheme.
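
    The transformation of real-valued feature vectors into discrete "visual terms" described above can be approximated with a simple vector-quantization step. The sketch below uses k-means from scikit-learn on hypothetical color-histogram features; it only illustrates the idea of a visual vocabulary and term-frequency vector, not the exact VCD construction.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(42)
    features = rng.random((500, 64))          # e.g., 64-bin color histograms of image regions

    # Learn a vocabulary of 32 "visual terms" by clustering the feature space.
    vocab = KMeans(n_clusters=32, n_init=10, random_state=0).fit(features)

    def to_visual_terms(region_features):
        """Map each region's feature vector to the index of its nearest cluster centre."""
        return vocab.predict(region_features)

    def term_frequency(region_features, n_terms=32):
        """Frequency vector of visual terms for one image (the core of a VCD-like descriptor)."""
        counts = np.bincount(to_visual_terms(region_features), minlength=n_terms)
        return counts / max(counts.sum(), 1)

    print(term_frequency(rng.random((20, 64))))
    ```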

  15. Leveraging object-oriented development at Ames

    NASA Technical Reports Server (NTRS)

    Wenneson, Greg; Connell, John

    1994-01-01

    This paper presents lessons learned by the Software Engineering Process Group (SEPG) from results of supporting two projects at NASA Ames using an Object Oriented Rapid Prototyping (OORP) approach supported by a full featured visual development environment. Supplemental lessons learned from a large project in progress and a requirements definition are also incorporated. The paper demonstrates how productivity gains can be made by leveraging the developer with a rich development environment, correct and early requirements definition using rapid prototyping, and earlier and better effort estimation and software sizing through object-oriented methods and metrics. Although the individual elements of OO methods, RP approach and OO metrics had been used on other separate projects, the reported projects were the first integrated usage supported by a rich development environment. Overall the approach used was twice as productive (measured by hours per OO Unit) as a C++ development.

  16. Visual short-term memory for oriented, colored objects.

    PubMed

    Shin, Hongsup; Ma, Wei Ji

    2017-08-01

    A central question in the study of visual short-term memory (VSTM) has been whether its basic units are objects or features. Most studies addressing this question have used change detection tasks in which the feature value before the change is highly discriminable from the feature value after the change. This approach assumes that memory noise is negligible, which recent work has shown not to be the case. Here, we investigate VSTM for orientation and color within a noisy-memory framework, using change localization with a variable magnitude of change. A specific consequence of the noise is that it is necessary to model the inference (decision) stage. We find that (a) orientation and color have independent pools of memory resource (consistent with classic results); (b) an irrelevant feature dimension is either encoded but ignored during decision-making, or encoded with low precision and taken into account during decision-making; and (c) total resource available in a given feature dimension is lower in the presence of task-relevant stimuli that are neutral in that feature dimension. We propose a framework in which feature resource comes both in packaged and in targeted form.

  17. HipMatch: an object-oriented cross-platform program for accurate determination of cup orientation using 2D-3D registration of single standard X-ray radiograph and a CT volume.

    PubMed

    Zheng, Guoyan; Zhang, Xuan; Steppacher, Simon D; Murphy, Stephen B; Siebenrock, Klaus A; Tannast, Moritz

    2009-09-01

    The widely used procedure of evaluating cup orientation following total hip arthroplasty using a single standard anteroposterior (AP) radiograph is known to be inaccurate, largely due to the wide variability in individual pelvic orientation relative to the X-ray plate. 2D-3D image registration methods have been introduced for an accurate determination of the post-operative cup alignment with respect to an anatomical reference extracted from the CT data. Although encouraging results have been reported, their extensive use in clinical routine is still limited. This may be explained by their requirement of a CAD model of the prosthesis, which is often difficult to obtain from the manufacturer for proprietary reasons, and by their requirement of either multiple radiographs or a radiograph-specific calibration, both of which are not available for most retrospective studies. To address these issues, we developed and validated an object-oriented cross-platform program called "HipMatch", in which a hybrid 2D-3D registration scheme combining an iterative landmark-to-ray registration with a 2D-3D intensity-based registration was implemented to estimate the rigid transformation between a pre-operative CT volume and the post-operative X-ray radiograph for a precise estimation of cup alignment. No CAD model of the prosthesis is required. Quantitative and qualitative results evaluated on cadaveric and clinical datasets are given, which indicate the robustness and accuracy of the program. HipMatch is written in the object-oriented programming language C++ using the cross-platform software Qt (TrollTech, Oslo, Norway), VTK, and Coin3D, and is portable to any platform.

  18. Reference Frames and 3-D Shape Perception of Pictured Objects: On Verticality and Viewpoint-From-Above

    PubMed Central

    van Doorn, Andrea J.; Wagemans, Johan

    2016-01-01

    Research on the influence of reference frames has generally focused on visual phenomena such as the oblique effect, the subjective visual vertical, the perceptual upright, and ambiguous figures. Another line of research concerns mental rotation studies in which participants had to discriminate between familiar or previously seen 2-D figures or pictures of 3-D objects and their rotated versions. In the present study, we disentangled the influence of the environmental and the viewer-centered reference frame, as classically done, by comparing the performances obtained in various picture and participant orientations. However, this time, the performance is the pictorial relief: the probed 3-D shape percept of the depicted object reconstructed from the local attitude settings of the participant. Comparisons between the pictorial reliefs based on different picture and participant orientations led to two major findings. First, in general, the pictorial reliefs were highly similar if the orientation of the depicted object was vertical with regard to the environmental or the viewer-centered reference frame. Second, a viewpoint-from-above interpretation could almost completely account for the shears occurring between the pictorial reliefs. More specifically, the shears could largely be considered as combinations of slants generated from the viewpoint-from-above, which was determined by the environmental as well as by the viewer-centered reference frame. PMID:27433329

  19. An integrated GIS-based data model for multimodal urban public transportation analysis and management

    NASA Astrophysics Data System (ADS)

    Chen, Shaopei; Tan, Jianjun; Ray, C.; Claramunt, C.; Sun, Qinqin

    2008-10-01

    Diversity is one of the main characteristics of transportation data, which are collected from multiple sources and in multiple formats and can be extremely complex and disparate. Moreover, these multimodal transportation data are usually characterised by spatial and temporal properties. Multimodal transportation network data modelling is both an engineering and a research domain that has motivated the design of a number of spatio-temporal data models in geographic information systems (GIS). However, the application of these models to multimodal transportation networks is still a challenging task. This research addresses the challenge from both an integrated multimodal data organization and an object-oriented modelling perspective, that is, how a complex urban transportation network should be organized, represented and modelled appropriately from a multimodal point of view using object-oriented modelling methods. We propose an integrated GIS-based data model for multimodal urban transportation networks that lays a foundation for enhanced multimodal transportation network analysis and management. This modelling method organizes and integrates multimodal transit network data and supports multiple representations of spatio-temporal objects and relationships as both visual and graphic views. The data model is expressed using a spatio-temporal object-oriented modelling method, i.e., the unified modelling language (UML) extended with spatial and temporal plug-ins for visual languages (PVLs), which provides essential support for spatio-temporal data modelling in transportation GIS.
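
    As a hedged illustration of what an object-oriented multimodal network model can look like in code (the class names here are hypothetical and do not come from the published UML model), the sketch below represents stops, modes, and timed links as small Python classes.

    ```python
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Stop:
        stop_id: str
        lat: float
        lon: float
        modes: List[str] = field(default_factory=list)   # e.g. ["bus", "metro"]

    @dataclass
    class Link:
        origin: Stop
        destination: Stop
        mode: str
        travel_time_min: float        # temporal attribute of the spatio-temporal object

    @dataclass
    class MultimodalNetwork:
        stops: List[Stop] = field(default_factory=list)
        links: List[Link] = field(default_factory=list)

        def transfers_at(self, stop: Stop) -> List[str]:
            """Modes available for transfer at a given stop."""
            return stop.modes

    # Hypothetical usage
    a = Stop("A", 23.1, 113.3, ["bus", "metro"])
    b = Stop("B", 23.2, 113.4, ["bus"])
    net = MultimodalNetwork([a, b], [Link(a, b, "bus", 12.5)])
    print(net.transfers_at(a))
    ```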

  20. More than a feeling: incidental learning of array geometry by blindfolded adult humans revealed through touch.

    PubMed

    Sturz, Bradley R; Green, Marshall L; Gaskin, Katherine A; Evans, Alicia C; Graves, April A; Roberts, Jonathan E

    2013-02-15

    View-based matching theories of orientation suggest that mobile organisms encode a visual memory consisting of a visual panorama from a target location and maneuver to reduce discrepancy between current visual perception and this stored visual memory to return to a location. Recent success of such theories to explain the orientation behavior of insects and birds raises questions regarding the extent to which such an explanation generalizes to other species. In the present study, we attempted to determine the extent to which such view-based matching theories may explain the orientation behavior of a mammalian species (in this case adult humans). We modified a traditional enclosure orientation task so that it involved only the use of the haptic sense. The use of a haptic orientation task to investigate the extent to which view-based matching theories may explain the orientation behavior of adult humans appeared ideal because it provided an opportunity for us to explicitly prohibit the use of vision. Specifically, we trained disoriented and blindfolded human participants to search by touch for a target object hidden in one of four locations marked by distinctive textural cues located on top of four discrete landmarks arranged in a rectangular array. Following training, we removed the distinctive textural cues and probed the extent to which participants learned the geometry of the landmark array. In the absence of vision and the trained textural cues, participants showed evidence that they learned the geometry of the landmark array. Such evidence cannot be explained by an appeal to view-based matching strategies and is consistent with explanations of spatial orientation related to the incidental learning of environmental geometry.

  1. Orientation in a crowded environment: can King Penguin (Aptenodytes patagonicus) chicks find their creches after a displacement?

    PubMed

    Nesterova, Anna P; Mardon, Jérôme; Bonadonna, Francesco

    2009-01-01

    For seabird species, the presence of conspecifics in a crowded breeding colony can obstruct locally available orientation cues. Thus, navigation to specific locations can present a challenging problem. We investigated short-range orientation in King Penguin (Aptenodytes patagonicus) chicks that live in a large and densely populated colony. The two main objectives were to determine whether chicks displaced to a novel location away from the colony (i) can orient towards the colony and return to their crèche and (ii) rely on visual or non-visual cues for orientation. To address these questions, a circular arena was constructed 100 m away from the colony. Chicks were released in the arena during the day and at night. After the orientation experiment in the arena, chicks were allowed to return to their home crèche, if they could. Our results showed that, during day trials, chicks preferred the half of the arena closer to the colony, but not at night. However, at night, birds spent more time on 'the colony half' of the arena if the wind blew from the colony direction. When animals were allowed to leave the arena, 98% of chicks homed during the day but only 62% of chicks homed at night. Chicks that homed at night also took longer to find their crèche. The experiments suggest that King Penguin chicks can find their crèche from a novel location. Visual cues are important for homing but, when visual cues are not present, animals are able to make use of other information carried by the wind.

  2. Benefits of object-oriented models and ModeliChart: modern tools and methods for the interdisciplinary research on smart biomedical technology.

    PubMed

    Gesenhues, Jonas; Hein, Marc; Ketelhut, Maike; Habigt, Moriz; Rüschen, Daniel; Mechelinck, Mare; Albin, Thivaharan; Leonhardt, Steffen; Schmitz-Rode, Thomas; Rossaint, Rolf; Autschbach, Rüdiger; Abel, Dirk

    2017-04-01

    Computational models of biophysical systems generally constitute an essential component in the realization of smart biomedical technological applications. Typically, the development process of such models is characterized by a great extent of collaboration between different interdisciplinary parties. Furthermore, due to the fact that many underlying mechanisms and the necessary degree of abstraction of biophysical system models are unknown beforehand, the steps of the development process of the application are iteratively repeated when the model is refined. This paper presents some methods and tools to facilitate the development process. First, the principle of object-oriented (OO) modeling is presented and the advantages over classical signal-oriented modeling are emphasized. Second, our self-developed simulation tool ModeliChart is presented. ModeliChart was designed specifically for clinical users and allows independently performing in silico studies in real time including intuitive interaction with the model. Furthermore, ModeliChart is capable of interacting with hardware such as sensors and actuators. Finally, it is presented how optimal control methods in combination with OO models can be used to realize clinically motivated control applications. All methods presented are illustrated on an exemplary clinically oriented use case of the artificial perfusion of the systemic circulation.
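
    To make the contrast with signal-oriented modeling concrete, the sketch below shows a minimal object-oriented component model of a lumped circulatory compartment in Python. The authors' models are built in an object-oriented modeling environment rather than plain Python, so this is only an analogy with hypothetical class names and parameter values.

    ```python
    class Compartment:
        """A lumped compliance element: stores volume, exposes pressure."""
        def __init__(self, compliance, unstressed_volume, volume):
            self.C = compliance
            self.V0 = unstressed_volume
            self.V = volume

        @property
        def pressure(self):
            return (self.V - self.V0) / self.C

    class Resistance:
        """A flow path between two compartments, driven by their pressure difference."""
        def __init__(self, upstream, downstream, resistance):
            self.up, self.down, self.R = upstream, downstream, resistance

        def step(self, dt):
            flow = (self.up.pressure - self.down.pressure) / self.R
            self.up.V -= flow * dt
            self.down.V += flow * dt

    # Two compartments connected by one resistance; pressures equilibrate over time.
    arterial = Compartment(compliance=1.5, unstressed_volume=500.0, volume=900.0)
    venous = Compartment(compliance=50.0, unstressed_volume=2500.0, volume=2600.0)
    path = Resistance(arterial, venous, resistance=1.0)
    for _ in range(1000):
        path.step(dt=0.01)
    print(round(arterial.pressure, 2), round(venous.pressure, 2))
    ```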

  3. Internal attention to features in visual short-term memory guides object learning

    PubMed Central

    Fan, Judith E.; Turk-Browne, Nicholas B.

    2013-01-01

    Attending to objects in the world affects how we perceive and remember them. What are the consequences of attending to an object in mind? In particular, how does reporting the features of a recently seen object guide visual learning? In three experiments, observers were presented with abstract shapes in a particular color, orientation, and location. After viewing each object, observers were cued to report one feature from visual short-term memory (VSTM). In a subsequent test, observers were cued to report features of the same objects from visual long-term memory (VLTM). We tested whether reporting a feature from VSTM: (1) enhances VLTM for just that feature (practice-benefit hypothesis), (2) enhances VLTM for all features (object-based hypothesis), or (3) simultaneously enhances VLTM for that feature and suppresses VLTM for unreported features (feature-competition hypothesis). The results provided support for the feature-competition hypothesis, whereby the representation of an object in VLTM was biased towards features reported from VSTM and away from unreported features (Experiment 1). This bias could not be explained by the amount of sensory exposure or response learning (Experiment 2) and was amplified by the reporting of multiple features (Experiment 3). Taken together, these results suggest that selective internal attention induces competitive dynamics among features during visual learning, flexibly tuning object representations to align with prior mnemonic goals. PMID:23954925

  4. Internal attention to features in visual short-term memory guides object learning.

    PubMed

    Fan, Judith E; Turk-Browne, Nicholas B

    2013-11-01

    Attending to objects in the world affects how we perceive and remember them. What are the consequences of attending to an object in mind? In particular, how does reporting the features of a recently seen object guide visual learning? In three experiments, observers were presented with abstract shapes in a particular color, orientation, and location. After viewing each object, observers were cued to report one feature from visual short-term memory (VSTM). In a subsequent test, observers were cued to report features of the same objects from visual long-term memory (VLTM). We tested whether reporting a feature from VSTM: (1) enhances VLTM for just that feature (practice-benefit hypothesis), (2) enhances VLTM for all features (object-based hypothesis), or (3) simultaneously enhances VLTM for that feature and suppresses VLTM for unreported features (feature-competition hypothesis). The results provided support for the feature-competition hypothesis, whereby the representation of an object in VLTM was biased towards features reported from VSTM and away from unreported features (Experiment 1). This bias could not be explained by the amount of sensory exposure or response learning (Experiment 2) and was amplified by the reporting of multiple features (Experiment 3). Taken together, these results suggest that selective internal attention induces competitive dynamics among features during visual learning, flexibly tuning object representations to align with prior mnemonic goals. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Perception of second- and third-order orientation signals and their interactions

    PubMed Central

    Victor, Jonathan D.; Thengone, Daniel J.; Conte, Mary M.

    2013-01-01

    Orientation signals, which are crucial to many aspects of visual function, are more complex and varied in the natural world than in the stimuli typically used for laboratory investigation. Gratings and lines have a single orientation, but in natural stimuli, local features have multiple orientations, and multiple orientations can occur even at the same location. Moreover, orientation cues can arise not only from pairwise spatial correlations, but from higher-order ones as well. To investigate these orientation cues and how they interact, we examined segmentation performance for visual textures in which the strengths of different kinds of orientation cues were varied independently, while controlling potential confounds such as differences in luminance statistics. Second-order cues (the kind present in gratings) at different orientations are largely processed independently: There is no cancellation of positive and negative signals at orientations that differ by 45°. Third-order orientation cues are readily detected and interact only minimally with second-order cues. However, they combine across orientations in a different way: Positive and negative signals largely cancel if the orientations differ by 90°. Two additional elements are superimposed on this picture. First, corners play a special role. When second-order orientation cues combine to produce corners, they provide a stronger signal for texture segregation than can be accounted for by their individual effects. Second, while the object versus background distinction does not influence processing of second-order orientation cues, this distinction influences the processing of third-order orientation cues. PMID:23532909

  6. Do reference surfaces influence exocentric pointing?

    PubMed

    Doumen, M J A; Kappers, A M L; Koenderink, J J

    2008-06-01

    All elements of the visual field are known to influence the perception of the egocentric distances of objects. Not only the ground surface of a scene, but also the surface at the back or other objects in the scene can affect an observer's egocentric distance estimation of an object. We tested whether this is also true for exocentric direction estimations. We used an exocentric pointing task to test whether the presence of poster-boards in the visual scene would influence the perception of the exocentric direction between two test-objects. In this task the observer has to direct a pointer, with a remote control, to a target. We placed the poster-boards at various positions in the visual field to test whether these boards would affect the settings of the observer. We found that they only affected the settings when they directly served as a reference for orienting the pointer to the target.

  7. Verbal definitions of familiar objects in blind children reflect their peculiar perceptual experience.

    PubMed

    Vinter, A; Fernandes, V; Orlandi, O; Morgan, P

    2013-11-01

    The aim of the present study was to examine to what extent the verbal definitions of familiar objects produced by blind children reflect their peculiar perceptual experience and, in consequence, differ from those produced by sighted children. Ninety-six visually impaired children, aged between 6 and 14 years, and 32 age-matched sighted children had to define 10 words denoting concrete animate or inanimate familiar objects. The blind children evoked the tactile and auditory characteristics of objects and expressed personal perceptual experiences in their definitions. The sighted children relied on visual perception, and produced more visually oriented verbalism. In contrast, no differences were observed between children in their propensity to include functional attributes in their verbal definitions. The results are discussed in line with embodied views of cognition that postulate mandatory perceptuomotor processing of words during access to their meaning. © 2012 John Wiley & Sons Ltd.

  8. A neurophysiologically plausible population code model for feature integration explains visual crowding.

    PubMed

    van den Berg, Ronald; Roerdink, Jos B T M; Cornelissen, Frans W

    2010-01-22

    An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called "crowding". Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, "compulsory averaging", and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.
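
    The core idea of a population-code account of crowding (pooling of orientation signals across target and flankers) can be sketched in a few lines, assuming idealized orientation tuning curves and population-vector decoding. This is an illustration of compulsory averaging, not the authors' full model.

    ```python
    import numpy as np

    def population_response(stim_orients, pref_orients, kappa=4.0):
        """Summed response of orientation-tuned units to one or more stimuli (degrees)."""
        resp = np.zeros_like(pref_orients, dtype=float)
        for theta in np.atleast_1d(stim_orients):
            delta = np.deg2rad(2 * (pref_orients - theta))   # orientation is 180-deg periodic
            resp += np.exp(kappa * (np.cos(delta) - 1.0))
        return resp

    def decode(resp, pref_orients):
        """Population-vector decoding of the represented orientation."""
        angles = np.deg2rad(2 * pref_orients)
        vec = np.sum(resp * np.exp(1j * angles))
        return np.rad2deg(np.angle(vec)) / 2 % 180

    prefs = np.arange(0, 180, 1.0)
    # A 20-deg target flanked by two 60-deg elements: the decoded orientation
    # lies between them, mimicking compulsory averaging under crowding.
    print(decode(population_response([20, 60, 60], prefs), prefs))
    ```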

  9. A visual short-term memory advantage for objects of expertise

    PubMed Central

    Curby, Kim M.; Glazek, Kuba; Gauthier, Isabel

    2014-01-01

    Visual short-term memory (VSTM) is limited, especially for complex objects. Its capacity, however, is greater for faces than for other objects, an advantage that may stem from the holistic nature of face processing. If the holistic processing explains this advantage, then object expertise—which also relies on holistic processing—should endow experts with a VSTM advantage. We compared VSTM for cars among car experts to that among car novices. Car experts, but not car novices, demonstrated a VSTM advantage similar to that for faces; this advantage was orientation-specific and was correlated with an individual's level of car expertise. Control experiments ruled out accounts based solely on verbal- or long-term memory representations. These findings suggest that the processing advantages afforded by visual expertise result in domain-specific increases in VSTM capacity, perhaps by allowing experts to maximize the use of an inherently limited VSTM system. PMID:19170473

  10. Heuristics of reasoning and analogy in children's visual perspective taking.

    PubMed

    Yaniv, I; Shatz, M

    1990-10-01

    We propose that children's reasoning about others' visual perspectives is guided by simple heuristics based on a perceiver's line of sight and salient features of the object met by that line. In 3 experiments employing a 2-perceiver analogy task, children aged 3-6 were generally better able to reproduce a perceiver's perspective if a visual cue in the perceiver's line of sight sufficed to distinguish it from alternatives. Children had greater difficulty when the task hinged on attending to configural cues. Availability of distinctive cues affixed on the objects' sides facilitated solution of the symmetrical orientations. These and several other related findings reported in the literature are traced to children's reliance on heuristics of reasoning.

  11. Augmented reality three-dimensional object visualization and recognition with axially distributed sensing.

    PubMed

    Markman, Adam; Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-01-15

    An augmented reality (AR) smartglass display combines real-world scenes with digital information enabling the rapid growth of AR-based applications. We present an augmented reality-based approach for three-dimensional (3D) optical visualization and object recognition using axially distributed sensing (ADS). For object recognition, the 3D scene is reconstructed, and feature extraction is performed by calculating the histogram of oriented gradients (HOG) of a sliding window. A support vector machine (SVM) is then used for classification. Once an object has been identified, the 3D reconstructed scene with the detected object is optically displayed in the smartglasses allowing the user to see the object, remove partial occlusions of the object, and provide critical information about the object such as 3D coordinates, which are not possible with conventional AR devices. To the best of our knowledge, this is the first report on combining axially distributed sensing with 3D object visualization and recognition for applications to augmented reality. The proposed approach can have benefits for many applications, including medical, military, transportation, and manufacturing.
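
    The recognition stage described above (HOG features of a sliding window fed to an SVM) can be sketched with scikit-image and scikit-learn. The window size, HOG parameters, and training data below are placeholders, not the parameters used in the paper.

    ```python
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import SVC

    def hog_descriptor(window):
        """Histogram of oriented gradients for one grayscale window."""
        return hog(window, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), feature_vector=True)

    # Placeholder training set: random 64x64 "object" vs. "background" windows.
    rng = np.random.default_rng(3)
    windows = rng.random((40, 64, 64))
    labels = np.repeat([1, 0], 20)
    clf = SVC(kernel="linear").fit([hog_descriptor(w) for w in windows], labels)

    def detect(scene, step=16, size=64):
        """Slide a window across the reconstructed scene and classify each position."""
        hits = []
        for r in range(0, scene.shape[0] - size + 1, step):
            for c in range(0, scene.shape[1] - size + 1, step):
                if clf.predict([hog_descriptor(scene[r:r + size, c:c + size])])[0] == 1:
                    hits.append((r, c))
        return hits
    ```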

  12. Stimulus homogeneity enhances implicit learning: evidence from contextual cueing.

    PubMed

    Feldmann-Wüstefeld, Tobias; Schubö, Anna

    2014-04-01

    Visual search for a target object is faster if the target is embedded in a repeatedly presented invariant configuration of distractors ('contextual cueing'). It has also been shown that the homogeneity of a context affects the efficiency of visual search: targets receive prioritized processing when presented in a homogeneous context compared to a heterogeneous context, presumably due to grouping processes at early stages of visual processing. The present study investigated in three experiments whether context homogeneity also affects contextual cueing. In Experiment 1, context homogeneity varied on three levels of the task-relevant dimension (orientation) and contextual cueing was most pronounced for context configurations with high orientation homogeneity. When context homogeneity varied on three levels of the task-irrelevant dimension (color) and orientation homogeneity was fixed, no modulation of contextual cueing was observed: high orientation homogeneity led to large contextual cueing effects (Experiment 2) and low orientation homogeneity led to low contextual cueing effects (Experiment 3), irrespective of color homogeneity. Enhanced contextual cueing for homogeneous context configurations suggests that grouping processes affect not only visual search but also implicit learning. We conclude that memory representations of context configurations are more easily acquired when context configurations can be processed as larger, grouped perceptual units. However, this form of implicit perceptual learning is only improved by stimulus homogeneity when stimulus homogeneity facilitates grouping processes on a dimension that is currently relevant in the task. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Neuropsychological Component of Imagery Processing

    DTIC Science & Technology

    1991-01-25

    and von Bonin, G. (1951). The Isocortex of Man. Urbana, IL: University of Illinois Press. Bauer, R. M., and Rubens, A. B. (1985). Agnosia. In K. M...Apperceptive agnosia: the specification and description of constructs. In Humphreys, G. W., and Riddoch, M. J. (1987a) (Eds.). Visual Object Processing: A...visual processing: agnosias, achromatopsia, Balint's syndrome and related difficulties of orientation and construction. In M.-M. Mesulam (Ed

  14. Parameter selection with the Hotelling observer in linear iterative image reconstruction for breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Rose, Sean D.; Roth, Jacob; Zimmerman, Cole; Reiser, Ingrid; Sidky, Emil Y.; Pan, Xiaochuan

    2018-03-01

    In this work we investigate an efficient implementation of a region-of-interest (ROI) based Hotelling observer (HO) in the context of parameter optimization for detection of a rod signal at two orientations in linear iterative image reconstruction for DBT. Our preliminary results suggest that ROI-HO performance trends may be efficiently estimated by modeling only the 2D plane perpendicular to the detector and containing the X-ray source trajectory. In addition, the ROI-HO is seen to exhibit orientation dependent trends in detectability as a function of the regularization strength employed in reconstruction. To further investigate the ROI-HO performance in larger 3D system models, we present and validate an iterative methodology for calculating the ROI-HO. Lastly, we present a real data study investigating the correspondence between ROI-HO performance trends and signal conspicuity. Conspicuity of signals in real data reconstructions is seen to track well with trends in ROI-HO detectability. In particular, we observe orientation dependent conspicuity matching the orientation dependent detectability of the ROI-HO.
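
    For reference, a Hotelling observer reduces to a linear template built from the mean signal difference and the data covariance, and its detectability index follows directly from the same quantities. The sketch below computes both from sample ROI data, with simulated noise standing in for reconstructed images; it is a generic HO, not the paper's channelized or system-specific implementation.

    ```python
    import numpy as np

    def hotelling(signal_present, signal_absent):
        """Hotelling template w = K^{-1} (mean difference) and detectability SNR."""
        ds = signal_present.mean(axis=0) - signal_absent.mean(axis=0)
        # Pooled covariance of the two classes (ROI pixels flattened per image).
        K = 0.5 * (np.cov(signal_present, rowvar=False) + np.cov(signal_absent, rowvar=False))
        w = np.linalg.solve(K + 1e-6 * np.eye(K.shape[0]), ds)   # regularized inverse
        snr2 = float(ds @ w)
        return w, np.sqrt(snr2)

    # Simulated ROI data: 200 samples of a 25-pixel ROI, weak rod-like signal added.
    rng = np.random.default_rng(7)
    absent = rng.normal(size=(200, 25))
    signal = np.zeros(25); signal[10:15] = 0.8                   # a short "rod" in the ROI
    present = rng.normal(size=(200, 25)) + signal
    w, snr = hotelling(present, absent)
    print("detectability SNR:", round(snr, 2))
    ```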

  15. Visual Debugging of Object-Oriented Systems With the Unified Modeling Language

    DTIC Science & Technology

    2004-03-01

    to be “the systematic and imaginative use of the technology of interactive computer graphics and the disciplines of graphic design, typography ... Graphics volume 23 no 6, pp. 893-901, 1999. [SHN98] Shneiderman, B. Designing the User Interface: Strategies for Effective Human-Computer Interaction...System Design Objectives ... System Architecture

  16. Tactical decisions for changeable cuttlefish camouflage: visual cues for choosing masquerade are relevant from a greater distance than visual cues used for background matching.

    PubMed

    Buresch, Kendra C; Ulmer, Kimberly M; Cramer, Corinne; McAnulty, Sarah; Davison, William; Mäthger, Lydia M; Hanlon, Roger T

    2015-10-01

    Cuttlefish use multiple camouflage tactics to evade their predators. Two common tactics are background matching (resembling the background to hinder detection) and masquerade (resembling an uninteresting or inanimate object to impede detection or recognition). We investigated how the distance and orientation of visual stimuli affected the choice of these two camouflage tactics. In the current experiments, cuttlefish were presented with three visual cues: a 2D horizontal floor, a 2D vertical wall, and a 3D object. Each was placed at several distances: directly beneath (in a circle whose diameter was one body length, BL); at zero BL (i.e., directly beside, but not beneath, the cuttlefish); at 1 BL; and at 2 BL. Cuttlefish continued to respond to 3D visual cues from a greater distance than to a horizontal or vertical stimulus. It appears that background matching is chosen when visual cues are relevant only in the immediate benthic surroundings. However, for masquerade, objects located multiple body lengths away remained relevant for the choice of camouflage. © 2015 Marine Biological Laboratory.

  17. A new method for recognizing quadric surfaces from range data and its application to telerobotics and automation

    NASA Technical Reports Server (NTRS)

    Alvertos, Nicolas; Dcunha, Ivan

    1992-01-01

    Determining the pose and orientation of an object is one of the central issues in 3-D recognition problems. Most of today's available techniques require considerable pre-processing, such as detecting edges or joints, fitting curves or surfaces to segment images, and trying to extract higher-order features from the input images. We present a method based on analytical geometry whereby all the rotation parameters of any quadric surface are determined and subsequently eliminated. This procedure is iterative in nature and was found to converge to the desired results in as few as three iterations. The approach enables us to position the quadric surface in a desired coordinate system and then to utilize the presented shape information to explicitly represent and recognize the 3-D surface. Experiments were conducted with simulated data for objects such as hyperboloids of one and two sheets, elliptic and hyperbolic paraboloids, elliptic and hyperbolic cylinders, ellipsoids, and quadric cones. Real data from quadric cones and cylinders were also utilized. Both of these sets yielded excellent results.
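
    The rotation-elimination step has a compact linear-algebra analogue: diagonalizing the quadratic-form matrix of the surface aligns the quadric with its principal axes. The sketch below shows this for a general quadric x^T A x + b^T x + c = 0 using a synthetic rotated cylinder; it illustrates the geometry only and is not the paper's specific iterative procedure.

    ```python
    import numpy as np

    def canonical_frame(A, b):
        """Rotate a quadric x^T A x + b^T x + c = 0 into its principal-axis frame."""
        eigvals, R = np.linalg.eigh(A)        # columns of R are the principal axes
        b_rot = R.T @ b                       # linear term expressed in the new frame
        return eigvals, R, b_rot

    # Synthetic test: a circular cylinder x^2 + y^2 = 1 rotated by 30 degrees about z.
    theta = np.deg2rad(30)
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1.0]])
    A_axis = np.diag([1.0, 1.0, 0.0])         # quadratic form of the axis-aligned cylinder
    A_rot = Rz @ A_axis @ Rz.T                # the same cylinder in a rotated frame
    eigvals, R, _ = canonical_frame(A_rot, np.zeros(3))
    print(np.round(eigvals, 6))               # recovers [0, 1, 1]: the cylinder signature
    ```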

  18. An Object-oriented Taxonomy of Medical Data Presentations

    PubMed Central

    Starren, Justin; Johnson, Stephen B.

    2000-01-01

    A variety of methods have been proposed for presenting medical data visually on computers. Discussion of and comparison among these methods have been hindered by a lack of consistent terminology. A taxonomy of medical data presentations based on object-oriented user interface principles is presented. Presentations are divided into five major classes—list, table, graph, icon, and generated text. These are subdivided into eight subclasses with simple inheritance and four subclasses with multiple inheritance. The various subclasses are reviewed and examples are provided. Issues critical to the development and evaluation of presentations are also discussed. PMID:10641959

  19. Short-term visual deprivation, tactile acuity, and haptic solid shape discrimination.

    PubMed

    Crabtree, Charles E; Norman, J Farley

    2014-01-01

    Previous psychophysical studies have reported conflicting results concerning the effects of short-term visual deprivation upon tactile acuity. Some studies have found that 45 to 90 minutes of total light deprivation produce significant improvements in participants' tactile acuity as measured with a grating orientation discrimination task. In contrast, a single 2011 study found no such improvement while attempting to replicate these earlier findings. A primary goal of the current experiment was to resolve this discrepancy in the literature by evaluating the effects of a 90-minute period of total light deprivation upon tactile grating orientation discrimination. We also evaluated the potential effect of short-term deprivation upon haptic 3-D shape discrimination using a set of naturally-shaped solid objects. According to previous research, short-term deprivation enhances performance in a tactile 2-D shape discrimination task - perhaps a similar improvement also occurs for haptic 3-D shape discrimination. The results of the current investigation demonstrate that not only does short-term visual deprivation not enhance tactile acuity, it additionally has no effect upon haptic 3-D shape discrimination. While visual deprivation had no effect in our study, there was a significant effect of experience and learning for the grating orientation task - the participants' tactile acuity improved over time, independent of whether they had, or had not, experienced visual deprivation.

  20. Iterating between Tools to Create and Edit Visualizations.

    PubMed

    Bigelow, Alex; Drucker, Steven; Fisher, Danyel; Meyer, Miriah

    2017-01-01

    A common workflow for visualization designers begins with a generative tool, like D3 or Processing, to create the initial visualization; and proceeds to a drawing tool, like Adobe Illustrator or Inkscape, for editing and cleaning. Unfortunately, this is typically a one-way process: once a visualization is exported from the generative tool into a drawing tool, it is difficult to make further, data-driven changes. In this paper, we propose a bridge model to allow designers to bring their work back from the drawing tool to re-edit in the generative tool. Our key insight is to recast this iteration challenge as a merge problem - similar to when two people are editing a document and changes between them need to be reconciled. We also present a specific instantiation of this model, a tool called Hanpuku, which bridges between D3 scripts and Illustrator. We show several examples of visualizations that are iteratively created using Hanpuku in order to illustrate the flexibility of the approach. We further describe several hypothetical tools that bridge between other visualization tools to emphasize the generality of the model.

  1. Lateralized electrical brain activity reveals covert attention allocation during speaking.

    PubMed

    Rommers, Joost; Meyer, Antje S; Praamstra, Peter

    2017-01-27

    Speakers usually begin to speak while only part of the utterance has been planned. Earlier work has shown that speech planning processes are reflected in speakers' eye movements as they describe visually presented objects. However, to-be-named objects can be processed to some extent before they have been fixated upon, presumably because attention can be allocated to objects covertly, without moving the eyes. The present study investigated whether EEG could track speakers' covert attention allocation as they produced short utterances to describe pairs of objects (e.g., "dog and chair"). The processing difficulty of each object was varied by presenting it in upright orientation (easy) or in upside down orientation (difficult). Background squares flickered at different frequencies in order to elicit steady-state visual evoked potentials (SSVEPs). The N2pc component, associated with the focusing of attention on an item, was detectable not only prior to speech onset, but also during speaking. The time course of the N2pc showed that attention shifted to each object in the order of mention prior to speech onset. Furthermore, greater processing difficulty increased the time speakers spent attending to each object. This demonstrates that the N2pc can track covert attention allocation in a naming task. In addition, an effect of processing difficulty at around 200-350ms after stimulus onset revealed early attention allocation to the second to-be-named object. The flickering backgrounds elicited SSVEPs, but SSVEP amplitude was not influenced by processing difficulty. These results help complete the picture of the coordination of visual information uptake and motor output during speaking. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Statistical Image Properties in Works from the Prinzhorn Collection of Artists with Schizophrenia

    PubMed Central

    Henemann, Gudrun Maria; Brachmann, Anselm; Redies, Christoph

    2017-01-01

    The Prinzhorn Collection preserves and exhibits thousands of visual artworks by patients who were diagnosed to suffer from mental disease. From this collection, we analyzed 1,256 images by 14 artists who were diagnosed with dementia praecox or schizophrenia. Six objective statistical properties that have been used previously to characterize visually aesthetic images were calculated. These properties reflect features of formal image composition, such as the complexity and distribution of oriented luminance gradients and edges, as well as Fourier spectral properties. Results for the artists with schizophrenia were compared to artworks from three public art collections of paintings and drawings that include highly acclaimed artworks as well as artworks of lesser artistic claim (control artworks). Many of the patients’ works did not differ from these control images. However, the artworks of 6 of the 14 artists with schizophrenia possess image properties that deviate from the range of values obtained for the control artworks. For example, the artworks of four of the patients are characterized by a relative dominance of specific edge orientations in their images (low first-order entropy of edge orientations). Three patients created artworks with a relatively high ratio of fine detail to coarse structure (high slope of the Fourier spectrum). In conclusion, the present exploratory study opens novel perspectives for the objective scientific investigation of visual artworks that were created by persons who suffer from schizophrenia. PMID:29312011
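
    Two of the statistical properties mentioned (first-order entropy of edge orientations and the slope of the radially averaged Fourier power spectrum) are easy to compute for a grayscale image. The sketch below is a generic implementation with illustrative bin counts and thresholds, not the exact pipeline used in the study.

    ```python
    import numpy as np

    def edge_orientation_entropy(img, n_bins=16, grad_thresh=0.05):
        """Shannon entropy of the gradient-orientation histogram (strong edges only)."""
        gy, gx = np.gradient(img.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.arctan2(gy, gx) % np.pi                 # orientations are 180-deg periodic
        hist, _ = np.histogram(ang[mag > grad_thresh * mag.max()], bins=n_bins, range=(0, np.pi))
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    def fourier_slope(img):
        """Slope of log power vs. log spatial frequency (radially averaged)."""
        power = np.abs(np.fft.fftshift(np.fft.fft2(img.astype(float)))) ** 2
        cy, cx = np.array(power.shape) // 2
        y, x = np.indices(power.shape)
        r = np.hypot(y - cy, x - cx).astype(int)
        radial = np.bincount(r.ravel(), power.ravel()) / np.maximum(np.bincount(r.ravel()), 1)
        freqs = np.arange(1, min(cy, cx))
        return float(np.polyfit(np.log(freqs), np.log(radial[1:min(cy, cx)]), 1)[0])

    rng = np.random.default_rng(5)
    img = rng.random((128, 128))
    print(edge_orientation_entropy(img), fourier_slope(img))
    ```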

  3. Object-oriented software design in semiautomatic building extraction

    NASA Astrophysics Data System (ADS)

    Guelch, Eberhard; Mueller, Hardo

    1997-08-01

    Developing a system for semiautomatic building acquisition is a complex process that requires constant integration and updating of software modules and user interfaces. To facilitate these processes, we apply an object-oriented design not only to the data but also to the software involved. We use the unified modeling language (UML) to describe the object-oriented modeling of the system at different levels of detail. We can distinguish between use cases from the user's point of view, which represent a sequence of actions yielding an observable result, and use cases for programmers, who can use the system as a class library to integrate the acquisition modules into their own software. The structure of the system is based on the model-view-controller (MVC) design pattern. An example from the integration of automated texture extraction for the visualization of results demonstrates the feasibility of this approach.

  4. Mental visualization of objects from cross-sectional images

    PubMed Central

    Wu, Bing; Klatzky, Roberta L.; Stetten, George D.

    2011-01-01

    We extended the classic anorthoscopic viewing procedure to test a model of visualization of 3D structures from 2D cross-sections. Four experiments were conducted to examine key processes described in the model, localizing cross-sections within a common frame of reference and spatiotemporal integration of cross sections into a hierarchical object representation. Participants used a hand-held device to reveal a hidden object as a sequence of cross-sectional images. The process of localization was manipulated by contrasting two displays, in-situ vs. ex-situ, which differed in whether cross sections were presented at their source locations or displaced to a remote screen. The process of integration was manipulated by varying the structural complexity of target objects and their components. Experiments 1 and 2 demonstrated visualization of 2D and 3D line-segment objects and verified predictions about display and complexity effects. In Experiments 3 and 4, the visualized forms were familiar letters and numbers. Errors and orientation effects showed that displacing cross-sectional images to a remote display (ex-situ viewing) impeded the ability to determine spatial relationships among pattern components, a failure of integration at the object level. PMID:22217386

  5. Influence of adaptive statistical iterative reconstruction algorithm on image quality in coronary computed tomography angiography.

    PubMed

    Precht, Helle; Thygesen, Jesper; Gerke, Oke; Egstrup, Kenneth; Waaler, Dag; Lambrechtsen, Jess

    2016-12-01

    Coronary computed tomography angiography (CCTA) requires high spatial and temporal resolution, increased low contrast resolution for the assessment of coronary artery stenosis, plaque detection, and/or non-coronary pathology. Therefore, new reconstruction algorithms, particularly iterative reconstruction (IR) techniques, have been developed in an attempt to improve image quality with no cost in radiation exposure. To evaluate whether adaptive statistical iterative reconstruction (ASIR) enhances perceived image quality in CCTA compared to filtered back projection (FBP). Thirty patients underwent CCTA due to suspected coronary artery disease. Images were reconstructed using FBP, 30% ASIR, and 60% ASIR. Ninety image sets were evaluated by five observers using the subjective visual grading analysis (VGA) and assessed by proportional odds modeling. Objective quality assessment (contrast, noise, and the contrast-to-noise ratio [CNR]) was analyzed with linear mixed effects modeling on log-transformed data. The need for ethical approval was waived by the local ethics committee as the study only involved anonymously collected clinical data. VGA showed significant improvements in sharpness by comparing FBP with ASIR, resulting in odds ratios of 1.54 for 30% ASIR and 1.89 for 60% ASIR ( P  = 0.004). The objective measures showed significant differences between FBP and 60% ASIR ( P  < 0.0001) for noise, with an estimated ratio of 0.82, and for CNR, with an estimated ratio of 1.26. ASIR improved the subjective image quality of parameter sharpness and, objectively, reduced noise and increased CNR.
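
    The objective measures reported (noise, contrast, and CNR) reduce to simple region-of-interest statistics. A minimal sketch is given below, assuming one ROI in the contrast-filled vessel and one in adjacent background tissue; the Hounsfield-unit values are illustrative, not data from the study.

    ```python
    import numpy as np

    def contrast_noise_ratio(vessel_roi, background_roi):
        """CNR = (mean_vessel - mean_background) / background standard deviation."""
        contrast = vessel_roi.mean() - background_roi.mean()
        noise = background_roi.std(ddof=1)
        return contrast, noise, contrast / noise

    # Illustrative HU samples for FBP-like vs. IR-like reconstructions of the same ROIs.
    rng = np.random.default_rng(11)
    vessel = rng.normal(400, 30, 500)        # contrast-enhanced lumen
    bg_fbp = rng.normal(50, 35, 500)         # noisier background (FBP-like)
    bg_asir = rng.normal(50, 25, 500)        # smoother background (IR-like)
    print(contrast_noise_ratio(vessel, bg_fbp)[2], contrast_noise_ratio(vessel, bg_asir)[2])
    ```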

  6. Application of an object-oriented programming paradigm in three-dimensional computer modeling of mechanically active gastrointestinal tissues.

    PubMed

    Rashev, P Z; Mintchev, M P; Bowes, K L

    2000-09-01

    The aim of this study was to develop a novel three-dimensional (3-D) object-oriented modeling approach incorporating knowledge of the anatomy, electrophysiology, and mechanics of externally stimulated excitable gastrointestinal (GI) tissues and emphasizing the "stimulus-response" principle of extracting the modeling parameters. The modeling method used clusters of class hierarchies representing GI tissues from three perspectives: 1) anatomical; 2) electrophysiological; and 3) mechanical. We elaborated on the first four phases of the object-oriented system development life-cycle: 1) analysis; 2) design; 3) implementation; and 4) testing. Generalized cylinders were used for the implementation of 3-D tissue objects modeling the cecum, the descending colon, and the colonic circular smooth muscle tissue. The model was tested using external neural electrical tissue excitation of the descending colon with virtual implanted electrodes and the stimulating current density distributions over the modeled surfaces were calculated. Finally, the tissue deformations invoked by electrical stimulation were estimated and represented by a mesh-surface visualization technique.

  7. On the use of orientation filters for 3D reconstruction in event-driven stereo vision

    PubMed Central

    Camuñas-Mesa, Luis A.; Serrano-Gotarredona, Teresa; Ieng, Sio H.; Benosman, Ryad B.; Linares-Barranco, Bernabe

    2014-01-01

    The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, therefore increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction. PMID:24744694
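
    A minimal sketch of the orientation-filtering idea: a small Gabor filter bank is applied to an accumulated event frame and the best-responding orientation per pixel is kept as an extra constraint for stereo matching. The frame construction and filter parameters are assumptions for illustration; the paper's actual implementation operates event by event.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(theta, sigma=2.0, wavelength=6.0, size=15):
    """Real part of a Gabor filter tuned to orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def dominant_orientation(event_frame, n_orientations=4):
    """Return, per pixel, the index of the best-responding Gabor orientation."""
    thetas = np.arange(n_orientations) * np.pi / n_orientations
    responses = np.stack([
        np.abs(fftconvolve(event_frame, gabor_kernel(t), mode="same")) for t in thetas
    ])
    return responses.argmax(axis=0)  # orientation label usable as a matching constraint

# Hypothetical accumulated event frame (event counts per pixel) containing a horizontal edge.
frame = np.zeros((64, 64))
frame[32, :] = 1.0
print(np.bincount(dominant_orientation(frame).ravel()))
```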

  8. Visual-vestibular integration as a function of adaptation to space flight and return to Earth

    NASA Technical Reports Server (NTRS)

    Reschke, Millard R.; Bloomberg, Jacob J.; Harm, Deborah L.; Huebner, William P.; Krnavek, Jody M.; Paloski, William H.; Berthoz, Alan

    1999-01-01

    Research on perception and control of self-orientation and self-motion addresses interactions between action and perception. Self-orientation and self-motion, and the perception of that orientation and motion, are required for and modified by goal-directed action. Detailed Supplementary Objective (DSO) 604 Operational Investigation-3 (OI-3) was designed to investigate the integrated coordination of head and eye movements within a structured environment where perception could modify responses and where response could be compensatory for perception. A full understanding of this coordination required definition of spatial orientation models for the microgravity environment encountered during spaceflight.

  9. Monitoring Location and Angular Orientation of a Pill

    NASA Technical Reports Server (NTRS)

    Schipper, John F.

    2012-01-01

    A mobile pill transmitter system moves through, or adjacent to, one or more organs in an animal or human body, while transmitting signals from its present location and/or present angular orientation. The system also provides signals from which the present roll angle of the pill, about a selected axis, can be determined. When the location coordinates, angular orientation, and roll angle of the pill are within selected ranges, an aperture on the pill container releases a selected chemical into, or onto, the body. Optionally, the pill, as it moves, provides a sequence of visually perceptible images. The times for image formation may correspond to times at which the pill transmitter system location or image satisfies one of at least four criteria. This invention provides an algorithm for the exact determination of location coordinates and angular orientation coordinates for a mobile pill transmitter (PT), or other similar device, that is introduced into, and moves within, the GI tract of a human or animal body. A set of as many as eight nonlinear equations has been developed and applied, relating propagation of a wireless signal between two, three, or more transmitting antennas located on the PT and four or more non-coplanar receiving antennas located on a signal receiver appliance worn by the user. The equations are solved exactly, without approximations or iterations, and are applied in several environments: (1) association of a visual image, transmitted by the PT at each of a second sequence of times, with a PT location and PT angular orientation at that time; (2) determination of a position within the body at which a drug, chemical substance, or other treatment is to be delivered to a selected portion of the body; (3) monitoring, after delivery, of the effect(s) of administration of the treatment; and (4) determination of one or more positions within the body where provision and examination of a finer-scale image is warranted.

  10. Invariant visual object recognition and shape processing in rats

    PubMed Central

    Zoccolan, Davide

    2015-01-01

    Invariant visual object recognition is the ability to recognize visual objects despite the vastly different images that each object can project onto the retina during natural vision, depending on its position and size within the visual field, its orientation relative to the viewer, etc. Achieving invariant recognition represents such a formidable computational challenge that it is often assumed to be a unique hallmark of primate vision. Historically, this has limited the invasive investigation of its neuronal underpinnings to monkey studies, in spite of the narrow range of experimental approaches that these animal models allow. Meanwhile, rodents have been largely neglected as models of object vision, because of the widespread belief that they are incapable of advanced visual processing. However, the powerful array of experimental tools that have been developed to dissect neuronal circuits in rodents has made these species very attractive to vision scientists too, promoting a new tide of studies that have started to systematically explore visual functions in rats and mice. Rats, in particular, have been the subjects of several behavioral studies aimed at assessing how advanced object recognition and shape processing are in this species. Here, I review these recent investigations, as well as earlier studies of rat pattern vision, to provide a historical overview and a critical summary of the status of the knowledge about rat object vision. The picture emerging from this survey is very encouraging with regard to the possibility of using rats as complementary models to monkeys in the study of higher-level vision. PMID:25561421

  11. Object Selection Costs in Visual Working Memory: A Diffusion Model Analysis of the Focus of Attention

    ERIC Educational Resources Information Center

    Sewell, David K.; Lilburn, Simon D.; Smith, Philip L.

    2016-01-01

    A central question in working memory research concerns the degree to which information in working memory is accessible to other cognitive processes (e.g., decision-making). Theories assuming that the focus of attention can only store a single object at a time require the focus to orient to a target representation before further processing can…

  12. A Proposal for Automatic Fruit Harvesting by Combining a Low Cost Stereovision Camera and a Robotic Arm

    PubMed Central

    Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Runcan, David; Moreno, Javier; Martínez, Dani; Teixidó, Mercè; Palacín, Jordi

    2014-01-01

    This paper proposes the development of an automatic fruit harvesting system by combining a low cost stereovision camera and a robotic arm placed in the gripper tool. The stereovision camera is used to estimate the size, distance and position of the fruits whereas the robotic arm is used to mechanically pick up the fruits. The low cost stereovision system has been tested in laboratory conditions with a reference small object, an apple and a pear at 10 different intermediate distances from the camera. The average distance error was from 4% to 5%, and the average diameter error was up to 30% in the case of a small object and in a range from 2% to 6% in the case of a pear and an apple. The stereovision system has been attached to the gripper tool in order to obtain relative distance, orientation and size of the fruit. The harvesting stage requires the initial fruit location, the computation of the inverse kinematics of the robotic arm in order to place the gripper tool in front of the fruit, and a final pickup approach by iteratively adjusting the vertical and horizontal position of the gripper tool in a closed visual loop. The complete system has been tested in controlled laboratory conditions with uniform illumination applied to the fruits. As a future work, this system will be tested and improved in conventional outdoor farming conditions. PMID:24984059
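
    A minimal sketch of the final pickup approach described above: the gripper's vertical and horizontal offsets are adjusted iteratively until the fruit appears centered. The measure_offset and move_gripper callables and the gain are hypothetical stand-ins for the camera and arm interfaces.

```python
def center_gripper_on_fruit(measure_offset, move_gripper, gain=0.5,
                            tolerance_px=2, max_iters=50):
    """Closed visual loop: measure the fruit's (dx, dy) image offset in pixels and
    command a proportional gripper correction until the fruit is centered."""
    for _ in range(max_iters):
        dx, dy = measure_offset()              # from the stereovision camera
        if abs(dx) <= tolerance_px and abs(dy) <= tolerance_px:
            return True                        # centered; ready to pick up the fruit
        move_gripper(horizontal=-gain * dx, vertical=-gain * dy)
    return False

# Toy simulation standing in for the real camera/arm interfaces.
state = {"dx": 20.0, "dy": -12.0}
ok = center_gripper_on_fruit(
    measure_offset=lambda: (state["dx"], state["dy"]),
    move_gripper=lambda horizontal, vertical: state.update(
        dx=state["dx"] + horizontal, dy=state["dy"] + vertical),
)
print(ok)
```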

  13. A proposal for automatic fruit harvesting by combining a low cost stereovision camera and a robotic arm.

    PubMed

    Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Runcan, David; Moreno, Javier; Martínez, Dani; Teixidó, Mercè; Palacín, Jordi

    2014-06-30

    This paper proposes the development of an automatic fruit harvesting system by combining a low cost stereovision camera and a robotic arm placed in the gripper tool. The stereovision camera is used to estimate the size, distance and position of the fruits whereas the robotic arm is used to mechanically pick up the fruits. The low cost stereovision system has been tested in laboratory conditions with a reference small object, an apple and a pear at 10 different intermediate distances from the camera. The average distance error was from 4% to 5%, and the average diameter error was up to 30% in the case of a small object and in a range from 2% to 6% in the case of a pear and an apple. The stereovision system has been attached to the gripper tool in order to obtain relative distance, orientation and size of the fruit. The harvesting stage requires the initial fruit location, the computation of the inverse kinematics of the robotic arm in order to place the gripper tool in front of the fruit, and a final pickup approach by iteratively adjusting the vertical and horizontal position of the gripper tool in a closed visual loop. The complete system has been tested in controlled laboratory conditions with uniform illumination applied to the fruits. As a future work, this system will be tested and improved in conventional outdoor farming conditions.

  14. Homography-based visual servo regulation of mobile robots.

    PubMed

    Fang, Yongchun; Dixon, Warren E; Dawson, Darren M; Chawda, Prakash

    2005-10-01

    A monocular camera-based vision system attached to a mobile robot (i.e., the camera-in-hand configuration) is considered in this paper. By comparing corresponding target points of an object from two different camera images, geometric relationships are exploited to derive a transformation that relates the actual position and orientation of the mobile robot to a reference position and orientation. This transformation is used to synthesize a rotation and translation error system from the current position and orientation to the fixed reference position and orientation. Lyapunov-based techniques are used to construct an adaptive estimate to compensate for a constant, unmeasurable depth parameter, and to prove asymptotic regulation of the mobile robot. The contribution of this paper is that Lyapunov techniques are exploited to craft an adaptive controller that enables mobile robot position and orientation regulation despite the lack of an object model and the lack of depth information. Experimental results are provided to illustrate the performance of the controller.
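
    A minimal sketch of the geometric step described above, assuming OpenCV (the record does not name an implementation): a homography relating corresponding target points in the current and reference images is estimated and decomposed into candidate rotations and scaled translations. The point coordinates and camera intrinsics are hypothetical; the paper's controller additionally compensates adaptively for the unknown depth.

```python
import numpy as np
import cv2

# Hypothetical corresponding feature points (pixels) in the current and reference images.
pts_current = np.array([[100, 120], [220, 118], [215, 240], [105, 238]], dtype=np.float64)
pts_reference = np.array([[110, 110], [230, 112], [228, 232], [112, 230]], dtype=np.float64)

# Hypothetical pinhole camera intrinsics.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

H, _ = cv2.findHomography(pts_current, pts_reference)
# Up to four (R, t/d, n) solutions are returned; a cheirality/plane check selects one.
n_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
print(n_solutions)
print(rotations[0])
print(translations[0])
```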

  15. Support for fast comprehension of ICU data: visualization using metaphor graphics.

    PubMed

    Horn, W; Popow, C; Unterasinger, L

    2001-01-01

    The time-oriented analysis of electronic patient records on (neonatal) intensive care units is a tedious and time-consuming task. Graphic data visualization should make it easier for physicians to assess the overall situation of a patient and to recognize essential changes over time. Metaphor graphics are used to sketch the most relevant parameters for characterizing a patient's situation. By repetition of the graphic object in 24 frames, the situation of the ICU patient is presented in one display, usually summarizing the last 24 h. VIE-VISU is a data visualization system which uses multiples to present the change in the patient's status over time in graphic form. Each multiple is a highly structured metaphor graphic object. Each object visualizes important ICU parameters from circulation, ventilation, and fluid balance. The design using multiples promotes a focus on stability and change. A stable patient is recognizable at first sight, continuous improvement or a worsening condition is easy to analyze, and drastic changes in the patient's situation get the viewer's attention immediately.

  16. Effects of material properties and object orientation on precision grip kinematics.

    PubMed

    Paulun, Vivian C; Gegenfurtner, Karl R; Goodale, Melvyn A; Fleming, Roland W

    2016-08-01

    Successfully picking up and handling objects requires taking into account their physical properties (e.g., material) and position relative to the body. Such features are often inferred by sight, but it remains unclear to what extent observers vary their actions depending on the perceived properties. To investigate this, we asked participants to grasp, lift and carry cylinders to a goal location with a precision grip. The cylinders were made of four different materials (Styrofoam, wood, brass and an additional brass cylinder covered with Vaseline) and were presented at six different orientations with respect to the participant (0°, 30°, 60°, 90°, 120°, 150°). Analysis of their grasping kinematics revealed differences in timing and spatial modulation at all stages of the movement that depended on both material and orientation. Object orientation affected the spatial configuration of index finger and thumb during the grasp, but also the timing of handling and transport duration. Material affected the choice of local grasp points and the duration of the movement from the first visual input until release of the object. We find that conditions that make grasping more difficult (orientation with the base pointing toward the participant, high weight and low surface friction) lead to longer durations of individual movement segments and a more careful placement of the fingers on the object.

  17. Maintaining the ties that bind: the role of an intermediate visual memory store in the persistence of awareness.

    PubMed

    Ferber, Susanne; Emrich, Stephen M

    2007-03-01

    Segregation and feature binding are essential to the perception and awareness of objects in a visual scene. When a fragmented line-drawing of an object moves relative to a background of randomly oriented lines, the previously hidden object is segregated from the background and consequently enters awareness. Interestingly, in such shape-from-motion displays, the percept of the object persists briefly when the motion stops, suggesting that the segregated and bound representation of the object is maintained in awareness. Here, we tested whether this persistence effect is mediated by capacity-limited working-memory processes, or by the amount of object-related information available. The experiments demonstrate that persistence is affected mainly by the proportion of object information available and is independent of working-memory limits. We suggest that this persistence effect can be seen as evidence for an intermediate, form-based memory store mediating between sensory and working memory.

  18. A cultural side effect: learning to read interferes with identity processing of familiar objects

    PubMed Central

    Kolinsky, Régine; Fernandes, Tânia

    2014-01-01

    Based on the neuronal recycling hypothesis (Dehaene and Cohen, 2007), we examined whether reading acquisition has a cost for the recognition of non-linguistic visual materials. More specifically, we checked whether the ability to discriminate between mirror images, which develops through literacy acquisition, interferes with object identity judgments, and whether interference strength varies as a function of the nature of the non-linguistic material. To these aims we presented illiterate, late literate (who learned to read at adult age), and early literate adults with an orientation-independent, identity-based same-different comparison task in which they had to respond “same” to both physically identical and mirrored or plane-rotated images of pictures of familiar objects (Experiment 1) or of geometric shapes (Experiment 2). Interference from irrelevant orientation variations was stronger with plane rotations than with mirror images, and stronger with geometric shapes than with objects. Illiterates were the only participants almost immune to mirror variations, but only for familiar objects. Thus, the process of unlearning mirror-image generalization, necessary to acquire literacy in the Latin alphabet, has a cost for a basic function of the visual ventral object recognition stream, i.e., identification of familiar objects. This demonstrates that neural recycling is not just an adaptation to multi-use but a process of at least partial exaptation. PMID:25400605

  19. Scene segmentation by spike synchronization in reciprocally connected visual areas. I. Local effects of cortical feedback.

    PubMed

    Knoblauch, Andreas; Palm, Günther

    2002-09-01

    To investigate scene segmentation in the visual system we present a model of two reciprocally connected visual areas using spiking neurons. Area P corresponds to the orientation-selective subsystem of the primary visual cortex, while the central visual area C is modeled as associative memory representing stimulus objects according to Hebbian learning. Without feedback from area C, a single stimulus results in relatively slow and irregular activity, synchronized only for neighboring patches (slow state), while in the complete model activity is faster with an enlarged synchronization range (fast state). When presenting a superposition of several stimulus objects, scene segmentation happens on a time scale of hundreds of milliseconds by alternating epochs of the slow and fast states, where neurons representing the same object are simultaneously in the fast state. Correlation analysis reveals synchronization on different time scales as found in experiments (designated as tower, castle, and hill peaks). On the fast time scale (tower peaks, gamma frequency range), recordings from two sites coding either different or the same object lead to correlograms that are either flat or exhibit oscillatory modulations with a central peak. This is in agreement with experimental findings, whereas standard phase-coding models would predict shifted peaks in the case of different objects.

  20. Cognitive, perceptual and action-oriented representations of falling objects.

    PubMed

    Zago, Myrka; Lacquaniti, Francesco

    2005-01-01

    We interact daily with moving objects. How accurate are our predictions about objects' motions? What sources of information do we use? These questions have received wide attention from a variety of different viewpoints. On one end of the spectrum are the ecological approaches assuming that all the information about the visual environment is present in the optic array, with no need to postulate conscious or unconscious representations. On the other end of the spectrum are the constructivist approaches assuming that a more or less accurate representation of the external world is built in the brain using explicit or implicit knowledge or memory besides sensory inputs. Representations can be related to naive physics or to context cue-heuristics or to the construction of internal copies of environmental invariants. We address the issue of prediction of objects' fall at different levels. Cognitive understanding and perceptual judgment of simple Newtonian dynamics can be surprisingly inaccurate. By contrast, motor interactions with falling objects are often very accurate. We argue that the pragmatic action-oriented behaviour and the perception-oriented behaviour may use different modes of operation and different levels of representation.

  1. Efficiencies for the statistics of size discrimination.

    PubMed

    Solomon, Joshua A; Morgan, Michael; Chubb, Charles

    2011-10-19

    Different laboratories have achieved a consensus regarding how well human observers can estimate the average orientation in a set of N objects. Such estimates are not only limited by visual noise, which perturbs the visual signal of each object's orientation, they are also inefficient: Observers effectively use only √N objects in their estimates (e.g., S. C. Dakin, 2001; J. A. Solomon, 2010). More controversial is the efficiency with which observers can estimate the average size in an array of circles (e.g., D. Ariely, 2001, 2008; S. C. Chong, S. J. Joo, T.-A. Emmanouil, & A. Treisman, 2008; K. Myczek & D. J. Simons, 2008). Of course, there are some important differences between orientation and size; nonetheless, it seemed sensible to compare the two types of estimate against the same ideal observer. Indeed, quantitative evaluation of statistical efficiency requires this sort of comparison (R. A. Fisher, 1925). Our first step was to measure the noise that limits size estimates when only two circles are compared. Our results (Weber fractions between 0.07 and 0.14 were necessary for 84% correct 2AFC performance) are consistent with the visual system adding the same amount of Gaussian noise to all logarithmically transduced circle diameters. We exaggerated this visual noise by randomly varying the diameters in (uncrowded) arrays of 1, 2, 4, and 8 circles and measured its effect on discrimination between mean sizes. Efficiencies inferred from all four observers significantly exceed 25% and, in two cases, approach 100%. More consistent are our measurements of just-noticeable differences in size variance. These latter results suggest between 62 and 75% efficiency for variance discriminations. Although our observers were no more efficient comparing size variances than they were at comparing mean sizes, they were significantly more precise. In other words, our results contain evidence for a non-negligible source of late noise that limits mean discriminations but not variance discriminations.

  2. Shwirl: Meaningful coloring of spectral cube data with volume rendering

    NASA Astrophysics Data System (ADS)

    Vohl, Dany

    2017-04-01

    Shwirl visualizes spectral data cubes with meaningful coloring methods. The program has been developed to investigate transfer functions, which combine volumetric elements (or voxels) to set the color, and graphics shaders, functions used to compute several properties of the final image such as color, depth, and/or transparency, as enablers for scientific visualization of astronomical data. The program uses Astropy (ascl:1304.002) to handle FITS files and World Coordinate System, Qt (and PyQt) for the user interface, and VisPy, an object-oriented Python visualization library binding onto OpenGL.
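
    A minimal sketch of the pipeline this record describes, assuming Astropy for FITS I/O and VisPy for GPU volume rendering; the file name is hypothetical, and Shwirl's own transfer functions and shaders are not reproduced here.

```python
import numpy as np
from astropy.io import fits
from vispy import app, scene

# Load a spectral cube (hypothetical file name) and normalize it for rendering.
cube = np.nan_to_num(fits.getdata("example_cube.fits").astype(np.float32))
cmin, cmax = cube.min(), cube.max()
cube = (cube - cmin) / (cmax - cmin if cmax > cmin else 1.0)

canvas = scene.SceneCanvas(keys="interactive", show=True)
view = canvas.central_widget.add_view()
# Default volume rendering; Shwirl instead experiments with custom transfer
# functions and shaders so that the colors carry physical meaning.
volume = scene.visuals.Volume(cube, parent=view.scene)
view.camera = scene.cameras.TurntableCamera(fov=60.0)

if __name__ == "__main__":
    app.run()
```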

  3. 3D reconstruction and spatial auralization of the "Painted Dolmen" of Antelas

    NASA Astrophysics Data System (ADS)

    Dias, Paulo; Campos, Guilherme; Santos, Vítor; Casaleiro, Ricardo; Seco, Ricardo; Sousa Santos, Beatriz

    2008-02-01

    This paper presents preliminary results on the development of a 3D audiovisual model of the Anta Pintada (painted dolmen) of Antelas, a Neolithic chamber tomb located in Oliveira de Frades and listed as Portuguese national monument. The final aim of the project is to create a highly accurate Virtual Reality (VR) model of this unique archaeological site, capable of providing not only visual but also acoustic immersion based on its actual geometry and physical properties. The project started in May 2006 with in situ data acquisition. The 3D geometry of the chamber was captured using a Laser Range Finder. In order to combine the different scans into a complete 3D visual model, reconstruction software based on the Iterative Closest Point (ICP) algorithm was developed using the Visualization Toolkit (VTK). This software computes the boundaries of the room on a 3D uniform grid and populates its interior with "free-space nodes", through an iterative algorithm operating like a torchlight illuminating a dark room. The envelope of the resulting set of "free-space nodes" is used to generate a 3D iso-surface approximating the interior shape of the chamber. Each polygon of this surface is then assigned the acoustic absorption coefficient of the corresponding boundary material. A 3D audiovisual model operating in real-time was developed for a VR Environment comprising head-mounted display (HMD) I-glasses SVGAPro, an orientation sensor (tracker) InterTrax 2 with 3 Degrees Of Freedom (3DOF) and stereo headphones. The auralisation software is based on a geometric model. This constitutes a first approach, since geometric acoustics have well-known limitations in rooms with irregular surfaces. The immediate advantage lies in their inherent computational efficiency, which allows real-time operation. The program computes the early reflections forming the initial part of the chamber's impulse response (IR), which carry the most significant cues for source localisation. These early reflections are processed through Head Related Transfer Functions (HRTF) updated in real-time according to the orientation of the user's head, so that sound waves appear to come from the correct location in space, in agreement with the visual scene. The late-reverberation tail of the IR is generated by an algorithm designed to match the reverberation time of the chamber, calculated from the actual acoustic absorption coefficients of its surfaces. The sound output to the headphones is obtained by convolving the IR with anechoic recordings of the virtual audio source.
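
    A minimal sketch of the scan-alignment step, using VTK's built-in iterative closest point transform rather than the authors' own VTK-based implementation; the file names are hypothetical.

```python
import vtk

def load_polydata(path):
    """Read one scan stored as VTK PolyData (hypothetical .vtp files)."""
    reader = vtk.vtkXMLPolyDataReader()
    reader.SetFileName(path)
    reader.Update()
    return reader.GetOutput()

source = load_polydata("scan_a.vtp")
target = load_polydata("scan_b.vtp")

# Rigid-body ICP: iteratively match closest points and re-estimate the transform.
icp = vtk.vtkIterativeClosestPointTransform()
icp.SetSource(source)
icp.SetTarget(target)
icp.GetLandmarkTransform().SetModeToRigidBody()
icp.SetMaximumNumberOfIterations(100)
icp.Update()

# Apply the resulting transform to bring the source scan into the target frame.
transform_filter = vtk.vtkTransformPolyDataFilter()
transform_filter.SetInputData(source)
transform_filter.SetTransform(icp)
transform_filter.Update()
aligned = transform_filter.GetOutput()
print(icp.GetMatrix())
```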

  4. Design and test of an object-oriented GIS to map plant species in the Southern Rockies

    NASA Technical Reports Server (NTRS)

    Morain, Stanley A.; Neville, Paul R. H.; Budge, Thomas K.; Morrison, Susan C.; Helfrich, Donald A.; Fruit, Sarah

    1993-01-01

    Elevational and latitudinal shifts occur in the flora of the Rocky Mountains due to long term climate change. In order to specify which species are successfully migrating with these changes, and which are not, an object-oriented, image-based geographic information system (GIS) is being created to animate evolving ecological regimes of temperature and precipitation. Research at the Earth Data Analysis Center (EDAC) is developing a landscape model that includes the spatial, spectral and temporal domains. It is designed to visualize migratory changes in the Rocky Mountain flora, and to specify future community compositions. The object-oriented database will eventually tag each of the nearly 6000 species with a unique hue, intensity, and saturation value, so their movements can be individually traced. An associated GIS includes environmental parameters that control the distribution of each species in the landscape, and satellite imagery is used to help visualize the terrain. Polygons for the GIS are delineated as landform facets that are static in ecological time. The model manages these facets as a triangular irregular net (TIN), and their analysis assesses the gradual progression of species as they migrate through the TIN. Using an appropriate climate change model, the goal will be to stop the modeling process to assess both the rate and direction of species' change and to specify the changing community composition of each landscape facet.

  5. The sophisticated visual system of a tiny Cambrian crustacean: analysis of a stalked fossil compound eye

    PubMed Central

    Schoenemann, Brigitte; Castellani, Christopher; Clarkson, Euan N. K.; Haug, Joachim T.; Maas, Andreas; Haug, Carolin; Waloszek, Dieter

    2012-01-01

    Fossilized compound eyes from the Cambrian, isolated and three-dimensionally preserved, provide remarkable insights into the lifestyle and habitat of their owners. The tiny stalked compound eyes described here probably possessed too few facets to form a proper image, but they represent a sophisticated system for detecting moving objects. The eyes are preserved as almost solid, mace-shaped blocks of phosphate, in which the original positions of the rhabdoms in one specimen are retained as deep cavities. Analysis of the optical axes reveals four visual areas, each with different properties in acuity of vision. They are surveyed by lenses directed forwards, laterally, backwards and inwards, respectively. The most intriguing of these is the putatively inwardly orientated zone, where the optical axes, like those orientated to the front, interfere with axes of the other eye of the contralateral side. The result is a three-dimensional visual net that covers not only the front, but extends also far laterally to either side. Thus, a moving object could be perceived by a two-dimensional coordinate (which is formed by two axes of those facets, one of the left and one of the right eye, which are orientated towards the moving object) in a wide three-dimensional space. This compound eye system enables small arthropods equipped with an eye of low acuity to estimate velocity, size or distance of possible food items efficiently. The eyes are interpreted as having been derived from individuals of the early crustacean Henningsmoenicaris scutula pointing to the existence of highly efficiently developed eyes in the early evolutionary lineage leading towards the modern Crustacea. PMID:22048954

  6. WebStruct and VisualStruct: Web interfaces and visualization for Structure software implemented in a cluster environment.

    PubMed

    Jayashree, B; Rajgopal, S; Hoisington, D; Prasanth, V P; Chandra, S

    2008-09-24

    Structure is a widely used software tool to investigate population genetic structure with multi-locus genotyping data. The software uses an iterative algorithm to group individuals into "K" clusters, representing possibly K genetically distinct subpopulations. The serial implementation of this programme is processor-intensive even with small datasets. We describe an implementation of the program within a parallel framework. Speedup was achieved by running different replicates and values of K on each node of the cluster. A web-based user-oriented GUI has been implemented in PHP, through which the user can specify input parameters for the programme. The number of processors to be used can be specified in the background command. A web-based visualization tool "Visualstruct", written in PHP (with HTML and JavaScript embedded), allows for the graphical display of population clusters output from Structure, where each individual may be visualized as a line segment with K colors defining its possible genomic composition with respect to the K genetic sub-populations. The advantage over available programs is in the increased number of individuals that can be visualized. The analyses of real datasets indicate a speedup of up to four when comparing the speed of execution on clusters of eight processors with the speed of execution on one desktop. The software package is freely available to interested users upon request.
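
    A minimal sketch of the parallelization strategy described above: each (K, replicate) pair is dispatched to its own worker process running the Structure binary. The command-line flags and parameter files shown are assumptions for illustration and should be checked against the local Structure installation.

```python
import itertools
import subprocess
from concurrent.futures import ProcessPoolExecutor

def run_structure(job):
    """Run one Structure job for a given K and replicate (invocation is assumed)."""
    k, rep = job
    outfile = f"results_K{k}_rep{rep}"
    cmd = ["structure", "-m", "mainparams", "-e", "extraparams",
           "-K", str(k), "-o", outfile]  # hypothetical flags
    subprocess.run(cmd, check=True)
    return outfile

if __name__ == "__main__":
    ks = range(1, 6)          # candidate numbers of subpopulations
    replicates = range(1, 4)  # independent runs per K
    jobs = list(itertools.product(ks, replicates))
    # One job per worker, mirroring "different replicates and values of K on each node".
    with ProcessPoolExecutor(max_workers=8) as pool:
        for outfile in pool.map(run_structure, jobs):
            print("finished", outfile)
```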

  7. Short-Term Visual Deprivation, Tactile Acuity, and Haptic Solid Shape Discrimination

    PubMed Central

    Crabtree, Charles E.; Norman, J. Farley

    2014-01-01

    Previous psychophysical studies have reported conflicting results concerning the effects of short-term visual deprivation upon tactile acuity. Some studies have found that 45 to 90 minutes of total light deprivation produce significant improvements in participants' tactile acuity as measured with a grating orientation discrimination task. In contrast, a single 2011 study found no such improvement while attempting to replicate these earlier findings. A primary goal of the current experiment was to resolve this discrepancy in the literature by evaluating the effects of a 90-minute period of total light deprivation upon tactile grating orientation discrimination. We also evaluated the potential effect of short-term deprivation upon haptic 3-D shape discrimination using a set of naturally-shaped solid objects. According to previous research, short-term deprivation enhances performance in a tactile 2-D shape discrimination task – perhaps a similar improvement also occurs for haptic 3-D shape discrimination. The results of the current investigation demonstrate that not only does short-term visual deprivation not enhance tactile acuity, it additionally has no effect upon haptic 3-D shape discrimination. While visual deprivation had no effect in our study, there was a significant effect of experience and learning for the grating orientation task – the participants' tactile acuity improved over time, independent of whether they had, or had not, experienced visual deprivation. PMID:25397327

  8. Multisensory guidance of orienting behavior.

    PubMed

    Maier, Joost X; Groh, Jennifer M

    2009-12-01

    We use both vision and audition when localizing objects and events in our environment. However, these sensory systems receive spatial information in different coordinate systems: sounds are localized using inter-aural and spectral cues, yielding a head-centered representation of space, whereas the visual system uses an eye-centered representation of space, based on the site of activation on the retina. In addition, the visual system employs a place-coded, retinotopic map of space, whereas the auditory system's representational format is characterized by broad spatial tuning and a lack of topographical organization. A common view is that the brain needs to reconcile these differences in order to control behavior, such as orienting gaze to the location of a sound source. To accomplish this, it seems that either auditory spatial information must be transformed from a head-centered rate code to an eye-centered map to match the frame of reference used by the visual system, or vice versa. Here, we review a number of studies that have focused on the neural basis underlying such transformations in the primate auditory system. Although, these studies have found some evidence for such transformations, many differences in the way the auditory and visual system encode space exist throughout the auditory pathway. We will review these differences at the neural level, and will discuss them in relation to differences in the way auditory and visual information is used in guiding orienting movements.
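
    A minimal sketch of the reference-frame transformation discussed above: converting a head-centered sound azimuth into eye-centered coordinates amounts to subtracting the current eye-in-head position, and vice versa. The one-dimensional simplification and the example angles are illustrative assumptions.

```python
def head_to_eye_centered(target_azimuth_head_deg, eye_position_deg):
    """Eye-centered azimuth = head-centered azimuth minus eye-in-head position."""
    return target_azimuth_head_deg - eye_position_deg

def eye_to_head_centered(target_azimuth_eye_deg, eye_position_deg):
    """Inverse transform, e.g., mapping a retinal location to a head-centered goal."""
    return target_azimuth_eye_deg + eye_position_deg

# A sound 20 deg right of the head, viewed with the eyes deviated 15 deg right,
# lies only 5 deg right of the fovea in eye-centered coordinates.
print(head_to_eye_centered(20.0, 15.0))
```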

  9. Behavioural benefits of multisensory processing in ferrets.

    PubMed

    Hammond-Kenny, Amy; Bajo, Victoria M; King, Andrew J; Nodal, Fernando R

    2017-01-01

    Enhanced detection and discrimination, along with faster reaction times, are the most typical behavioural manifestations of the brain's capacity to integrate multisensory signals arising from the same object. In this study, we examined whether multisensory behavioural gains are observable across different components of the localization response that are potentially under the command of distinct brain regions. We measured the ability of ferrets to localize unisensory (auditory or visual) and spatiotemporally coincident auditory-visual stimuli of different durations that were presented from one of seven locations spanning the frontal hemifield. During the localization task, we recorded the head movements made following stimulus presentation, as a metric for assessing the initial orienting response of the ferrets, as well as the subsequent choice of which target location to approach to receive a reward. Head-orienting responses to auditory-visual stimuli were more accurate and faster than those made to visual but not auditory targets, suggesting that these movements were guided principally by sound alone. In contrast, approach-to-target localization responses were more accurate and faster to spatially congruent auditory-visual stimuli throughout the frontal hemifield than to either visual or auditory stimuli alone. Race model inequality analysis of head-orienting reaction times and approach-to-target response times indicates that different processes, probability summation and neural integration, respectively, are likely to be responsible for the effects of multisensory stimulation on these two measures of localization behaviour. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
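
    A minimal sketch of the race-model-inequality test mentioned above (Miller's inequality): at each time point the multisensory cumulative reaction-time distribution is compared with the sum of the two unisensory distributions, and positive differences indicate gains beyond probability summation. The reaction-time samples are hypothetical.

```python
import numpy as np

def ecdf(samples, t_grid):
    """Empirical cumulative distribution of reaction times evaluated on t_grid."""
    samples = np.sort(np.asarray(samples))
    return np.searchsorted(samples, t_grid, side="right") / len(samples)

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Positive values mean P(RT_AV <= t) exceeds P(RT_A <= t) + P(RT_V <= t)."""
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound

# Hypothetical reaction times (ms) for auditory, visual, and audiovisual trials.
rng = np.random.default_rng(0)
rt_a = rng.normal(320, 40, 200)
rt_v = rng.normal(350, 45, 200)
rt_av = rng.normal(280, 35, 200)
grid = np.linspace(150, 500, 50)
print(race_model_violation(rt_a, rt_v, rt_av, grid).max())
```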

  10. Studying the orientation of bio-objects by nematic liquid crystals

    NASA Astrophysics Data System (ADS)

    Zubtsova, Yu. A.; Kamanin, A. A.; Kamanina, N. V.

    2017-05-01

    We have studied the ability of a liquid-crystal (LC) matrix to visualize and orient DNA molecules. It is established that the relief of the interface between the LC mesophase and the conducting contact can be improved without using an additional high-ohmic polymer layer. Spectroscopic and ellipsometric techniques revealed changes in the refractive properties and structure of the composites. The obtained results can be used in creating devices for rapid DNA testing while retaining the form of the biostructures.

  11. A Neurophysiologically Plausible Population Code Model for Feature Integration Explains Visual Crowding

    PubMed Central

    van den Berg, Ronald; Roerdink, Jos B. T. M.; Cornelissen, Frans W.

    2010-01-01

    An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called “crowding”. Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, “compulsory averaging”, and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality. PMID:20098499
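
    A minimal sketch of the population-coding idea behind "compulsory averaging": orientation-tuned responses to a target and a nearby flanker are pooled by spatial integration, and decoding the pooled population yields an orientation between the two. The tuning parameters are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

prefs = np.deg2rad(np.arange(0, 180, 5))  # preferred orientations of the units

def population_response(orientation_deg, kappa=4.0):
    """Von Mises tuning curves over orientation (180-deg period)."""
    theta = np.deg2rad(orientation_deg)
    return np.exp(kappa * np.cos(2 * (prefs - theta)))

def decode(response):
    """Population-vector decoding on the doubled-angle circle."""
    angle = np.angle(np.sum(response * np.exp(2j * prefs)))
    return np.rad2deg(angle / 2) % 180

# Crowded case: target (20 deg) and flanker (60 deg) responses are pooled, so the
# decoded orientation falls between the two component orientations.
pooled = population_response(20) + population_response(60)
print(decode(population_response(20)), decode(pooled))
```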

  12. Design of object-oriented distributed simulation classes

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D. (Principal Investigator)

    1995-01-01

    Distributed simulation of aircraft engines as part of a computer aided design package is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for 'Numerical Propulsion Simulation System'. NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is the need for communication among the parallel executing processors which in turn implies need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon MIT 'Actor' model of a concurrent object and uses 'connectors' to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out with the result that communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented. Its application to realistic configurations has not been carried out.

  13. Design of Object-Oriented Distributed Simulation Classes

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1995-01-01

    Distributed simulation of aircraft engines as part of a computer aided design package is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for "Numerical Propulsion Simulation System". NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is the need for communication among the parallel executing processors which in turn implies need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon MIT "Actor" model of a concurrent object and uses "connectors" to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out with the result that communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented. Its application to realistic configurations has not been carried out.
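
    A minimal sketch of the "Actor"-plus-connector structure the record describes, using Python processes as a stand-in for distributed machines; the component and connector names are hypothetical, and the real NPSS design addresses much more (synchronization, time stepping, and dynamic placement).

```python
import multiprocessing as mp

class Connector:
    """A one-way connection between two simulation components (a queue here)."""
    def __init__(self):
        self.queue = mp.Queue()
    def send(self, value):
        self.queue.put(value)
    def receive(self):
        return self.queue.get()

def compressor(out_conn, steps=5):
    """Toy engine component: produces one value per time step."""
    for step in range(steps):
        out_conn.send(("pressure_ratio", step, 1.0 + 0.1 * step))
    out_conn.send(None)  # end-of-simulation marker

def turbine(in_conn):
    """Toy downstream component: consumes values produced by the compressor."""
    while (msg := in_conn.receive()) is not None:
        name, step, value = msg
        print(f"step {step}: received {name}={value:.2f}")

if __name__ == "__main__":
    link = Connector()
    processes = [mp.Process(target=compressor, args=(link,)),
                 mp.Process(target=turbine, args=(link,))]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
```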

  14. Sorting points into neighborhoods (SPIN): data analysis and visualization by ordering distance matrices.

    PubMed

    Tsafrir, D; Tsafrir, I; Ein-Dor, L; Zuk, O; Notterman, D A; Domany, E

    2005-05-15

    We introduce a novel unsupervised approach for the organization and visualization of multidimensional data. At the heart of the method is a presentation of the full pairwise distance matrix of the data points, viewed in pseudocolor. The ordering of points is iteratively permuted in search of a linear ordering, which can be used to study embedded shapes. Several examples indicate how the shapes of certain structures in the data (elongated, circular and compact) manifest themselves visually in our permuted distance matrix. It is important to identify the elongated objects since they are often associated with a set of hidden variables, underlying continuous variation in the data. The problem of determining an optimal linear ordering is shown to be NP-Complete, and therefore an iterative search algorithm with O(n³) step-complexity is suggested. By using sorting points into neighborhoods (SPIN) to analyze colon cancer expression data, we were able to address the serious problem of sample heterogeneity, which hinders identification of metastasis-related genes in our data. Our methodology brings to light the continuous variation of heterogeneity: starting with homogeneous tumor samples and gradually increasing the amount of another tissue. Ordering the samples according to their degree of contamination by unrelated tissue allows the separation of genes associated with irrelevant contamination from those related to cancer progression. The software package will be available for academic users upon request.
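
    A minimal sketch of the iterative reordering idea, loosely following the side-to-side variant of SPIN: each point is scored by its weighted distances to all others and the rows and columns of the distance matrix are re-sorted by that score until the permutation stabilizes. This is a simplification of the published algorithm, shown here only to illustrate the principle.

```python
import numpy as np

def spin_side_to_side(D, max_iters=100):
    """Return a permutation that orders a symmetric pairwise distance matrix D."""
    n = D.shape[0]
    w = np.arange(n) - (n - 1) / 2.0  # strictly increasing, zero-mean weight vector
    order = np.arange(n)
    for _ in range(max_iters):
        scores = D[np.ix_(order, order)] @ w      # weighted distance score per point
        new_order = order[np.argsort(-scores)]    # re-sort against the weight vector
        if np.array_equal(new_order, order):
            break                                 # permutation stabilized
        order = new_order
    return order

# Toy data with an elongated (one-dimensional) structure hidden by a random ordering.
rng = np.random.default_rng(1)
x = rng.permutation(np.linspace(0, 1, 20))
D = np.abs(x[:, None] - x[None, :])
order = spin_side_to_side(D)
print(np.round(x[order], 2))  # should come out approximately sorted (possibly reversed)
```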

  15. So Wide a Web, So Little Time.

    ERIC Educational Resources Information Center

    McConville, David; And Others

    1996-01-01

    Discusses new trends in the World Wide Web. Highlights include multimedia; digitized audio-visual files; compression technology; telephony; virtual reality modeling language (VRML); open architecture; and advantages of Java, an object-oriented programming language, including platform independence, distributed development, and pay-per-use software.…

  16. Nature as a model for biomimetic sensors

    NASA Astrophysics Data System (ADS)

    Bleckmann, H.

    2012-04-01

    Mammals, like humans, rely mainly on acoustic, visual and olfactory information. In addition, most also use tactile and thermal cues for object identification and spatial orientation. Most non-mammalian animals also possess a visual, acoustic and olfactory system. However, besides these systems they have developed a large variety of highly specialized sensors. For instance, pyrophilous insects use infrared organs for the detection of forest fires while boas, pythons and pit vipers sense the infrared radiation emitted by prey animals. All cartilaginous and bony fishes as well as some amphibians have a mechanosensory lateral line. It is used for the detection of weak water motions and pressure gradients. For object detection and spatial orientation many species of nocturnal fish employ active electrolocation. This review describes certain aspects of the detection and processing of infrared, mechano- and electrosensory information. It will be shown that the study of these seemingly exotic sensory systems can lead to discoveries that are useful for the construction of technical sensors and artificial control systems.

  17. Object-oriented data model of the municipal transportation

    NASA Astrophysics Data System (ADS)

    Pan, Yuqing; Sheng, Yehua; Zhang, Guiying

    2008-10-01

    Transportation is one of the main challenges that every big city in the world faces. Managing municipal transportation with GIS is becoming an important trend, and the data model is the foundation of a transportation information system, so the organization and storage of the data must be considered carefully in the system design. The data model not only needs to meet the demands of transportation navigation, but also needs to achieve good visual effects and to support the management and maintenance of traffic information. Following object-oriented theory and methods, the road network is divided into segments and intersections. This paper analyzes lanes, markings, signs, and other transportation facilities and their relationships with segments and intersections, and constructs a municipal transportation data model that meets the demands of vehicle navigation, visualization, and management. The paper also organizes the various kinds of transportation data. Practice shows that this data model can satisfy the application demands of a traffic management system.

  18. Negative emotion enhances mnemonic precision and subjective feelings of remembering in visual long-term memory.

    PubMed

    Xie, Weizhen; Zhang, Weiwei

    2017-09-01

    Negative emotion sometimes enhances memory (higher accuracy and/or vividness, e.g., flashbulb memories). The present study investigates whether it is the qualitative (precision) or quantitative (the probability of successful retrieval) aspect of memory that drives these effects. In a visual long-term memory task, observers memorized colors (Experiment 1a) or orientations (Experiment 1b) of sequentially presented everyday objects under negative, neutral, or positive emotions induced with International Affective Picture System images. In a subsequent test phase, observers reconstructed objects' colors or orientations using the method of adjustment. We found that mnemonic precision was enhanced under the negative condition relative to the neutral and positive conditions. In contrast, the probability of successful retrieval was comparable across the emotion conditions. Furthermore, the boost in memory precision was associated with elevated subjective feelings of remembering (vividness and confidence) and metacognitive sensitivity in Experiment 2. Altogether, these findings suggest a novel precision-based account for emotional memories. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    PubMed

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize the neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue for current study in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, we propose a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is independently updated in iterations, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a new weight, the next iteration may have a bigger chance to rectify the local source location bias present in the previous iteration's solution. Simulation studies with comparison to FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
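
    A minimal sketch of the FOCUSS-style re-weighting idea with the neighbor modification described above: each iteration solves a weighted minimum-norm problem, and the new weight of a source point is taken from the largest current amplitude among the point and its neighbors. The lead field, neighbor list, and regularization are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def neighbor_weighted_focuss(L, b, neighbors, n_iters=20, lam=1e-6):
    """Iteratively re-weighted minimum-norm solution of b = L s with sparse s.

    L: (m, n) lead-field matrix; b: (m,) sensor data;
    neighbors[i]: array of indices of the spatial neighbors of source point i.
    """
    m, n = L.shape
    w = np.ones(n)
    s = np.zeros(n)
    for _ in range(n_iters):
        Lw = L * w                                 # scale columns by the current weights
        gram = Lw @ Lw.T + lam * np.eye(m)         # regularized (m x m) system
        s = w * (Lw.T @ np.linalg.solve(gram, b))  # weighted minimum-norm estimate
        amp = np.abs(s)
        # Maximum-neighbor weight: each point inherits the largest amplitude among
        # itself and its neighbors, helping to rectify local mislocalization.
        w = np.array([max(amp[i], amp[neighbors[i]].max(initial=0.0)) for i in range(n)])
        w /= w.max() + 1e-12
    return s

# Tiny synthetic example: 8 sensors, 30 sources on a line, 2 active sources.
rng = np.random.default_rng(0)
L = rng.normal(size=(8, 30))
s_true = np.zeros(30)
s_true[[7, 21]] = [1.0, -0.8]
b = L @ s_true
nbrs = [np.array([j for j in (i - 1, i + 1) if 0 <= j < 30]) for i in range(30)]
print(np.argsort(-np.abs(neighbor_weighted_focuss(L, b, nbrs)))[:4])
```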

  20. Independent sources of anisotropy in visual orientation representation: a visual and a cognitive oblique effect.

    PubMed

    Balikou, Panagiota; Gourtzelidis, Pavlos; Mantas, Asimakis; Moutoussis, Konstantinos; Evdokimidis, Ioannis; Smyrnis, Nikolaos

    2015-11-01

    The representation of visual orientation is more accurate for cardinal orientations compared to oblique, and this anisotropy has been hypothesized to reflect a low-level visual process (visual, "class 1" oblique effect). The reproduction of directional and orientation information also leads to a mean error away from cardinal orientations or directions. This anisotropy has been hypothesized to reflect a high-level cognitive process of space categorization (cognitive, "class 2," oblique effect). This space categorization process would be more prominent when the visual representation of orientation degrades, such as in the case of working memory with increasing cognitive load, leading to increasing magnitude of the "class 2" oblique effect, while the "class 1" oblique effect would remain unchanged. Two experiments were performed in which an array of orientation stimuli (1-4 items) was presented and then subjects had to realign a probe stimulus within the previously presented array. In the first experiment, the delay between stimulus presentation and probe varied, while in the second experiment, the stimulus presentation time varied. The variable error was larger for oblique compared to cardinal orientations in both experiments, reproducing the visual "class 1" oblique effect. The mean error also reproduced the tendency away from cardinal and toward the oblique orientations in both experiments (cognitive "class 2" oblique effect). The accuracy of the reproduced orientation degraded (increasing variable error) and the cognitive "class 2" oblique effect increased with increasing memory load (number of items) in both experiments and presentation time in the second experiment. In contrast, the visual "class 1" oblique effect was not significantly modulated by any one of these experimental factors. These results confirmed the theoretical predictions for the two anisotropies in visual orientation reproduction and provided support for models proposing the categorization of orientation in visual working memory.

  1. Influence of adaptive statistical iterative reconstruction algorithm on image quality in coronary computed tomography angiography

    PubMed Central

    Thygesen, Jesper; Gerke, Oke; Egstrup, Kenneth; Waaler, Dag; Lambrechtsen, Jess

    2016-01-01

    Background Coronary computed tomography angiography (CCTA) requires high spatial and temporal resolution, increased low contrast resolution for the assessment of coronary artery stenosis, plaque detection, and/or non-coronary pathology. Therefore, new reconstruction algorithms, particularly iterative reconstruction (IR) techniques, have been developed in an attempt to improve image quality with no cost in radiation exposure. Purpose To evaluate whether adaptive statistical iterative reconstruction (ASIR) enhances perceived image quality in CCTA compared to filtered back projection (FBP). Material and Methods Thirty patients underwent CCTA due to suspected coronary artery disease. Images were reconstructed using FBP, 30% ASIR, and 60% ASIR. Ninety image sets were evaluated by five observers using the subjective visual grading analysis (VGA) and assessed by proportional odds modeling. Objective quality assessment (contrast, noise, and the contrast-to-noise ratio [CNR]) was analyzed with linear mixed effects modeling on log-transformed data. The need for ethical approval was waived by the local ethics committee as the study only involved anonymously collected clinical data. Results VGA showed significant improvements in sharpness by comparing FBP with ASIR, resulting in odds ratios of 1.54 for 30% ASIR and 1.89 for 60% ASIR (P = 0.004). The objective measures showed significant differences between FBP and 60% ASIR (P < 0.0001) for noise, with an estimated ratio of 0.82, and for CNR, with an estimated ratio of 1.26. Conclusion ASIR improved the subjective image quality of parameter sharpness and, objectively, reduced noise and increased CNR. PMID:28405477

  2. Action Experience Changes Attention to Kinematic Cues

    PubMed Central

    Filippi, Courtney A.; Woodward, Amanda L.

    2016-01-01

    The current study used remote corneal reflection eye-tracking to examine the relationship between motor experience and action anticipation in 13-month-old infants. To measure online anticipation of actions, infants watched videos where the actor’s hand provided kinematic information (in its orientation) about the type of object that the actor was going to reach for. The actor’s hand orientation either matched the orientation of a rod (congruent cue) or did not match the orientation of the rod (incongruent cue). To examine relations between motor experience and action anticipation, we used a 2 (reach first vs. observe first) × 2 (congruent kinematic cue vs. incongruent kinematic cue) between-subjects design. We show that 13-month-old infants in the observe first condition spontaneously generate rapid online visual predictions to congruent hand orientation cues and do not visually anticipate when presented incongruent cues. We further demonstrate that the speed with which these infants generate predictions to congruent motor cues is correlated with their own ability to pre-shape their hands. Finally, we demonstrate that following reaching experience, infants generate rapid predictions to both congruent and incongruent hand shape cues—suggesting that short-term experience changes attention to kinematics. PMID:26913012

  3. Optical Associative Processors For Visual Perception

    NASA Astrophysics Data System (ADS)

    Casasent, David; Telfer, Brian

    1988-05-01

    We consider various associative processor modifications required to allow these systems to be used for visual perception, scene analysis, and object recognition. For these applications, decisions on the class of the objects present in the input image are required and thus heteroassociative memories are necessary (rather than the autoassociative memories that have been given most attention). We analyze the performance of both associative processors and note that there is considerable difference between heteroassociative and autoassociative memories. We describe associative processors suitable for realizing functions such as: distortion invariance (using linear discriminant function memory synthesis techniques), noise and image processing performance (using autoassociative memories in cascade with a heteroassociative processor and with a finite number of autoassociative memory iterations employed), shift invariance (achieved through the use of associative processors operating on feature space data), and the analysis of multiple objects in high noise (which is achieved using associative processing of the output from symbolic correlators). We detail and provide initial demonstrations of the use of associative processors operating on iconic, feature space and symbolic data, as well as adaptive associative processors.
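
    A minimal sketch of a linear heteroassociative memory of the kind discussed above: class codes, rather than reconstructions of the input, are retrieved from feature vectors via a sum of outer products. The feature vectors and the simple Hebbian synthesis are illustrative; the paper itself studies optical realizations and more sophisticated synthesis such as linear discriminant functions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training pairs: three object classes, each a 16-element bipolar
# feature vector (key) associated with a distinct 3-element class code.
keys = rng.choice([-1.0, 1.0], size=(3, 16))
labels = np.eye(3)

# Heteroassociative synthesis by summed outer products (Hebbian rule).
M = sum(np.outer(labels[k], keys[k]) for k in range(3)) / keys.shape[1]

def recall(feature_vector):
    """Retrieve the class code associated with a (possibly noisy) input pattern."""
    return M @ feature_vector

# Query with a noisy version of class 1; the largest output component typically
# indicates the stored class despite the corrupted input.
noisy = keys[1] * np.where(rng.random(16) < 0.15, -1.0, 1.0)  # flip ~15% of elements
print(np.argmax(recall(noisy)))
```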

  4. Electrophysiological correlates of retrieval orientation in reality monitoring.

    PubMed

    Rosburg, Timm; Mecklinger, Axel; Johansson, Mikael

    2011-02-14

    Retrieval orientation describes the modulation in the processing of retrieval cues by the nature of the targeted material in memory. Retrieval orientation is usually investigated by analyzing the cortical responses to new (unstudied) material when different memory contents are targeted. This approach avoids confounding effects of retrieval success. We investigated the neural correlates of retrieval orientation in reality monitoring with event-related potentials (ERPs) and assessed the impact of retrieval accuracy on the obtained ERP measures. Thirty-two subjects studied visually presented object names that were followed either by a picture of that object (perceived condition) or by the instruction to mentally generate such a picture (imagine condition). Subsequently, subjects had to identify object names of one study condition and reject object names of the second study condition together with newly presented object names. The data analysis showed that object names were more accurately identified when they had been presented in the perceived condition. Two topographically distinct ERP effects of retrieval orientation were revealed: From 600 to 1100 ms after stimulus presentation, ERPs were more positive at frontal electrode sites when object names from the imagine condition were targeted. The analysis of response-locked ERP data revealed an additional effect at posterior electrode sites, with more negative ERPs shortly after response onset when items from the imagine condition were targeted. The ERP effect at frontal electrode sites, but not at posterior electrode sites, was modulated by relative memory accuracy, with stronger effects in subjects who had lower memory accuracy for items of the imagine condition. The findings are suggestive of a contribution of frontal brain areas to retrieval orientation processes in reality monitoring and indicate that neural correlates of retrieval orientation can be modulated by retrieval effort, with stronger activation of these correlates with increasing task demands. Copyright © 2010 Elsevier Inc. All rights reserved.

  5. Adaptive Modeling Language and Its Derivatives

    NASA Technical Reports Server (NTRS)

    Chemaly, Adel

    2006-01-01

    Adaptive Modeling Language (AML) is the underlying language of an object-oriented, multidisciplinary, knowledge-based engineering framework. AML offers an advanced modeling paradigm with an open architecture, enabling the automation of the entire product development cycle, integrating product configuration, design, analysis, visualization, production planning, inspection, and cost estimation.

  6. An object-oriented approach to data display and storage: 3 years experience, 25,000 cases.

    PubMed

    Sainsbury, D A

    1993-11-01

    Object-oriented programming techniques were used to develop computer-based data display and storage systems. These have been operating in the 8 anaesthetising areas of the Adelaide Children's Hospital for 3 years. The analogue and serial outputs from an array of patient monitors are connected to IBM-compatible PC-XT computers. The information is displayed on a colour screen as waveform and trend graphs and in digital format in 'real time'. The trend data is printed simultaneously on a dot matrix printer. This data is also stored for 24 hours on 'hard' disk. The major benefit has been the provision of a single visual focus for all monitored variables. The automatic logging of data has been invaluable in the analysis of critical incidents. The systems were made possible by recent, rapid improvements in computer hardware and software. This paper traces the development of the program and demonstrates the advantages of object-oriented programming techniques.

  7. Proceedings of the NATO-Advanced Study Institute on Computer Aided Analysis of Rigid and Flexible Mechanical Systems Held in Troia, Portugal on 27 Jun-9 Jul, 1993. Volume 2. Contributed Papers

    DTIC Science & Technology

    1993-07-09

    Calculate ... and solve iteratively equation (18) for q and (1)-(5) ... Solve the velocity problem through equation (19) to calculate q and (6)-(10) to ... object-oriented models for the database to store the system information [1]. Using OOP on the formalism level is more difficult and a current field of ... Multidimensional Physical Systems: Graph-theoretic Modeling, Systems and Cybernetics, vol. 21 (1992) ... A RELATIONAL DATABASE FOR GENERAL

  8. Postural orientation in microgravity depends on straightening up movement performed

    NASA Astrophysics Data System (ADS)

    Vaugoyeau, Marianne; Assaiante, Christine

    2009-08-01

    Whether the vertical body orientation depends on the initial posture and/or the type of straightening-up movement is the main question raised in this paper. Another objective was to specify the compensatory role of visual input while adopting an erect posture during microgravity. The final body orientation was analysed in microgravity during parabolic flights, after straightening up from either (1) a crouching or (2) a sitting posture, with and without vision. The main results are the following: (1) an erect vertical final posture is correctly achieved after a sit-to-stand movement, whereas all subjects were tilted forward after straightening up from a crouching posture, and (2) vision may contribute to a correct final posture. These results suggest the existence of a re-weighting of the remaining sensory information (visual information, contact cutaneous cues, and proprioceptive information) under microgravity conditions. We can put forward the alternative hypothesis that the control of body orientation under microgravity may also be achieved on the basis of a postural body scheme, which seems to be dependent on the type of movement and/or the initial position of the whole body.

  9. Perception Of "Features" And "Objects": Applications To The Design Of Instrument Panel Displays

    NASA Astrophysics Data System (ADS)

    Poynter, Douglas; Czarnomski, Alan J.

    1988-10-01

    An experiment was conducted to determine whether so-called feature displays allow for faster and more accurate processing compared to object displays. Previous psychological studies indicate that features can be processed in parallel across the visual field, whereas objects must be processed one at a time with the aid of attentional focus. Numbers and letters are examples of objects; line orientation and color are examples of features. In this experiment, subjects were asked to search displays composed of up to 16 elements for the presence of specific elements. The ability to detect, localize, and identify targets was influenced by display format. Digital errors increased with the number of elements, the number of targets, and the distance of the target from the fixation point. Line orientation errors increased only with the number of targets. Several other display types were evaluated, and each produced a pattern of errors similar to either the digital or the line orientation format. Results of the study were discussed in terms of Feature Integration Theory, which distinguishes between elements that are processed with parallel versus serial mechanisms.

  10. Sexual orientation and spatial position effects on selective forms of object location memory.

    PubMed

    Rahman, Qazi; Newland, Cherie; Smyth, Beatrice Mary

    2011-04-01

    Prior research has demonstrated robust sex and sexual orientation-related differences in object location memory in humans. Here we show that this sexual variation may depend on the spatial position of target objects and the task-specific nature of the spatial array. We tested the recovery of object locations in three object arrays (object exchanges, object shifts, and novel objects) relative to veridical center (left compared to right side of the arrays) in a sample of 35 heterosexual men, 35 heterosexual women, and 35 homosexual men. Relative to heterosexual men, heterosexual women showed better location recovery in the right side of the array during object exchanges and homosexual men performed better in the right side during novel objects. However, the difference between heterosexual and homosexual men disappeared after controlling for IQ. Heterosexual women and homosexual men did not differ significantly from each other in location change detection with respect to task or side of array. These data suggest that visual space biases in processing categorical spatial positions may enhance aspects of object location memory in heterosexual women. Copyright © 2010 Elsevier Inc. All rights reserved.

  11. An Object-Oriented Approach for Analyzing CALIPSO's Profile Observations

    NASA Astrophysics Data System (ADS)

    Trepte, C. R.

    2016-12-01

    The CALIPSO satellite mission is a pioneering international partnership between NASA and the French Space Agency, CNES. Since launch on 28 April 2006, CALIPSO has been acquiring near-continuous lidar profile observations of clouds and aerosols in the Earth's atmosphere. Many studies have profitably used these observations to advance our understanding of climate, weather and air quality. For the most part, however, these studies have considered CALIPSO profile measurements independent from one another and have not related each to neighboring or family observations within a cloud element or aerosol feature. In this presentation we describe an alternative approach that groups measurements into objects visually identified from CALIPSO browse images. The approach makes use of the Visualization of CALIPSO (VOCAL) software tool that enables a user to outline a region of interest and save coordinates into a database. The selected features or objects can then be analyzed to explore spatial correlations over the feature's domain and construct bulk statistical properties for each structure. This presentation will show examples that examine cirrus and dust layers and will describe how this object-oriented approach can provide added insight into physical processes beyond conventional statistical treatments. It will further show results with combined measurements from other A-Train sensors to highlight advantages of viewing features in this manner.

  12. Muon tomography imaging algorithms for nuclear threat detection inside large volume containers with the Muon Portal detector

    NASA Astrophysics Data System (ADS)

    Riggi, S.; Antonuccio-Delogu, V.; Bandieramonte, M.; Becciani, U.; Costa, A.; La Rocca, P.; Massimino, P.; Petta, C.; Pistagna, C.; Riggi, F.; Sciacca, E.; Vitello, F.

    2013-11-01

    Muon tomographic visualization techniques try to reconstruct a 3D image as close as possible to the real localization of the objects being probed. Statistical algorithms under test for the reconstruction of muon tomographic images in the Muon Portal Project are discussed here. Autocorrelation analysis and clustering algorithms have been employed within the context of methods based on the Point Of Closest Approach (POCA) reconstruction tool. An iterative method based on the log-likelihood approach was also implemented. Relative merits of all such methods are discussed, with reference to full GEANT4 simulations of different scenarios, incorporating medium and high-Z objects inside a container.
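
    The POCA reconstruction mentioned above reduces, for each muon, to finding the point of closest approach between the incoming and outgoing track lines and the scattering angle between them. The sketch below shows that geometric core under an assumed track parametrization; the clustering, autocorrelation, and log-likelihood stages discussed in the record are not covered, and all variable names are illustrative.

```python
import numpy as np

def poca(p_in, d_in, p_out, d_out):
    """Point of closest approach between the incoming track (p_in + t*d_in)
    and the outgoing track (p_out + s*d_out), plus the scattering angle."""
    d_in = d_in / np.linalg.norm(d_in)
    d_out = d_out / np.linalg.norm(d_out)
    w0 = p_in - p_out
    a, b, c = d_in @ d_in, d_in @ d_out, d_out @ d_out
    d, e = d_in @ w0, d_out @ w0
    denom = a * c - b * b
    if np.isclose(denom, 0.0):              # (nearly) parallel tracks: no scattering
        t = s = 0.0
    else:
        t = (b * e - c * d) / denom
        s = (a * e - b * d) / denom
    closest_in = p_in + t * d_in
    closest_out = p_out + s * d_out
    point = 0.5 * (closest_in + closest_out)   # assign the scattering vertex here
    angle = np.arccos(np.clip(d_in @ d_out, -1.0, 1.0))
    return point, angle

# Each reconstructed (point, angle) pair is then accumulated into voxels;
# high-Z objects show up as voxels collecting large scattering angles.
```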

  13. Neural representations of contextual guidance in visual search of real-world scenes.

    PubMed

    Preston, Tim J; Guo, Fei; Das, Koel; Giesbrecht, Barry; Eckstein, Miguel P

    2013-05-01

    Exploiting scene context and object-object co-occurrence is critical in guiding eye movements and facilitating visual search, yet the mediating neural mechanisms are unknown. We used functional magnetic resonance imaging while observers searched for target objects in scenes and used multivariate pattern analyses (MVPA) to show that the lateral occipital complex (LOC) can predict the coarse spatial location of observers' expectations about the likely location of 213 different targets absent from the scenes. In addition, we found weaker but significant representations of context location in an area related to the orienting of attention (intraparietal sulcus, IPS) as well as a region related to scene processing (retrosplenial cortex, RSC). Importantly, the degree of agreement among 100 independent raters about the likely location to contain a target object in a scene correlated with LOC's ability to predict the contextual location while weaker but significant effects were found in IPS, RSC, the human motion area, and early visual areas (V1, V3v). When contextual information was made irrelevant to observers' behavioral task, the MVPA analysis of LOC and the other areas' activity ceased to predict the location of context. Thus, our findings suggest that the likely locations of targets in scenes are represented in various visual areas with LOC playing a key role in contextual guidance during visual search of objects in real scenes.

  14. Shape of magnifiers affects controllability in children with visual impairment.

    PubMed

    Liebrand-Schurink, Joyce; Boonstra, F Nienke; van Rens, Ger H M B; Cillessen, Antonius H N; Meulenbroek, Ruud G J; Cox, Ralf F A

    2016-12-01

    This study aimed to examine the controllability of cylinder-shaped and dome-shaped magnifiers in young children with visual impairment. This study investigates goal-directed arm movements in low-vision aid use (stand and dome magnifier-like object) in a group of young children with visual impairment (n = 56) compared to a group of children with normal sight (n = 66). Children with visual impairment and children with normal sight aged 4-8 years executed two types of movements (cyclic and discrete) in two orientations (vertical or horizontal) over two distances (10 cm and 20 cm) with two objects resembling the size and shape of regularly prescribed stand and dome magnifiers. The visually impaired children performed slower movements than the normally sighted children. In both groups, the accuracy and speed of the reciprocal aiming movements improved significantly with age. Surprisingly, in both groups, the performance with the dome-shaped object was significantly faster (in the 10 cm condition and 20 cm condition with discrete movements) and more accurate (in the 20 cm condition) than with the stand-shaped object. From a controllability perspective, this study suggests that it is better to prescribe dome-shaped than cylinder-shaped magnifiers to young children with visual impairment. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  15. The 4-D approach to visual control of autonomous systems

    NASA Technical Reports Server (NTRS)

    Dickmanns, Ernst D.

    1994-01-01

    Development of a 4-D approach to dynamic machine vision is described. Core elements of this method are spatio-temporal models oriented towards objects and laws of perspective projection in a forward mode. Integration of multi-sensory measurement data was achieved through spatio-temporal models as invariants for object recognition. Situation assessment and long-term predictions were allowed through maintenance of a symbolic 4-D image of processes involving objects. Behavioral capabilities were easily realized by state feedback and feed-forward control.

  16. Biologically Inspired Model for Inference of 3D Shape from Texture

    PubMed Central

    Gomez, Olman; Neumann, Heiko

    2016-01-01

    A biologically inspired model architecture for inferring 3D shape from texture is proposed. The model is hierarchically organized into modules roughly corresponding to visual cortical areas in the ventral stream. Initial orientation selective filtering decomposes the input into low-level orientation and spatial frequency representations. Grouping of spatially anisotropic orientation responses builds sketch-like representations of surface shape. Gradients in orientation fields and subsequent integration infers local surface geometry and globally consistent 3D depth. From the distributions in orientation responses summed in frequency, an estimate of the tilt and slant of the local surface can be obtained. The model suggests how 3D shape can be inferred from texture patterns and their image appearance in a hierarchically organized processing cascade along the cortical ventral stream. The proposed model integrates oriented texture gradient information that is encoded in distributed maps of orientation-frequency representations. The texture energy gradient information is defined by changes in the grouped summed normalized orientation-frequency response activity extracted from the textured object image. This activity is integrated by directed fields to generate a 3D shape representation of a complex object with depth ordering proportional to the fields output, with higher activity denoting larger distance in relative depth away from the viewer. PMID:27649387
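
    The processing cascade described above starts from orientation- and frequency-selective filtering and the spatial gradients of the summed responses. The snippet below is a generic sketch of that first stage only (a quadrature Gabor filter bank, energy summed over frequencies, and its spatial gradient); the kernel parameters and names are assumptions, and the grouping and depth-integration stages of the model are not reproduced.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_pair(frequency, theta, sigma=4.0, size=21):
    """Even/odd (quadrature) Gabor kernels at a given spatial frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return (envelope * np.cos(2 * np.pi * frequency * xr),
            envelope * np.sin(2 * np.pi * frequency * xr))

def orientation_energy(image, thetas, frequencies):
    """Energy (quadrature-pair) responses summed over frequencies, one map per orientation."""
    energy = np.zeros((len(thetas),) + image.shape)
    for i, theta in enumerate(thetas):
        for f in frequencies:
            even, odd = gabor_pair(f, theta)
            e = fftconvolve(image, even, mode="same")
            o = fftconvolve(image, odd, mode="same")
            energy[i] += np.sqrt(e**2 + o**2)
    return energy

# Texture gradient: spatial change of the summed orientation energy, a proxy for the
# cue the model integrates into relative depth.
# image = ...  # 2D grayscale array of a textured object
# energy = orientation_energy(image, np.linspace(0, np.pi, 8, endpoint=False), [0.05, 0.1, 0.2])
# gy, gx = np.gradient(energy.sum(axis=0))
```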

  17. Utilizing OODB schema modeling for vocabulary management.

    PubMed Central

    Gu, H.; Cimino, J. J.; Halper, M.; Geller, J.; Perl, Y.

    1996-01-01

    Comprehension of complex controlled vocabularies is often difficult. We present a method, facilitated by an object-oriented database, for depicting such a vocabulary (the Medical Entities Dictionary (MED) from the Columbia-Presbyterian Medical Center) in a schematic way which uses a sparse inheritance network of area classes. The resulting Object Oriented Health Vocabulary repository (OOHVR) allows visualization of the 43,000 MED concepts as 90 area classes. This view has provided valuable information to those responsible for maintaining the MED. As a result, the MED organization has been improved and some previously-unrecognized errors and inconsistencies have been removed. We believe that this schematic approach allows improved comprehension of the gestalt of a large controlled medical vocabulary. PMID:8947671

  18. Antennal pointing at a looming object in the cricket Acheta domesticus.

    PubMed

    Yamawaki, Yoshifumi; Ishibashi, Wakako

    2014-01-01

    Antennal pointing responses to approaching objects were observed in the house cricket Acheta domesticus. In response to a ball approaching from the lateral side, crickets oriented the antenna ipsilateral to the ball towards it. In response to a ball approaching from the front, crickets oriented both antennae forward. Response rates of antennal pointing were higher when the ball was approaching from the front than from behind. The antennal angle ipsilateral to the approaching ball was positively correlated with approaching angle of the ball. Obstructing the cricket's sight decreased the response rate of antennal pointing, suggesting that this response was elicited mainly by visual stimuli. Although the response rates of antennal pointing decreased when the object ceased its approach at a great distance from the cricket, antennal pointing appeared to be resistant to habituation and was not substantially affected by the velocity, size and trajectory of an approaching ball. When presented with computer-generated visual stimuli, crickets frequently showed the antennal pointing response to a darkening stimulus as well as looming and linearly-expanding stimuli. Drifting gratings rarely elicited the antennal pointing. These results suggest that luminance change is sufficient to elicit antennal pointing. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Attention modulates perception of visual space

    PubMed Central

    Zhou, Liu; Deng, Chenglong; Ooi, Teng Leng; He, Zijiang J.

    2017-01-01

    Attention readily facilitates the detection and discrimination of objects, but it is not known whether it helps to form the vast volume of visual space that contains the objects and where actions are implemented. Conventional wisdom suggests not, given the effortless ease with which we perceive three-dimensional (3D) scenes on opening our eyes. Here, we show evidence to the contrary. In Experiment 1, the observer judged the location of a briefly presented target, placed either on the textured ground or ceiling surface. Judged location was more accurate for a target on the ground, provided that the ground was visible and that the observer directed attention to the lower visual field, not the upper field. This reveals that attention facilitates space perception with reference to the ground. Experiment 2 showed that judged location of a target in mid-air, with both ground and ceiling surfaces present, was more accurate when the observer directed their attention to the lower visual field; this indicates that the attention effect extends to visual space above the ground. These findings underscore the role of attention in anchoring visual orientation in space, which is arguably a primal event that enhances one’s ability to interact with objects and surface layouts within the visual space. The fact that the effect of attention was contingent on the ground being visible suggests that our terrestrial visual system is best served by its ecological niche. PMID:29177198

  20. Object preference by walking fruit flies, Drosophila melanogaster, is mediated by vision and graviperception

    PubMed Central

    Robie, Alice A.; Straw, Andrew D.; Dickinson, Michael H.

    2010-01-01

    Walking fruit flies, Drosophila melanogaster, use visual information to orient towards salient objects in their environment, presumably as a search strategy for finding food, shelter or other resources. Less is known, however, about the role of vision or other sensory modalities such as mechanoreception in the evaluation of objects once they have been reached. To study the role of vision and mechanoreception in exploration behavior, we developed a large arena in which we could track individual fruit flies as they walked through either simple or more topologically complex landscapes. When exploring a simple, flat environment lacking three-dimensional objects, flies used visual cues from the distant background to stabilize their walking trajectories. When exploring an arena containing an array of cones, differing in geometry, flies actively oriented towards, climbed onto, and explored the objects, spending most of their time on the tallest, steepest object. A fly's behavioral response to the geometry of an object depended upon the intrinsic properties of each object and not a relative assessment to other nearby objects. Furthermore, the preference was not due to a greater attraction towards tall, steep objects, but rather a change in locomotor behavior once a fly reached and explored the surface. Specifically, flies are much more likely to stop walking for long periods when they are perched on tall, steep objects. Both the vision system and the antennal chordotonal organs (Johnston's organs) provide sufficient information about the geometry of an object to elicit the observed change in locomotor behavior. Only when both these sensory systems were impaired did flies not show the behavioral preference for the tall, steep objects. PMID:20581279

  1. Localizing intracavitary brachytherapy applicators from cone-beam CT x-ray projections via a novel iterative forward projection matching algorithm.

    PubMed

    Pokhrel, Damodar; Murphy, Martin J; Todor, Dorin A; Weiss, Elisabeth; Williamson, Jeffrey F

    2011-02-01

    To present a novel method for reconstructing the 3D pose (position and orientation) of radio-opaque applicators of known but arbitrary shape from a small set of 2D x-ray projections in support of intraoperative brachytherapy planning. The generalized iterative forward projection matching (gIFPM) algorithm finds the six degree-of-freedom pose of an arbitrary rigid object by minimizing the sum-of-squared-intensity differences (SSQD) between the computed and experimentally acquired autosegmented projection of the objects. Starting with an initial estimate of the object's pose, gIFPM iteratively refines the pose parameters (3D position and three Euler angles) until the SSQD converges. The object, here specialized to a Fletcher-Weeks intracavitary brachytherapy (ICB) applicator, is represented by a fine mesh of discrete points derived from complex combinatorial geometric models of the actual applicators. Three pairs of computed and measured projection images with known imaging geometry are used. Projection images of an intrauterine tandem and colpostats were acquired from an ACUITY cone-beam CT digital simulator. An image postprocessing step was performed to create blurred, binary, applicator-only images. To quantify gIFPM accuracy, the reconstructed 3D pose of the applicator model was forward projected and overlaid with the measured images, and the nearest-neighbor applicator positional difference was calculated empirically for each image pair. In the numerical simulations, the tandem and colpostats positions (x,y,z) and orientations (alpha, beta, gamma) were estimated with accuracies of 0.6 mm and 2 degrees, respectively. For experimentally acquired images of actual applicators, the residual 2D registration error was less than 1.8 mm for each image pair, corresponding to about 1 mm positioning accuracy at isocenter, with a total computation time of less than 1.5 min on a 1 GHz processor. This work describes a novel, accurate, fast, and completely automatic method to localize radio-opaque applicators of arbitrary shape from measured 2D x-ray projections. The results demonstrate approximately 1 mm accuracy when compared against the measured applicator projections. No lateral film is needed. By localizing the applicator internal structure as well as radioactive sources, the effect of intra-applicator and interapplicator attenuation can be included in the resultant dose calculations. Further validation tests using clinically acquired tandem and colpostats images will be performed for the accurate and robust applicator/sources localization in ICB patients.
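
    The gIFPM objective described above is the sum of squared intensity differences between computed and measured projections, minimized over a six-parameter pose (translation plus three Euler angles). The sketch below mirrors that outer loop in a heavily simplified form; the pinhole projection model, the derivative-free optimizer, and all variable names are assumptions, and the blurring and autosegmentation steps are omitted.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def project(points_3d, projection_matrix, image_shape):
    """Forward-project a 3D point mesh into a binary 2D image (pinhole model).
    Stands in for the computed applicator projection; the exact imaging
    geometry of the simulator is not modeled."""
    homog = np.c_[points_3d, np.ones(len(points_3d))]
    uvw = homog @ projection_matrix.T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)
    img = np.zeros(image_shape)
    valid = ((uv[:, 0] >= 0) & (uv[:, 0] < image_shape[1]) &
             (uv[:, 1] >= 0) & (uv[:, 1] < image_shape[0]))
    img[uv[valid, 1], uv[valid, 0]] = 1.0
    return img

def ssqd(pose, mesh, projections, matrices):
    """Sum of squared intensity differences over all acquired views."""
    t, euler = pose[:3], pose[3:]
    moved = mesh @ Rotation.from_euler("xyz", euler, degrees=True).as_matrix().T + t
    return sum(np.sum((project(moved, P, img.shape) - img) ** 2)
               for img, P in zip(projections, matrices))

# mesh: N x 3 point model of the applicator; projections/matrices: measured binary
# images and their known projection matrices (three pairs in the record).
# result = minimize(ssqd, x0=np.zeros(6), args=(mesh, projections, matrices),
#                   method="Nelder-Mead")   # pose = (x, y, z, alpha, beta, gamma)
```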

  2. Localizing intracavitary brachytherapy applicators from cone-beam CT x-ray projections via a novel iterative forward projection matching algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pokhrel, Damodar; Murphy, Martin J.; Todor, Dorin A.

    2011-02-15

    Purpose: To present a novel method for reconstructing the 3D pose (position and orientation) of radio-opaque applicators of known but arbitrary shape from a small set of 2D x-ray projections in support of intraoperative brachytherapy planning. Methods: The generalized iterative forward projection matching (gIFPM) algorithm finds the six degree-of-freedom pose of an arbitrary rigid object by minimizing the sum-of-squared-intensity differences (SSQD) between the computed and experimentally acquired autosegmented projection of the objects. Starting with an initial estimate of the object's pose, gIFPM iteratively refines the pose parameters (3D position and three Euler angles) until the SSQD converges. The object, here specialized to a Fletcher-Weeks intracavitary brachytherapy (ICB) applicator, is represented by a fine mesh of discrete points derived from complex combinatorial geometric models of the actual applicators. Three pairs of computed and measured projection images with known imaging geometry are used. Projection images of an intrauterine tandem and colpostats were acquired from an ACUITY cone-beam CT digital simulator. An image postprocessing step was performed to create blurred, binary, applicator-only images. To quantify gIFPM accuracy, the reconstructed 3D pose of the applicator model was forward projected and overlaid with the measured images, and the nearest-neighbor applicator positional difference was calculated empirically for each image pair. Results: In the numerical simulations, the tandem and colpostats positions (x,y,z) and orientations (α, β, γ) were estimated with accuracies of 0.6 mm and 2 degrees, respectively. For experimentally acquired images of actual applicators, the residual 2D registration error was less than 1.8 mm for each image pair, corresponding to about 1 mm positioning accuracy at isocenter, with a total computation time of less than 1.5 min on a 1 GHz processor. Conclusions: This work describes a novel, accurate, fast, and completely automatic method to localize radio-opaque applicators of arbitrary shape from measured 2D x-ray projections. The results demonstrate approximately 1 mm accuracy when compared against the measured applicator projections. No lateral film is needed. By localizing the applicator internal structure as well as radioactive sources, the effect of intra-applicator and interapplicator attenuation can be included in the resultant dose calculations. Further validation tests using clinically acquired tandem and colpostats images will be performed for the accurate and robust applicator/sources localization in ICB patients.

  3. Metric invariance in object recognition: a review and further evidence.

    PubMed

    Cooper, E E; Biederman, I; Hummel, J E

    1992-06-01

    Phenomenologically, human shape recognition appears to be invariant with changes of orientation in depth (up to parts occlusion), position in the visual field, and size. Recent versions of template theories (e.g., Ullman, 1989; Lowe, 1987) assume that these invariances are achieved through the application of transformations such as rotation, translation, and scaling of the image so that it can be matched metrically to a stored template. Presumably, such transformations would require time for their execution. We describe recent priming experiments in which the effects of a prior brief presentation of an image on its subsequent recognition are assessed. The results of these experiments indicate that the invariance is complete: The magnitude of visual priming (as distinct from name or basic level concept priming) is not affected by a change in position, size, orientation in depth, or the particular lines and vertices present in the image, as long as representations of the same components can be activated. An implemented seven layer neural network model (Hummel & Biederman, 1992) that captures these fundamental properties of human object recognition is described. Given a line drawing of an object, the model activates a viewpoint-invariant structural description of the object, specifying its parts and their interrelations. Visual priming is interpreted as a change in the connection weights for the activation of: a) cells, termed geon feature assemblies (GFAs), that conjoin the output of units that represent invariant, independent properties of a single geon and its relations (such as its type, aspect ratio, relations to other geons), or b) a change in the connection weights by which several GFAs activate a cell representing an object.

  4. A novel visual saliency analysis model based on dynamic multiple feature combination strategy

    NASA Astrophysics Data System (ADS)

    Lv, Jing; Ye, Qi; Lv, Wen; Zhang, Libao

    2017-06-01

    The human visual system can quickly focus on a small number of salient objects. This process is known as visual saliency analysis, and these salient objects are called the focus of attention (FOA). The visual saliency analysis mechanism can be used to extract the salient regions and analyze the saliency of objects in an image, which is time-saving and can avoid unnecessary costs of computing resources. In this paper, a novel visual saliency analysis model based on a dynamic multiple-feature combination strategy is introduced. In the proposed model, we first generate multi-scale feature maps of intensity, color and orientation features using Gaussian pyramids and the center-surround difference. Then, we evaluate the contribution of all feature maps to the saliency map according to the area of salient regions and their average intensity, and attach different weights to different features according to their importance. Finally, we choose the largest salient region generated by the region growing method to perform the evaluation. Experimental results show that the proposed model can not only achieve higher accuracy in saliency map computation compared with other traditional saliency analysis models, but also extract salient regions with arbitrary shapes, which is of great value for image analysis and understanding.
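
    The combination strategy described above weights each feature's conspicuity map by the area and average intensity of its salient regions before summation. The sketch below is one plausible reading of that rule using a difference-of-Gaussians center-surround operator; the threshold, sigmas, and opponency channels are assumptions, and the pyramid and region-growing stages are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(feature_map, center_sigma=2, surround_sigma=8):
    """Difference-of-Gaussians approximation of the center-surround operator."""
    return np.abs(gaussian_filter(feature_map, center_sigma) -
                  gaussian_filter(feature_map, surround_sigma))

def combine_dynamically(conspicuity_maps, threshold=0.5):
    """Weight each conspicuity map by the area and mean intensity of its salient
    regions, then sum into a single saliency map."""
    saliency = np.zeros_like(next(iter(conspicuity_maps.values())))
    for cmap in conspicuity_maps.values():
        cmap = cmap / (cmap.max() + 1e-9)
        salient = cmap > threshold
        weight = salient.mean() * (cmap[salient].mean() if salient.any() else 0.0)
        saliency += weight * cmap
    return saliency / (saliency.max() + 1e-9)

# image: H x W x 3 float RGB array
# intensity = image.mean(axis=2)
# rg = image[..., 0] - image[..., 1]            # crude red-green opponency channel
# maps = {"intensity": center_surround(intensity), "color": center_surround(rg)}
# saliency_map = combine_dynamically(maps)
```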

  5. Spatial attention improves the quality of population codes in human visual cortex.

    PubMed

    Saproo, Sameer; Serences, John T

    2010-08-01

    Selective attention enables sensory input from behaviorally relevant stimuli to be processed in greater detail, so that these stimuli can more accurately influence thoughts, actions, and future goals. Attention has been shown to modulate the spiking activity of single feature-selective neurons that encode basic stimulus properties (color, orientation, etc.). However, the combined output from many such neurons is required to form stable representations of relevant objects and little empirical work has formally investigated the relationship between attentional modulations on population responses and improvements in encoding precision. Here, we used functional MRI and voxel-based feature tuning functions to show that spatial attention induces a multiplicative scaling in orientation-selective population response profiles in early visual cortex. In turn, this multiplicative scaling correlates with an improvement in encoding precision, as evidenced by a concurrent increase in the mutual information between population responses and the orientation of attended stimuli. These data therefore demonstrate how multiplicative scaling of neural responses provides at least one mechanism by which spatial attention may improve the encoding precision of population codes. Increased encoding precision in early visual areas may then enhance the speed and accuracy of perceptual decisions computed by higher-order neural mechanisms.
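
    The record above links multiplicative scaling of orientation-tuned population responses to better encoding precision. The short simulation below illustrates the underlying point with a generic population of Poisson-spiking, von Mises-tuned units, using Fisher information as a convenient stand-in for the mutual-information measure used in the study; the tuning parameters and gain values are assumptions.

```python
import numpy as np

def von_mises_tuning(theta, preferred, gain, kappa=2.0, baseline=1.0):
    """Orientation tuning curve (mean spike count) with a multiplicative gain."""
    return gain * (baseline + np.exp(kappa * np.cos(2 * (theta - preferred))))

def fisher_information(theta, preferred, gain, kappa=2.0, baseline=1.0):
    """For independent Poisson neurons, I(theta) = sum_i f_i'(theta)^2 / f_i(theta);
    a multiplicative gain g therefore scales I(theta) by g."""
    f = von_mises_tuning(theta, preferred, gain, kappa, baseline)
    df = -2 * gain * kappa * np.sin(2 * (theta - preferred)) * \
         np.exp(kappa * np.cos(2 * (theta - preferred)))
    return np.sum(df**2 / f)

preferred = np.linspace(0, np.pi, 16, endpoint=False)   # a small model population
theta = np.pi / 3                                        # attended orientation
print(fisher_information(theta, preferred, gain=1.0))    # without attention
print(fisher_information(theta, preferred, gain=1.3))    # with attentional gain: larger
```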

  6. Orientation and disorientation in aviation

    PubMed Central

    2013-01-01

    On the ground, the essential requirement to remain orientated is a largely unconscious activity. In flight, orientation requires a conscious effort by the pilot particularly when the visual environment becomes degraded and a deceptive force environment becomes the frame of reference. Furthermore, an unusual force environment can determine the apparent location of objects within a limited visual scene, sometimes with disastrous consequences. This review outlines the sources of pilot disorientation that arise from the visual and force environment of flight and their interaction. It challenges the value of the traditional illusion-based approach to the subject both to aircrew and to surveys of disorientation. Also, it questions the emphasis on the shortcomings of vestibular function as the physiological basis for disorientation. While military accidents from all causes have shown a decline, there has been no corresponding reduction in accidents involving disorientation, 85% of which are the results of unrecognised disorientation. This finding has implications for the way in which pilots are taught about disorientation in the interest of enhanced flight safety. It argues for a greater use of conventional fixed base simulators to create disorientating scenarios rather than complex motion devices to create unusual sensations. PMID:23849216

  7. Emergence of Orientation Selectivity in the Mammalian Visual Pathway

    PubMed Central

    Scholl, Benjamin; Tan, Andrew Y. Y.; Corey, Joseph

    2013-01-01

    Orientation selectivity is a property of mammalian primary visual cortex (V1) neurons, yet its emergence along the visual pathway varies across species. In carnivores and primates, elongated receptive fields first appear in V1, whereas in lagomorphs such receptive fields emerge earlier, in the retina. Here we examine the mouse visual pathway and reveal the existence of orientation selectivity in lateral geniculate nucleus (LGN) relay cells. Cortical inactivation does not reduce this orientation selectivity, indicating that cortical feedback is not its source. Orientation selectivity is similar for LGN relay cells spiking and subthreshold input to V1 neurons, suggesting that cortical orientation selectivity is inherited from the LGN in mouse. In contrast, orientation selectivity of cat LGN relay cells is small relative to subthreshold inputs onto V1 simple cells. Together, these differences show that although orientation selectivity exists in visual neurons of both rodents and carnivores, its emergence along the visual pathway, and thus its underlying neuronal circuitry, is fundamentally different. PMID:23804085

  8. Perceived orientation of a runway model in nonpilots during simulated night approaches to landing.

    DOT National Transportation Integrated Search

    1977-07-01

    Illusions due to reduced visual cues at night have long been cited as contributing to the dangerous tendency of pilots to fly too low during night landing approaches. The cue of motion parallax (a difference in rate of apparent movement of objects in...

  9. Mirror-image discrimination in the literate brain: a causal role for the left occipitotemporal cortex.

    PubMed

    Nakamura, Kimihiro; Makuuchi, Michiru; Nakajima, Yasoichi

    2014-01-01

    Previous studies show that the primate and human visual system automatically generates a common and invariant representation from a visual object image and its mirror reflection. For humans, however, this mirror-image generalization seems to be partially suppressed through literacy acquisition, since literate adults have greater difficulty in recognizing mirror images of letters than those of other visual objects. At the neural level, such a category-specific effect on mirror-image processing has been associated with the left occipitotemporal cortex (L-OTC), but it remains unclear whether the apparent "inhibition" on mirror letters is mediated by suppressing mirror-image representations covertly generated from normal letter stimuli. Using transcranial magnetic stimulation (TMS), we examined how transient disruption of the L-OTC affects mirror-image recognition during a same-different judgment task, while varying the semantic category (letters and non-letter objects), identity (same or different), and orientation (same or mirror-reversed) of the first and second stimuli. We found that magnetic stimulation of the L-OTC produced a significant delay in mirror-image recognition for letter-strings but not for other objects. By contrast, this category-specific impact was not observed when TMS was applied to other control sites, including the right homologous area and vertex. These results thus demonstrate a causal link between the L-OTC and mirror-image discrimination in literate people. We further suggest that left-right sensitivity for letters is not achieved by a local inhibitory mechanism in the L-OTC but probably relies on the inter-regional coupling with other orientation-sensitive occipito-parietal regions.

  10. Gravity Influences the Visual Representation of Object Tilt in Parietal Cortex

    PubMed Central

    Angelaki, Dora E.

    2014-01-01

    Sensory systems encode the environment in egocentric (e.g., eye, head, or body) reference frames, creating inherently unstable representations that shift and rotate as we move. However, it is widely speculated that the brain transforms these signals into an allocentric, gravity-centered representation of the world that is stable and independent of the observer's spatial pose. Where and how this representation may be achieved is currently unknown. Here we demonstrate that a subpopulation of neurons in the macaque caudal intraparietal area (CIP) visually encodes object tilt in nonegocentric coordinates defined relative to the gravitational vector. Neuronal responses to the tilt of a visually presented planar surface were measured with the monkey in different spatial orientations (upright and rolled left/right ear down) and then compared. This revealed a continuum of representations in which planar tilt was encoded in a gravity-centered reference frame in approximately one-tenth of the comparisons, intermediate reference frames ranging between gravity-centered and egocentric in approximately two-tenths of the comparisons, and in an egocentric reference frame in less than half of the comparisons. Altogether, almost half of the comparisons revealed a shift in the preferred tilt and/or a gain change consistent with encoding object orientation in nonegocentric coordinates. Through neural network modeling, we further show that a purely gravity-centered representation of object tilt can be achieved directly from the population activity of CIP-like units. These results suggest that area CIP may play a key role in creating a stable, allocentric representation of the environment defined relative to an “earth-vertical” direction. PMID:25339732

  11. Global-local visual biases correspond with visual-spatial orientation.

    PubMed

    Basso, Michael R; Lowery, Natasha

    2004-02-01

    Within the past decade, numerous investigations have demonstrated reliable associations of global-local visual processing biases with right and left hemisphere function, respectively (cf. Van Kleeck, 1989). Yet the relevance of these biases to other cognitive functions is not well understood. Towards this end, the present research examined the relationship between global-local visual biases and perception of visual-spatial orientation. Twenty-six women and 23 men completed a global-local judgment task (Kimchi and Palmer, 1982) and the Judgment of Line Orientation Test (JLO; Benton, Sivan, Hamsher, Varney, and Spreen, 1994), a measure of visual-spatial orientation. As expected, men had better performance on JLO. Extending previous findings, global biases were related to better visual-spatial acuity on JLO. The findings suggest that global-local biases and visual-spatial orientation may share underlying cerebral mechanisms. Implications of these findings for other visually mediated cognitive outcomes are discussed.

  12. Robot Evolutionary Localization Based on Attentive Visual Short-Term Memory

    PubMed Central

    Vega, Julio; Perdices, Eduardo; Cañas, José M.

    2013-01-01

    Cameras are one of the most relevant sensors in autonomous robots. However, two of their challenges are to extract useful information from captured images, and to manage the small field of view of regular cameras. This paper proposes implementing a dynamic visual memory to store the information gathered from a moving camera on board a robot, followed by an attention system to choose where to look with this mobile camera, and a visual localization algorithm that incorporates this visual memory. The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the current camera field of view. The attention module takes into account the need to reobserve objects in the visual memory and the need to explore new areas. The visual memory is useful also in localization tasks, as it provides more information about robot surroundings than the current instantaneous image. This visual system is intended as underlying technology for service robot applications in real people's homes. Several experiments have been carried out, both with simulated and real Pioneer and Nao robots, to validate the system and each of its components in office scenarios. PMID:23337333
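
    A key ingredient described above is an attention module that balances re-observing objects already in visual memory against exploring new areas with the mobile camera. The sketch below shows one plausible scoring scheme for that trade-off; the class name, the staleness rate, and the exploration drive are illustrative assumptions, not the actual algorithm run on the Pioneer and Nao robots.

```python
import time

class AttentionScheduler:
    """Choose where to point the camera: re-observe a memorized object whose
    estimate is getting stale, or explore a new pan angle."""

    def __init__(self, exploration_drive=1.0, staleness_rate=0.2):
        self.last_seen = {}                    # object id -> time of last observation
        self.exploration_drive = exploration_drive
        self.staleness_rate = staleness_rate   # salience gained per second unobserved

    def observe(self, obj_id):
        self.last_seen[obj_id] = time.monotonic()

    def next_target(self, unexplored_angles):
        now = time.monotonic()
        candidates = [(self.staleness_rate * (now - t), ("reobserve", obj))
                      for obj, t in self.last_seen.items()]
        candidates += [(self.exploration_drive, ("explore", a)) for a in unexplored_angles]
        return max(candidates)[1] if candidates else None

# scheduler = AttentionScheduler()
# scheduler.observe("doorway"); scheduler.observe("table")
# print(scheduler.next_target(unexplored_angles=[-60, 0, 60]))
```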

  13. Object-oriented approach to fast display of electrophysiological data under MS-windows.

    PubMed

    Marion-Poll, F

    1995-12-01

    Microcomputers provide neuroscientists with an alternative to a host of laboratory equipment to record and analyze electrophysiological data. Object-oriented programming tools provide an essential link between custom data acquisition and analysis needs and general software packages. In this paper, we outline the layout of basic objects that display and manipulate electrophysiological data files. Visual inspection of the recordings is a basic requirement of any data analysis software. We present an approach that allows flexible and fast display of large data sets. This approach involves constructing an intermediate representation of the data in order to lower the number of actual points displayed while preserving the visual appearance of the data. The second group of objects is related to the management of lists of data files. Typical experiments designed to test the biological activity of pharmacological products include scores of files. Data manipulation and analysis are facilitated by creating multi-document objects that include the names of all experiment files. Implementation steps of both objects are described for an MS-Windows-hosted application.
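
    The fast-display strategy described above builds an intermediate representation that reduces the number of plotted points while preserving the visual appearance of the trace. A common way to do this is per-pixel-column min/max decimation; the sketch below is a generic illustration of that idea, not the original MS-Windows implementation.

```python
import numpy as np

def minmax_decimate(trace, screen_width):
    """Reduce a long 1-D recording to 2 * screen_width points by keeping the
    minimum and maximum of each pixel-wide bin, which preserves the visual
    envelope of the waveform when drawn as a polyline."""
    n = len(trace)
    edges = np.linspace(0, n, screen_width + 1, dtype=int)
    reduced = np.empty(2 * screen_width)
    for i in range(screen_width):
        chunk = trace[edges[i]:edges[i + 1]]
        if chunk.size == 0:                    # more pixels than samples in this bin
            reduced[2 * i] = reduced[2 * i + 1] = reduced[2 * i - 1] if i else 0.0
        else:
            reduced[2 * i], reduced[2 * i + 1] = chunk.min(), chunk.max()
    return reduced

# One second of a 20 kHz extracellular recording drawn into an 800-pixel panel
# needs only 1600 plotted points instead of 20000:
# display_trace = minmax_decimate(recording, screen_width=800)
```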

  14. Inferring the direction of implied motion depends on visual awareness

    PubMed Central

    Faivre, Nathan; Koch, Christof

    2014-01-01

    Visual awareness of an event, object, or scene is, by essence, an integrated experience, whereby different visual features composing an object (e.g., orientation, color, shape) appear as a unified percept and are processed as a whole. Here, we tested in human observers whether perceptual integration of static motion cues depends on awareness by measuring the capacity to infer the direction of motion implied by a static visible or invisible image under continuous flash suppression. Using measures of directional adaptation, we found that visible but not invisible implied motion adaptors biased the perception of real motion probes. In a control experiment, we found that invisible adaptors implying motion primed the perception of subsequent probes when they were identical (i.e., repetition priming), but not when they only shared the same direction (i.e., direction priming). Furthermore, using a model of visual processing, we argue that repetition priming effects are likely to arise as early as in the primary visual cortex. We conclude that although invisible images implying motion undergo some form of nonconscious processing, visual awareness is necessary to make inferences about motion direction. PMID:24706951

  15. Inferring the direction of implied motion depends on visual awareness.

    PubMed

    Faivre, Nathan; Koch, Christof

    2014-04-04

    Visual awareness of an event, object, or scene is, by essence, an integrated experience, whereby different visual features composing an object (e.g., orientation, color, shape) appear as a unified percept and are processed as a whole. Here, we tested in human observers whether perceptual integration of static motion cues depends on awareness by measuring the capacity to infer the direction of motion implied by a static visible or invisible image under continuous flash suppression. Using measures of directional adaptation, we found that visible but not invisible implied motion adaptors biased the perception of real motion probes. In a control experiment, we found that invisible adaptors implying motion primed the perception of subsequent probes when they were identical (i.e., repetition priming), but not when they only shared the same direction (i.e., direction priming). Furthermore, using a model of visual processing, we argue that repetition priming effects are likely to arise as early as in the primary visual cortex. We conclude that although invisible images implying motion undergo some form of nonconscious processing, visual awareness is necessary to make inferences about motion direction.

  16. From Objects to Landmarks: The Function of Visual Location Information in Spatial Navigation

    PubMed Central

    Chan, Edgar; Baumann, Oliver; Bellgrove, Mark A.; Mattingley, Jason B.

    2012-01-01

    Landmarks play an important role in guiding navigational behavior. A host of studies in the last 15 years has demonstrated that environmental objects can act as landmarks for navigation in different ways. In this review, we propose a parsimonious four-part taxonomy for conceptualizing object location information during navigation. We begin by outlining object properties that appear to be important for a landmark to attain salience. We then systematically examine the different functions of objects as navigational landmarks based on previous behavioral and neuroanatomical findings in rodents and humans. Evidence is presented showing that single environmental objects can function as navigational beacons, or act as associative or orientation cues. In addition, we argue that extended surfaces or boundaries can act as landmarks by providing a frame of reference for encoding spatial information. The present review provides a concise taxonomy of the use of visual objects as landmarks in navigation and should serve as a useful reference for future research into landmark-based spatial navigation. PMID:22969737

  17. Postdictive modulation of visual orientation.

    PubMed

    Kawabe, Takahiro

    2012-01-01

    The present study investigated how visual orientation is modulated by subsequent orientation inputs. Observers were presented a near-vertical Gabor patch as a target, followed by a left- or right-tilted second Gabor patch as a distracter in the spatial vicinity of the target. The task of the observers was to judge whether the target was right- or left-tilted (Experiment 1) or whether the target was vertical or not (Supplementary experiment). The judgment was biased toward the orientation of the distracter (the postdictive modulation of visual orientation). The judgment bias peaked when the target and distracter were temporally separated by 100 ms, indicating a specific temporal mechanism for this phenomenon. However, when the visibility of the distracter was reduced via backward masking, the judgment bias disappeared. On the other hand, the low-visibility distracter could still cause a simultaneous orientation contrast, indicating that the distracter orientation is still processed in the visual system (Experiment 2). Our results suggest that the postdictive modulation of visual orientation stems from spatiotemporal integration of visual orientation on the basis of a slow feature matching process.

  18. Computational model for perception of objects and motions.

    PubMed

    Yang, WenLu; Zhang, LiQing; Ma, LiBo

    2008-06-01

    Perception of objects and motions in the visual scene is one of the basic problems in the visual system. There exist 'What' and 'Where' pathways in the superior visual cortex, starting from the simple cells in the primary visual cortex. The former is able to perceive object properties such as form, color, and texture, and the latter perceives 'where', for example, the velocity and direction of spatial movement of objects. This paper explores brain-like computational architectures of visual information processing. We propose a visual perceptual model and a computational mechanism for training the perceptual model. The computational model is a three-layer network. The first layer is the input layer, which is used to receive the stimuli from natural environments. The second layer is designed for representing the internal neural information. The connections between the first layer and the second layer, called the receptive fields of neurons, are self-adaptively learned based on the principle of sparse neural representation. To this end, we introduce the Kullback-Leibler divergence as the measure of independence between neural responses and derive the learning algorithm based on minimizing the cost function. The proposed algorithm is applied to train the basis functions, namely receptive fields, which are localized, oriented, and band-pass. The resultant receptive fields of neurons in the second layer have characteristics resembling those of simple cells in the primary visual cortex. Based on these basis functions, we further construct the third layer for perception of what and where in the superior visual cortex. The proposed model is able to perceive objects and their motions with a high accuracy and strong robustness against additive noise. Computer simulation results in the final section show the feasibility of the proposed perceptual model and the high efficiency of the learning algorithm.
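
    The learning stage described above derives localized, oriented, band-pass receptive fields from natural input via a sparseness principle. The sketch below uses a generic sparse-coding objective (reconstruction error plus an L1 penalty, rather than the paper's Kullback-Leibler cost) to show how such basis functions can be learned; all parameter values and the training schedule are assumptions.

```python
import numpy as np

def train_sparse_basis(patches, n_basis=64, n_iter=200, lam=0.1, lr=0.05):
    """Learn receptive-field-like basis functions A by alternating a soft-threshold
    (ISTA) inference step for the codes S with a gradient step on A, minimizing
    ||X - A S||^2 + lam * |S|_1. A generic sketch, not the paper's KL objective."""
    d, n = patches.shape                      # d = patch dimension, n = number of patches
    rng = np.random.default_rng(0)
    A = rng.standard_normal((d, n_basis))
    A /= np.linalg.norm(A, axis=0)
    S = np.zeros((n_basis, n))
    for _ in range(n_iter):
        # sparse inference: one ISTA step per outer iteration
        step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 1e-9)
        G = A.T @ (A @ S - patches)
        S = np.sign(S - step * G) * np.maximum(np.abs(S - step * G) - step * lam, 0.0)
        # dictionary update: gradient step on the reconstruction error
        A -= lr * (A @ S - patches) @ S.T / n
        A /= np.linalg.norm(A, axis=0) + 1e-9
    return A

# patches: (patch_size**2, n_patches) array of whitened natural-image patches;
# the columns of the returned A come to resemble localized, oriented, band-pass filters.
```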

  19. Synaptic Mechanisms Generating Orientation Selectivity in the ON Pathway of the Rabbit Retina

    PubMed Central

    Venkataramani, Sowmya

    2016-01-01

    Neurons that signal the orientation of edges within the visual field have been widely studied in primary visual cortex. Much less is known about the mechanisms of orientation selectivity that arise earlier in the visual stream. Here we examine the synaptic and morphological properties of a subtype of orientation-selective ganglion cell in the rabbit retina. The receptive field has an excitatory ON center, flanked by excitatory OFF regions, a structure similar to simple cell receptive fields in primary visual cortex. Examination of the light-evoked postsynaptic currents in these ON-type orientation-selective ganglion cells (ON-OSGCs) reveals that synaptic input is mediated almost exclusively through the ON pathway. Orientation selectivity is generated by larger excitation for preferred relative to orthogonal stimuli, and conversely larger inhibition for orthogonal relative to preferred stimuli. Excitatory orientation selectivity arises in part from the morphology of the dendritic arbors. Blocking GABAA receptors reduces orientation selectivity of the inhibitory synaptic inputs and the spiking responses. Negative contrast stimuli in the flanking regions produce orientation-selective excitation in part by disinhibition of a tonic NMDA receptor-mediated input arising from ON bipolar cells. Comparison with earlier studies of OFF-type OSGCs indicates that diverse synaptic circuits have evolved in the retina to detect the orientation of edges in the visual input. SIGNIFICANCE STATEMENT A core goal for visual neuroscientists is to understand how neural circuits at each stage of the visual system extract and encode features from the visual scene. This study documents a novel type of orientation-selective ganglion cell in the retina and shows that the receptive field structure is remarkably similar to that of simple cells in primary visual cortex. However, the data indicate that, unlike in the cortex, orientation selectivity in the retina depends on the activity of inhibitory interneurons. The results further reveal the physiological basis for feature detection in the visual system, elucidate the synaptic mechanisms that generate orientation selectivity at an early stage of visual processing, and illustrate a novel role for NMDA receptors in retinal processing. PMID:26985041

  20. Synaptic Mechanisms Generating Orientation Selectivity in the ON Pathway of the Rabbit Retina.

    PubMed

    Venkataramani, Sowmya; Taylor, W Rowland

    2016-03-16

    Neurons that signal the orientation of edges within the visual field have been widely studied in primary visual cortex. Much less is known about the mechanisms of orientation selectivity that arise earlier in the visual stream. Here we examine the synaptic and morphological properties of a subtype of orientation-selective ganglion cell in the rabbit retina. The receptive field has an excitatory ON center, flanked by excitatory OFF regions, a structure similar to simple cell receptive fields in primary visual cortex. Examination of the light-evoked postsynaptic currents in these ON-type orientation-selective ganglion cells (ON-OSGCs) reveals that synaptic input is mediated almost exclusively through the ON pathway. Orientation selectivity is generated by larger excitation for preferred relative to orthogonal stimuli, and conversely larger inhibition for orthogonal relative to preferred stimuli. Excitatory orientation selectivity arises in part from the morphology of the dendritic arbors. Blocking GABAA receptors reduces orientation selectivity of the inhibitory synaptic inputs and the spiking responses. Negative contrast stimuli in the flanking regions produce orientation-selective excitation in part by disinhibition of a tonic NMDA receptor-mediated input arising from ON bipolar cells. Comparison with earlier studies of OFF-type OSGCs indicates that diverse synaptic circuits have evolved in the retina to detect the orientation of edges in the visual input. A core goal for visual neuroscientists is to understand how neural circuits at each stage of the visual system extract and encode features from the visual scene. This study documents a novel type of orientation-selective ganglion cell in the retina and shows that the receptive field structure is remarkably similar to that of simple cells in primary visual cortex. However, the data indicate that, unlike in the cortex, orientation selectivity in the retina depends on the activity of inhibitory interneurons. The results further reveal the physiological basis for feature detection in the visual system, elucidate the synaptic mechanisms that generate orientation selectivity at an early stage of visual processing, and illustrate a novel role for NMDA receptors in retinal processing. Copyright © 2016 the authors 0270-6474/16/363336-14$15.00/0.

  1. Visual search accelerates during adolescence.

    PubMed

    Burggraaf, Rudolf; van der Geest, Jos N; Frens, Maarten A; Hooge, Ignace T C

    2018-05-01

    We studied changes in visual-search performance and behavior during adolescence. Search performance was analyzed in terms of reaction time and response accuracy. Search behavior was analyzed in terms of the objects fixated and the duration of these fixations. A large group of adolescents (N = 140; age: 12-19 years; 47% female, 53% male) participated in a visual-search experiment in which their eye movements were recorded with an eye tracker. The experiment consisted of 144 trials (50% with a target present), and participants had to decide whether a target was present. Each trial showed a search display with 36 Gabor patches placed on a hexagonal grid. The target was a vertically oriented element with a high spatial frequency. Nontargets differed from the target in spatial frequency, orientation, or both. Search performance and behavior changed during adolescence; with increasing age, fixation duration and reaction time decreased. Response accuracy, number of fixations, and selection of elements to fixate upon did not change with age. Thus, the speed of foveal discrimination increases with age, while the efficiency of peripheral selection does not change. We conclude that the way visual information is gathered does not change during adolescence, but the processing of visual information becomes faster.

  2. Software Re-Engineering of the Human Factors Analysis and Classification System - (Maintenance Extension) Using Object Oriented Methods in a Microsoft Environment

    DTIC Science & Technology

    2001-09-01

    replication) -- all from Visual Basic and VBA. In fact, we found that the SQL Server engine actually had a plethora of options, most formidable of... 2002, the new SQL Server 2000 database engine, and Microsoft Visual Basic.NET. This thesis describes our use of the Spiral Development Model to... versions of Microsoft products? Specifically, the pending release of Microsoft Office 2002, the new SQL Server 2000 database engine, and Microsoft

  3. A GUI visualization system for airborne lidar image data to reconstruct 3D city model

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiyuki; Koizumi, Kohei

    2015-10-01

    A visualization toolbox system with graphical user interfaces (GUIs) was developed for the analysis of LiDAR point cloud data, as a compound object-oriented widget application in IDL (Interactive Data Language). The main features of our system include file input and output, conversion of ASCII-formatted LiDAR point cloud data into LiDAR image data whose pixel values correspond to the altitude measured by LiDAR, visualization of 2D/3D images at various processing steps, and automatic reconstruction of a 3D city model. The performance and advantages of our graphical user interface (GUI) visualization system for LiDAR data are demonstrated.
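    The core conversion step described above (ASCII point records to an altitude-valued image) amounts to a simple gridding routine. The toolbox itself is written in IDL; the Python sketch below, with a synthetic point cloud and an assumed 1 m cell size, only illustrates the general idea of keeping the highest return per cell.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an ASCII LiDAR file: columns are x, y, z in metres,
# with flat ground and one 12 m high block playing the role of a building.
n = 50_000
xyz = np.column_stack([rng.uniform(0, 200, n),
                       rng.uniform(0, 100, n),
                       rng.uniform(0.0, 0.3, n)])
block = (xyz[:, 0] > 80) & (xyz[:, 0] < 120) & (xyz[:, 1] > 30) & (xyz[:, 1] < 60)
xyz[block, 2] += 12.0

def points_to_altitude_image(xyz, cell=1.0):
    """Rasterize points so each pixel holds the highest altitude
    (a simple digital surface model) of the points falling in its cell."""
    cols = np.floor(xyz[:, 0] / cell).astype(int)
    rows = np.floor(xyz[:, 1] / cell).astype(int)
    img = np.full((rows.max() + 1, cols.max() + 1), -np.inf)
    np.maximum.at(img, (rows, cols), xyz[:, 2])   # keep the max z per cell
    img[np.isinf(img)] = np.nan                   # cells without any return
    return img

dsm = points_to_altitude_image(xyz)
print("image size:", dsm.shape, "max altitude:", round(float(np.nanmax(dsm)), 2))
```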

  4. Sensitivity Profile for Orientation Selectivity in the Visual Cortex of Goggle-Reared Mice

    PubMed Central

    Yoshida, Takamasa; Ozawa, Katsuya; Tanaka, Shigeru

    2012-01-01

    It has been widely accepted that ocular dominance in the responses of visual cortical neurons can change depending on visual experience in a postnatal period. However, experience-dependent plasticity for orientation selectivity, which is another important response property of visual cortical neurons, is not yet fully understood. To address this issue, using intrinsic signal imaging and two-photon calcium imaging we attempted to observe the alteration of orientation selectivity in the visual cortex of juvenile and adult mice reared with head-mounted goggles, through which animals can experience only the vertical orientation. After one week of goggle rearing, the density of neurons optimally responding to the exposed orientation increased, while that responding to unexposed orientations decreased. These changes can be interpreted as a reallocation of preferred orientations among visually responsive neurons. Our obtained sensitivity profile for orientation selectivity showed a marked peak at 5 weeks and sustained elevation at 12 weeks and later. These features indicate the existence of a critical period between 4 and 7 weeks and residual orientation plasticity in adult mice. The presence of a dip in the sensitivity profile at 10 weeks suggests that different mechanisms are involved in orientation plasticity in childhood and adulthood. PMID:22792390

  5. Predicting 2D target velocity cannot help 2D motion integration for smooth pursuit initiation.

    PubMed

    Montagnini, Anna; Spering, Miriam; Masson, Guillaume S

    2006-12-01

    Smooth pursuit eye movements reflect the temporal dynamics of bidimensional (2D) visual motion integration. When tracking a single, tilted line, initial pursuit direction is biased toward unidimensional (1D) edge motion signals, which are orthogonal to the line orientation. Over 200 ms, tracking direction is slowly corrected to finally match the 2D object motion during steady-state pursuit. We now show that repetition of line orientation and/or motion direction does not eliminate the transient tracking direction error nor change the time course of pursuit correction. Nonetheless, multiple successive presentations of a single orientation/direction condition elicit robust anticipatory pursuit eye movements that always go in the 2D object motion direction not the 1D edge motion direction. These results demonstrate that predictive signals about target motion cannot be used for an efficient integration of ambiguous velocity signals at pursuit initiation.

  6. Orientation-Cue Invariant Population Responses to Contrast-Modulated and Phase-Reversed Contour Stimuli in Macaque V1 and V2

    PubMed Central

    An, Xu; Gong, Hongliang; Yin, Jiapeng; Wang, Xiaochun; Pan, Yanxia; Zhang, Xian; Lu, Yiliang; Yang, Yupeng; Toth, Zoltan; Schiessl, Ingo; McLoughlin, Niall; Wang, Wei

    2014-01-01

    Visual scenes can be readily decomposed into a variety of oriented components, the processing of which is vital for object segregation and recognition. In primate V1 and V2, most neurons have small spatio-temporal receptive fields responding selectively to oriented luminance contours (first order), while only a subgroup of neurons signal non-luminance defined contours (second order). So how is the orientation of second-order contours represented at the population level in macaque V1 and V2? Here we compared the population responses in macaque V1 and V2 to two types of second-order contour stimuli generated either by modulation of contrast or phase reversal with those to first-order contour stimuli. Using intrinsic signal optical imaging, we found that the orientation of second-order contour stimuli was represented invariantly in the orientation columns of both macaque V1 and V2. A physiologically constrained spatio-temporal energy model of V1 and V2 neuronal populations could reproduce all the recorded population responses. These findings suggest that, at the population level, the primate early visual system processes the orientation of second-order contours initially through a linear spatio-temporal filter mechanism. Our results of population responses to different second-order contour stimuli support the idea that the orientation maps in primate V1 and V2 can be described as a spatial-temporal energy map. PMID:25188576

  7. A Research Agenda for Service-Oriented Architecture (SOA): Maintenance and Evolution of Service-Oriented Systems

    DTIC Science & Technology

    2010-03-01

    service consumers, and infrastructure. Techniques from any iterative and incremental software development methodology followed by the organization... Service-Oriented Architecture Environment (CMU/SEI-2008-TN-008). Software Engineering Institute, Carnegie Mellon University, 2008. http://www.sei.cmu.edu... “Integrating Legacy Software into a Service Oriented Architecture.” Proceedings of the 10th European Conference on Software Maintenance (CSMR 2006). Bari

  8. Acquired prior knowledge modulates audiovisual integration.

    PubMed

    Van Wanrooij, Marc M; Bremen, Peter; John Van Opstal, A

    2010-05-01

    Orienting responses to audiovisual events in the environment can benefit markedly by the integration of visual and auditory spatial information. However, logically, audiovisual integration would only be considered successful for stimuli that are spatially and temporally aligned, as these would be emitted by a single object in space-time. As humans do not have prior knowledge about whether novel auditory and visual events do indeed emanate from the same object, such information needs to be extracted from a variety of sources. For example, expectation about alignment or misalignment could modulate the strength of multisensory integration. If evidence from previous trials would repeatedly favour aligned audiovisual inputs, the internal state might also assume alignment for the next trial, and hence react to a new audiovisual event as if it were aligned. To test for such a strategy, subjects oriented a head-fixed pointer as fast as possible to a visual flash that was consistently paired, though not always spatially aligned, with a co-occurring broadband sound. We varied the probability of audiovisual alignment between experiments. Reaction times were consistently lower in blocks containing only aligned audiovisual stimuli than in blocks also containing pseudorandomly presented spatially disparate stimuli. Results demonstrate dynamic updating of the subject's prior expectation of audiovisual congruency. We discuss a model of prior probability estimation to explain the results.

  9. An electrophysiological study of the object-based correspondence effect: is the effect triggered by an intended grasping action?

    PubMed

    Lien, Mei-Ching; Jardin, Elliott; Proctor, Robert W

    2013-11-01

    We examined Goslin, Dixon, Fischer, Cangelosi, and Ellis's (Psychological Science 23:152-157, 2012) claim that the object-based correspondence effect (i.e., faster keypress responses when the orientation of an object's graspable part corresponds with the response location than when it does not) is the result of object-based attention (vision-action binding). In Experiment 1, participants determined the category of a centrally located object (kitchen utensil vs. tool), as in Goslin et al.'s study. The handle orientation (left vs. right) did or did not correspond with the response location (left vs. right). We found no correspondence effect on the response times (RTs) for either category. The effect was also not evident in the P1 and N1 components of the event-related potentials, which are thought to reflect the allocation of early visual attention. This finding was replicated in Experiment 2 for centrally located objects, even when the object was presented 45 times (33 more times than in Exp. 1). Critically, the correspondence effects on RTs, P1s, and N1s emerged only when the object was presented peripherally, so that the object handle was clearly located to the left or right of fixation. Experiment 3 provided further evidence that the effect was observed only for the base-centered objects, in which the handle was clearly positioned to the left or right of center. These findings contradict those of Goslin et al. and provide no evidence that an intended grasping action modulates visual attention. Instead, the findings support the spatial-coding account of the object-based correspondence effect.

  10. Looking away from faces: influence of high-level visual processes on saccade programming.

    PubMed

    Morand, Stéphanie M; Grosbras, Marie-Hélène; Caldara, Roberto; Harvey, Monika

    2010-03-30

    Human faces capture attention more than other visual stimuli. Here we investigated whether such face-specific biases rely on automatic (involuntary) or voluntary orienting responses. To this end, we used an anti-saccade paradigm, which requires the ability to inhibit a reflexive automatic response and to generate a voluntary saccade in the opposite direction of the stimulus. To control for potential low-level confounds in the eye-movement data, we manipulated the high-level visual properties of the stimuli while normalizing their global low-level visual properties. Eye movements were recorded in 21 participants who performed either pro- or anti-saccades to a face, car, or noise pattern, randomly presented to the left or right of a fixation point. For each trial, a symbolic cue instructed the observer to generate either a pro-saccade or an anti-saccade. We report a significant increase in anti-saccade error rates for faces compared to cars and noise patterns, as well as faster pro-saccades to faces and cars in comparison to noise patterns. These results indicate that human faces induce stronger involuntary orienting responses than other visual objects, i.e., responses that are beyond the control of the observer. Importantly, this involuntary processing cannot be accounted for by global low-level visual factors.

  11. Retrospective cues based on object features improve visual working memory performance in older adults.

    PubMed

    Gilchrist, Amanda L; Duarte, Audrey; Verhaeghen, Paul

    2016-01-01

    Research with younger adults has shown that retrospective cues can be used to orient top-down attention toward relevant items in working memory. We examined whether older adults could take advantage of these cues to improve memory performance. Younger and older adults were presented with visual arrays of five colored shapes; during maintenance, participants were presented either with an informative cue based on an object feature (here, object shape or color) that would be probed, or with an uninformative, neutral cue. Although older adults were less accurate overall, both age groups benefited from the presentation of an informative, feature-based cue relative to a neutral cue. Surprisingly, we also observed differences in the effectiveness of shape versus color cues and their effects upon post-cue memory load. These results suggest that older adults can use top-down attention to remove irrelevant items from visual working memory, provided that task-relevant features function as cues.

  12. Speed skills: measuring the visual speed analyzing properties of primate MT neurons.

    PubMed

    Perrone, J A; Thiele, A

    2001-05-01

    Knowing the direction and speed of moving objects is often critical for survival. However, it is poorly understood how cortical neurons process the speed of image movement. Here we tested MT neurons using moving sine-wave gratings of different spatial and temporal frequencies, and mapped out the neurons' spatiotemporal frequency response profiles. The maps typically had oriented ridges of peak sensitivity as expected for speed-tuned neurons. The preferred speed estimate, derived from the orientation of the maps, corresponded well to the preferred speed when moving bars were presented. Thus, our data demonstrate that MT neurons are truly sensitive to the object speed. These findings indicate that MT is not only a key structure in the analysis of direction of motion and depth perception, but also in the analysis of object speed.
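    The speed readout mentioned above follows from a standard identity (stated here as general background, not taken from the paper): a grating of spatial frequency $f_s$ drifting at speed $v$ modulates at temporal frequency $f_t = v f_s$, so a speed-tuned cell responds along a ridge of constant $f_t/f_s$ in the spatiotemporal frequency plane, and the ridge orientation yields the preferred speed:

$$ v_{\mathrm{pref}} = \frac{f_t^{\mathrm{pref}}}{f_s^{\mathrm{pref}}} $$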

  13. Multisensory Integration and Internal Models for Sensing Gravity Effects in Primates

    PubMed Central

    Lacquaniti, Francesco; La Scaleia, Barbara; Maffei, Vincenzo

    2014-01-01

    Gravity is crucial for spatial perception, postural equilibrium, and movement generation. The vestibular apparatus is the main sensory system involved in monitoring gravity. Hair cells in the vestibular maculae respond to gravitoinertial forces, but they cannot distinguish between linear accelerations and changes of head orientation relative to gravity. The brain deals with this sensory ambiguity (which can cause some lethal airplane accidents) by combining several cues with the otolith signals: angular velocity signals provided by the semicircular canals, proprioceptive signals from muscles and tendons, visceral signals related to gravity, and visual signals. In particular, vision provides both static and dynamic signals about body orientation relative to the vertical, but it poorly discriminates arbitrary accelerations of moving objects. However, we are able to visually detect the specific acceleration of gravity since early infancy. This ability depends on the fact that gravity effects are stored in brain regions which integrate visual, vestibular, and neck proprioceptive signals and combine this information with an internal model of gravity effects. PMID:25061610

  14. Multisensory integration and internal models for sensing gravity effects in primates.

    PubMed

    Lacquaniti, Francesco; Bosco, Gianfranco; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka

    2014-01-01

    Gravity is crucial for spatial perception, postural equilibrium, and movement generation. The vestibular apparatus is the main sensory system involved in monitoring gravity. Hair cells in the vestibular maculae respond to gravitoinertial forces, but they cannot distinguish between linear accelerations and changes of head orientation relative to gravity. The brain deals with this sensory ambiguity (which can cause some lethal airplane accidents) by combining several cues with the otolith signals: angular velocity signals provided by the semicircular canals, proprioceptive signals from muscles and tendons, visceral signals related to gravity, and visual signals. In particular, vision provides both static and dynamic signals about body orientation relative to the vertical, but it poorly discriminates arbitrary accelerations of moving objects. However, we are able to visually detect the specific acceleration of gravity since early infancy. This ability depends on the fact that gravity effects are stored in brain regions which integrate visual, vestibular, and neck proprioceptive signals and combine this information with an internal model of gravity effects.

  15. An object-based visual attention model for robotic applications.

    PubMed

    Yu, Yuanlong; Mann, George K I; Gosine, Raymond G

    2010-10-01

    By extending the integrated competition hypothesis, this paper presents an object-based visual attention model, which selects one object of interest using low-dimensional features, so that visual perception starts from a fast attentional selection procedure. The proposed attention model involves seven modules: learning of object representations stored in a long-term memory (LTM), preattentive processing, top-down biasing, bottom-up competition, mediation between top-down and bottom-up ways, generation of saliency maps, and perceptual completion processing. It works in two phases: a learning phase and an attending phase. In the learning phase, the corresponding object representation is trained statistically when one object is attended. A dual-coding object representation consisting of local and global codings is proposed. Intensity, color, and orientation features are used to build the local coding, and a contour feature is employed to constitute the global coding. In the attending phase, the model first preattentively segments the visual field into discrete proto-objects using Gestalt rules. If a task-specific object is given, the model recalls the corresponding representation from LTM and deduces the task-relevant feature(s) to evaluate top-down biases. The mediation between automatic bottom-up competition and conscious top-down biasing is then performed to yield a location-based saliency map. By combining location-based saliency within each proto-object, proto-object-based saliency is evaluated. The most salient proto-object is selected for attention, and it is finally put into the perceptual completion processing module to yield a complete object region. This model has been applied to distinct robotic tasks: detection of task-specific stationary and moving objects. Experimental results under different conditions are shown to validate this model.
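    The final selection stage described above (pooling location-based saliency within each proto-object and attending the winner) reduces to a small amount of array bookkeeping. The Python fragment below is only an illustrative sketch of that data flow; the random saliency map, the hand-drawn proto-object label map and the mean-pooling rule are assumptions, not the model's actual feature and competition machinery.

```python
import numpy as np

rng = np.random.default_rng(42)

H, W = 60, 80
saliency = rng.random((H, W))            # location-based saliency map in [0, 1]
labels = np.zeros((H, W), dtype=int)     # 0 = background, 1..K = proto-objects
labels[10:25, 10:30] = 1
labels[35:55, 40:70] = 2
labels[5:15, 55:75] = 3

def select_proto_object(saliency, labels):
    """Pool saliency within each proto-object and return the most salient one."""
    ids = [k for k in np.unique(labels) if k != 0]
    pooled = {int(k): float(saliency[labels == k].mean()) for k in ids}
    winner = max(pooled, key=pooled.get)
    return winner, pooled

winner, pooled = select_proto_object(saliency, labels)
print("proto-object saliency:", {k: round(v, 3) for k, v in pooled.items()})
print("attended proto-object:", winner)
```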

  16. Distributive Education Resource Supplement to the Consumer Education Curriculum Guide for Ohio.

    ERIC Educational Resources Information Center

    Ohio State Dept. of Education, Columbus. Div. of Vocational Education.

    The activities contained in the guide are designed to supplement the distributive education curriculum with information that will prepare the student to become a more informed, skillful employee and help the marketing career oriented student better visualize his customer's buying problems. Four overall objectives are stated. The guide is organized…

  17. Salience Is Only Briefly Represented: Evidence from Probe-Detection Performance

    ERIC Educational Resources Information Center

    Donk, Mieke; Soesman, Leroy

    2010-01-01

    Salient objects in the visual field tend to capture attention. The present study aimed to examine the time-course of salience effects using a probe-detection task. Eight experiments investigated how the salience of different orientation singletons affected probe reaction time as a function of stimulus onset asynchrony (SOA) between the…

  18. The Influence of Reading Expertise in Mirror-Letter Perception: Evidence from Beginning and Expert Readers

    ERIC Educational Resources Information Center

    Dunabeitia, Jon Andoni; Dimitropoulou, María; Estevez, Adelina; Carreiras, Manuel

    2013-01-01

    The visual word recognition system recruits neuronal systems originally developed for object perception which are characterized by orientation insensitivity to mirror reversals. It has been proposed that during reading acquisition beginning readers have to "unlearn" this natural tolerance to mirror reversals in order to efficiently…

  19. The Effectiveness of Screencasts and Cognitive Tools as Scaffolding for Novice Object-Oriented Programmers

    ERIC Educational Resources Information Center

    Lee, Mark J. W.; Pradhan, Sunam; Dalgarno, Barney

    2008-01-01

    Modern information technology and computer science curricula employ a variety of graphical tools and development environments to facilitate student learning of introductory programming concepts and techniques. While the provision of interactive features and the use of visualization can enhance students' understanding and assist them in grasping…

  20. Transition to intensive care nursing: establishing a starting point.

    PubMed

    Boyle, Martin; Butcher, Rand; Conyers, Vicki; Kendrick, Tina; MacNamara, Mary; Lang, Susie

    2008-11-01

    There is a shortage of intensive care (IC) nurses. A supported transition to IC nursing has been identified as a key strategy for recruitment and retention. In 2004 a discussion document relating to the transition of IC nurses was presented to the New South Wales (NSW) Chief Nursing Officer (CNO). A workshop was held with key stakeholders and a Steering Group was established to develop a state-wide transition to IC nursing program. The aims were to survey orientation programs and educational resources and to develop definitions, goals, learning objectives and clinical competencies relating to transition to IC nursing practice. A questionnaire and a draft document of definitions, target group, goals, learning objectives and clinical competencies for IC transition were distributed to 43 NSW IC units (ICUs). An iterative process of anonymous feedback and modification was undertaken to establish agreement on content. Responses were received from 29 units (a return rate of 67%). The survey of educational resources indicated that ICUs had access to educational support, but there was evidence that a common standard or definition for "orientation" or "transition" was lacking. The definitions, target group, goals and competency statements from the draft document were accepted with minor editorial change. Seventeen learning objectives or psychomotor skills were modified and an additional 19 were added to the draft as a result of the process. This work has established valid definitions, goals, learning objectives and clinical competencies that describe transition to intensive care nursing.

  1. OOMM--Object-Oriented Matrix Modelling: an instrument for the integration of the Brasilia Regional Health Information System.

    PubMed

    Cammarota, M; Huppes, V; Gaia, S; Degoulet, P

    1998-01-01

    The development of Health Information Systems is largely determined by the establishment of the underlying information models. An Object-Oriented Matrix Model (OOMM) is described whose aim is to facilitate the integration of the overall health system. The model is based on information modules named micro-databases that are structured in a three-dimensional network: planning, health structures and information systems. The modelling tool has been developed as a layer on top of a relational database system. A visual browser facilitates the development and maintenance of the information model. The modelling approach has been applied to the Brasilia University Hospital since 1991. The extension of the modelling approach to the Brasilia regional health system is considered.

  2. Getting a grip: different actions and visual guidance of the thumb and finger in precision grasping.

    PubMed

    Melmoth, Dean R; Grant, Simon

    2012-10-01

    We manipulated the visual information available for grasping to examine what is visually guided when subjects get a precision grip on a common class of object (upright cylinders). In Experiment 1, objects (2 sizes) were placed at different eccentricities to vary the relative proximity to the participant's (n = 6) body of their thumb and finger contact positions in the final grip orientations, with vision available throughout or only for movement programming. Thumb trajectories were straighter and less variable than finger paths, and the thumb normally made initial contact with the objects at a relatively invariant landing site, but consistent thumb first-contacts were disrupted without visual guidance. Finger deviations were more affected by the object's properties and increased when vision was unavailable after movement onset. In Experiment 2, participants (n = 12) grasped 'glow-in-the-dark' objects wearing different luminous gloves in which the whole hand was visible or the thumb or the index finger was selectively occluded. Grip closure times were prolonged and thumb first-contacts disrupted when subjects could not see their thumb, whereas occluding the finger resulted in wider grips at contact because this digit remained distant from the object. Results were together consistent with visual feedback guiding the thumb in the period just prior to contacting the object, with the finger more involved in opening the grip and avoiding collision with the opposite contact surface. As people can overtly fixate only one object contact point at a time, we suggest that selecting one digit for online guidance represents an optimal strategy for initial grip placement. Other grasping tasks, in which the finger appears to be used for this purpose, are discussed.

  3. Seeing without knowing: task relevance dissociates between visual awareness and recognition.

    PubMed

    Eitam, Baruch; Shoval, Roy; Yeshurun, Yaffa

    2015-03-01

    We demonstrate that task relevance dissociates between visual awareness and knowledge activation to create a state of seeing without knowing: visual awareness of familiar stimuli without recognizing them. We rely on the fact that in order to experience a Kanizsa illusion, participants must be aware of its inducers. While people can indicate the orientation of the illusory rectangle with great ease (signifying that they have consciously experienced the illusion's inducers), almost 30% of them could not report the inducers' color. Thus, people can see, in the sense of phenomenally experiencing, but not know, in the sense of recognizing what the object is or activating appropriate knowledge about it. Experiment 2 tests whether relevance-based selection operates within objects and shows that, contrary to the pattern of results found with features of different objects in our previous studies and replicated in Experiment 1, selection does not occur when both relevant and irrelevant features belong to the same object. We discuss these findings in relation to the existing theories of consciousness and to attention and inattentional blindness, and the role of cognitive load, object-based attention, and the use of self-reports as measures of awareness. © 2015 New York Academy of Sciences.

  4. Design of Restoration Method Based on Compressed Sensing and TwIST Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Fei; Piao, Yan

    2018-04-01

    In order to improve the subjective and objective quality of degraded images at low sampling rates, save storage space, and reduce computational complexity at the same time, this paper proposes a joint restoration algorithm combining compressed sensing and two-step iterative shrinkage/thresholding (TwIST). The algorithm applies TwIST, originally used in image restoration, within the compressed sensing framework. A small amount of sparse high-frequency information is first obtained in the frequency domain, and the TwIST algorithm, based on compressed sensing theory, is then used to accurately reconstruct the high-frequency image. The experimental results show that the proposed algorithm achieves better subjective visual effects and objective quality while accurately restoring the degraded images.
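    As a rough illustration of the reconstruction machinery named in the abstract, the sketch below runs a TwIST-style two-step iterative shrinkage/thresholding loop on a toy compressed-sensing problem. The sensing matrix, the soft-threshold (l1) regularizer, the step parameters alpha and beta, and the monotone fallback are generic textbook choices, not the parameters or the frequency-domain pipeline of the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Soft thresholding, the proximal operator of the l1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def objective(x, y, A, lam):
    r = y - A @ x
    return 0.5 * r @ r + lam * np.abs(x).sum()

def twist(y, A, lam, alpha=1.8, beta=1.0, n_iter=300):
    """Two-step IST: x_{t+1} = (1-alpha)x_{t-1} + (alpha-beta)x_t + beta*Gamma(x_t),
    with a fallback to a plain IST step whenever the objective would increase."""
    At = A.T
    x_prev = np.zeros(A.shape[1])
    x_curr = soft_threshold(At @ y, lam)              # one IST step to start
    for _ in range(n_iter):
        gamma = soft_threshold(x_curr + At @ (y - A @ x_curr), lam)
        x_new = (1 - alpha) * x_prev + (alpha - beta) * x_curr + beta * gamma
        if objective(x_new, y, A, lam) > objective(x_curr, y, A, lam):
            x_new = gamma                             # monotone safeguard
        x_prev, x_curr = x_curr, x_new
    return x_curr

# Toy problem: recover an 8-sparse signal from 96 random measurements.
rng = np.random.default_rng(0)
n, m, k = 256, 96, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, 2)                             # TwIST assumes ||A|| <= 1
y = A @ x_true
lam = 0.05 * np.abs(A.T @ y).max()
x_hat = twist(y, A, lam)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```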

  5. Development of cortical orientation selectivity in the absence of visual experience with contour

    PubMed Central

    Hussain, Shaista; Weliky, Michael

    2011-01-01

    Visual cortical neurons are selective for the orientation of lines, and the full development of this selectivity requires natural visual experience after eye opening. Here we examined whether this selectivity develops without seeing lines and contours. Juvenile ferrets were reared in a dark room and visually trained by being shown a movie of flickering, sparse spots. We found that despite the lack of contour visual experience, the cortical neurons of these ferrets developed strong orientation selectivity and exhibited simple-cell receptive fields. This finding suggests that overt contour visual experience is unnecessary for the maturation of orientation selectivity and is inconsistent with the computational models that crucially require the visual inputs of lines and contours for the development of orientation selectivity. We propose that a correlation-based model supplemented with a constraint on synaptic strength dynamics is able to account for our experimental result. PMID:21753023

  6. Iterated local search algorithm for solving the orienteering problem with soft time windows.

    PubMed

    Aghezzaf, Brahim; Fahim, Hassan El

    2016-01-01

    In this paper we study the orienteering problem with time windows (OPTW) and the impact of relaxing the time windows on the profit collected by the vehicle. The relaxation adopted in the orienteering problem with soft time windows (OPSTW) studied in this research is a late-service relaxation that allows late services to customers with a linear penalty. We solve this problem heuristically by considering a hybrid iterated local search. The results of the computational study show that the proposed approach is able to achieve promising solutions on the OPTW test instances available in the literature; one new best solution is found. On the newly generated test instances of the OPSTW, the results show that the profit collected by the OPSTW is better than the profit collected by the OPTW.
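    Iterated local search itself is a simple skeleton: repeatedly perturb the incumbent route, re-apply a local search, and accept the result if it is at least as good. The sketch below applies that skeleton to a plain orienteering problem (maximize collected profit within a travel-time budget); the random instance, the greedy-insertion local search, the segment-removal perturbation and the omission of (soft) time windows are simplifying assumptions rather than the authors' hybrid operators.

```python
import math
import random

random.seed(1)

N = 15
coords = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(N)]
profit = [0] + [random.randint(5, 30) for _ in range(N - 1)]   # node 0 is the depot
TMAX = 220.0                                                   # travel-time budget

def dist(i, j):
    (x1, y1), (x2, y2) = coords[i], coords[j]
    return math.hypot(x1 - x2, y1 - y2)

def tour_time(route):
    path = [0] + route + [0]
    return sum(dist(a, b) for a, b in zip(path, path[1:]))

def tour_profit(route):
    return sum(profit[i] for i in route)

def local_search(route):
    """Greedily insert the unvisited node with the best profit-per-extra-time
    ratio at its cheapest feasible position, until nothing more fits."""
    route = route[:]
    while True:
        best = None
        for v in range(1, N):
            if v in route:
                continue
            for pos in range(len(route) + 1):
                cand = route[:pos] + [v] + route[pos:]
                if tour_time(cand) <= TMAX:
                    extra = tour_time(cand) - tour_time(route)
                    score = profit[v] / (extra + 1e-9)
                    if best is None or score > best[0]:
                        best = (score, cand)
        if best is None:
            return route
        route = best[1]

def perturb(route, k=3):
    """Remove a random segment of up to k consecutive visits."""
    if not route:
        return route
    i = random.randrange(len(route))
    return route[:i] + route[i + k:]

def iterated_local_search(n_iter=60):
    best = current = local_search([])
    for _ in range(n_iter):
        candidate = local_search(perturb(current))
        if tour_profit(candidate) >= tour_profit(current):   # accept ties to keep moving
            current = candidate
        if tour_profit(current) > tour_profit(best):
            best = current
    return best

route = iterated_local_search()
print("profit:", tour_profit(route),
      "time:", round(tour_time(route), 1),
      "route:", route)
```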

  7. An actuator extension transformation for a motion simulator and an inverse transformation applying Newton-Raphson's method

    NASA Technical Reports Server (NTRS)

    Dieudonne, J. E.

    1972-01-01

    A set of equations which transform position and angular orientation of the centroid of the payload platform of a six-degree-of-freedom motion simulator into extensions of the simulator's actuators has been derived and is based on a geometrical representation of the system. An iterative scheme, Newton-Raphson's method, has been successfully used in a real time environment in the calculation of the position and angular orientation of the centroid of the payload platform when the magnitude of the actuator extensions is known. Sufficient accuracy is obtained by using only one Newton-Raphson iteration per integration step of the real time environment.
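    The abstract describes two transforms: a closed-form map from platform pose to actuator extensions, and an iterative Newton-Raphson inverse that recovers the pose when the extensions are known (the report notes that one iteration per real-time integration step gives sufficient accuracy). The Python sketch below reproduces that structure for a generic six-actuator platform; the anchor-point geometry, the roll-pitch-yaw convention and the finite-difference Jacobian are illustrative assumptions, not the report's equations.

```python
import numpy as np

def rot(roll, pitch, yaw):
    """Rotation matrix from roll/pitch/yaw (x-y-z convention)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# Illustrative anchor points (metres): base and platform joints on circles.
ang_b = np.deg2rad([0, 60, 120, 180, 240, 300])
ang_p = ang_b + np.deg2rad(30)
BASE = np.stack([2.0 * np.cos(ang_b), 2.0 * np.sin(ang_b), np.zeros(6)], axis=1)
PLAT = np.stack([1.2 * np.cos(ang_p), 1.2 * np.sin(ang_p), np.zeros(6)], axis=1)

def actuator_lengths(pose):
    """Forward transform: pose = (x, y, z, roll, pitch, yaw) -> six leg lengths."""
    t = np.asarray(pose[:3], dtype=float)
    R = rot(*pose[3:])
    tips = PLAT @ R.T + t                        # platform joints in base coordinates
    return np.linalg.norm(tips - BASE, axis=1)

def pose_from_lengths(lengths, pose0, n_iter=10, eps=1e-6):
    """Inverse transform via Newton-Raphson with a finite-difference Jacobian."""
    pose = np.array(pose0, dtype=float)
    for _ in range(n_iter):
        f = actuator_lengths(pose) - lengths
        if np.linalg.norm(f) < 1e-10:
            break
        J = np.zeros((6, 6))
        for j in range(6):                       # numerical Jacobian, column by column
            d = np.zeros(6)
            d[j] = eps
            J[:, j] = (actuator_lengths(pose + d) - actuator_lengths(pose - d)) / (2 * eps)
        pose -= np.linalg.solve(J, f)            # Newton-Raphson update
    return pose

# Round trip: pick a pose, compute leg lengths, then recover the pose.
true_pose = np.array([0.1, -0.05, 2.0, np.deg2rad(3), np.deg2rad(-2), np.deg2rad(5)])
lengths = actuator_lengths(true_pose)
recovered = pose_from_lengths(lengths, pose0=[0, 0, 2.0, 0, 0, 0])
print(np.round(recovered - true_pose, 8))        # should be ~zero
```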

  8. Language-Mediated Visual Orienting Behavior in Low and High Literates

    PubMed Central

    Huettig, Falk; Singh, Niharika; Mishra, Ramesh Kumar

    2011-01-01

    The influence of formal literacy on spoken language-mediated visual orienting was investigated by using a simple look and listen task which resembles every day behavior. In Experiment 1, high and low literates listened to spoken sentences containing a target word (e.g., “magar,” crocodile) while at the same time looking at a visual display of four objects (a phonological competitor of the target word, e.g., “matar,” peas; a semantic competitor, e.g., “kachuwa,” turtle, and two unrelated distractors). In Experiment 2 the semantic competitor was replaced with another unrelated distractor. Both groups of participants shifted their eye gaze to the semantic competitors (Experiment 1). In both experiments high literates shifted their eye gaze toward phonological competitors as soon as phonological information became available and moved their eyes away as soon as the acoustic information mismatched. Low literates in contrast only used phonological information when semantic matches between spoken word and visual referent were not present (Experiment 2) but in contrast to high literates these phonologically mediated shifts in eye gaze were not closely time-locked to the speech input. These data provide further evidence that in high literates language-mediated shifts in overt attention are co-determined by the type of information in the visual environment, the timing of cascaded processing in the word- and object-recognition systems, and the temporal unfolding of the spoken language. Our findings indicate that low literates exhibit a similar cognitive behavior but instead of participating in a tug-of-war among multiple types of cognitive representations, word–object mapping is achieved primarily at the semantic level. If forced, for instance by a situation in which semantic matches are not present (Experiment 2), low literates may on occasion have to rely on phonological information but do so in a much less proficient manner than their highly literate counterparts. PMID:22059083

  9. Second Iteration of Photogrammetric Pipeline to Enhance the Accuracy of Image Pose Estimation

    NASA Astrophysics Data System (ADS)

    Nguyen, T. G.; Pierrot-Deseilligny, M.; Muller, J.-M.; Thom, C.

    2017-05-01

    In the classical photogrammetric processing pipeline, automatic tie point extraction plays a key role in the quality of the achieved results. The image tie points are crucial to pose estimation and have a significant influence on the precision of the calculated orientation parameters. Therefore, both relative and absolute orientations of the 3D model can be affected. By improving the precision of image tie point measurement, one can enhance the quality of image orientation. The quality of image tie points is influenced by several factors, such as multiplicity, measurement precision, and distribution in 2D images as well as in 3D scenes. In complex acquisition scenarios such as indoor applications and oblique aerial images, tie point extraction is limited when only image information can be exploited. Hence, we propose here a method which improves the precision of pose estimation in complex scenarios by adding a second iteration to the classical processing pipeline. The result of a first iteration is used as a priori information to guide the extraction of new tie points with better quality. Evaluated with multiple case studies, the proposed method shows its validity and its high potential for precision improvement.

  10. The impact of interference on short-term memory for visual orientation.

    PubMed

    Rademaker, Rosanne L; Bloem, Ilona M; De Weerd, Peter; Sack, Alexander T

    2015-12-01

    Visual short-term memory serves as an efficient buffer for maintaining no longer directly accessible information. How robust are visual memories against interference? Memory for simple visual features has proven vulnerable to distractors containing conflicting information along the relevant stimulus dimension, leading to the idea that interacting feature-specific channels at an early stage of visual processing support memory for simple visual features. Here we showed that memory for a single randomly orientated grating was susceptible to interference from a to-be-ignored distractor grating presented midway through a 3-s delay period. Memory for the initially presented orientation became noisier when it differed from the distractor orientation, and response distributions were shifted toward the distractor orientation (by ∼3°). Interestingly, when the distractor was rendered task-relevant by making it a second memory target, memory for both retained orientations showed reduced reliability as a function of increased orientation differences between them. However, the degree to which responses to the first grating shifted toward the orientation of the task-relevant second grating was much reduced. Finally, using a dichoptic display, we demonstrated that these systematic biases caused by a consciously perceived distractor disappeared once the distractor was presented outside of participants' awareness. Together, our results show that visual short-term memory for orientation can be systematically biased by interfering information that is consciously perceived. (c) 2015 APA, all rights reserved.

  11. Onboard Robust Visual Tracking for UAVs Using a Reliable Global-Local Object Model

    PubMed Central

    Fu, Changhong; Duan, Ran; Kircali, Dogan; Kayacan, Erdal

    2016-01-01

    In this paper, we present a novel onboard robust visual algorithm for long-term arbitrary 2D and 3D object tracking using a reliable global-local object model for unmanned aerial vehicle (UAV) applications, e.g., autonomous tracking and chasing a moving target. The first main approach in this novel algorithm is the use of a global matching and local tracking approach. In other words, the algorithm initially finds feature correspondences in a way that an improved binary descriptor is developed for global feature matching and an iterative Lucas–Kanade optical flow algorithm is employed for local feature tracking. The second main module is the use of an efficient local geometric filter (LGF), which handles outlier feature correspondences based on a new forward-backward pairwise dissimilarity measure, thereby maintaining pairwise geometric consistency. In the proposed LGF module, a hierarchical agglomerative clustering, i.e., bottom-up aggregation, is applied using an effective single-link method. The third proposed module is a heuristic local outlier factor (to the best of our knowledge, it is utilized for the first time to deal with outlier features in a visual tracking application), which further maximizes the representation of the target object in which we formulate outlier feature detection as a binary classification problem with the output features of the LGF module. Extensive UAV flight experiments show that the proposed visual tracker achieves real-time frame rates of more than thirty-five frames per second on an i7 processor with 640 × 512 image resolution and outperforms the most popular state-of-the-art trackers favorably in terms of robustness, efficiency and accuracy. PMID:27589769
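    Of the modules listed above, the local feature tracking step is the most standard: pyramidal Lucas-Kanade optical flow propagates feature points from one frame to the next, and unreliable correspondences can be pruned with a forward-backward consistency check (a simpler cousin of the paper's forward-backward pairwise dissimilarity filter). The OpenCV sketch below, with two synthetic frames and an assumed 1-pixel consistency threshold, illustrates only this step, not the full global-local tracker.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic grayscale frames: a textured patch shifted by (5, 3) pixels.
prev = np.zeros((240, 320), np.uint8)
patch = rng.integers(0, 255, (60, 60), dtype=np.uint8)
prev[90:150, 130:190] = patch
curr = np.zeros_like(prev)
curr[93:153, 135:195] = patch

# Detect corners in the previous frame and track them into the current frame.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=50, qualityLevel=0.01, minDistance=5)
p1, st, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None, winSize=(21, 21), maxLevel=3)

# Forward-backward check: track back and keep points that return to the start.
p0_back, st_back, _ = cv2.calcOpticalFlowPyrLK(curr, prev, p1, None, winSize=(21, 21), maxLevel=3)
fb_error = np.linalg.norm(p0 - p0_back, axis=2).ravel()
good = (st.ravel() == 1) & (st_back.ravel() == 1) & (fb_error < 1.0)

flow = (p1 - p0).reshape(-1, 2)[good]
print("tracked points:", int(good.sum()), "median flow:", np.median(flow, axis=0))
```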

  12. Fragmented Perception: Slower Space-Based but Faster Object-Based Attention in Recent-Onset Psychosis with and without Schizophrenia

    PubMed Central

    Smid, Henderikus G. O. M.; Bruggeman, Richard; Martens, Sander

    2013-01-01

    Background: Schizophrenia is associated with impairments of the perception of objects, but how this affects higher cognitive functions, whether this impairment is already present after recent onset of psychosis, and whether it is specific for schizophrenia related psychosis, is not clear. We therefore tested the hypothesis that because schizophrenia is associated with impaired object perception, schizophrenia patients should differ in shifting attention between objects compared to healthy controls. To test this hypothesis, a task was used that allowed us to separately observe space-based and object-based covert orienting of attention. To examine whether impairment of object-based visual attention is related to higher order cognitive functions, standard neuropsychological tests were also administered. Method: Patients with recent onset psychosis and normal controls performed the attention task, in which space- and object-based attention shifts were induced by cue-target sequences that required reorienting of attention within an object, or reorienting attention between objects. Results: Patients with and without schizophrenia showed slower than normal spatial attention shifts, but the object-based component of attention shifts in patients was smaller than normal. Schizophrenia was specifically associated with slowed right-to-left attention shifts. Reorienting speed was significantly correlated with verbal memory scores in controls, and with visual attention scores in patients, but not with speed-of-processing scores in either group. Conclusions: Deficits of object-perception and spatial attention shifting are not only associated with schizophrenia, but are common to all psychosis patients. Schizophrenia patients only differed by having abnormally slow right-to-left visual field reorienting. Deficits of object-perception and spatial attention shifting are already present after recent onset of psychosis. Studies investigating visual spatial attention should take into account the separable effects of space-based and object-based shifting of attention. Impaired reorienting in patients was related to impaired visual attention, but not to deficits of processing speed and verbal memory. PMID:23536901

  13. Fragmented perception: slower space-based but faster object-based attention in recent-onset psychosis with and without Schizophrenia.

    PubMed

    Smid, Henderikus G O M; Bruggeman, Richard; Martens, Sander

    2013-01-01

    Schizophrenia is associated with impairments of the perception of objects, but how this affects higher cognitive functions, whether this impairment is already present after recent onset of psychosis, and whether it is specific for schizophrenia related psychosis, is not clear. We therefore tested the hypothesis that because schizophrenia is associated with impaired object perception, schizophrenia patients should differ in shifting attention between objects compared to healthy controls. To test this hypothesis, a task was used that allowed us to separately observe space-based and object-based covert orienting of attention. To examine whether impairment of object-based visual attention is related to higher order cognitive functions, standard neuropsychological tests were also administered. Patients with recent onset psychosis and normal controls performed the attention task, in which space- and object-based attention shifts were induced by cue-target sequences that required reorienting of attention within an object, or reorienting attention between objects. Patients with and without schizophrenia showed slower than normal spatial attention shifts, but the object-based component of attention shifts in patients was smaller than normal. Schizophrenia was specifically associated with slowed right-to-left attention shifts. Reorienting speed was significantly correlated with verbal memory scores in controls, and with visual attention scores in patients, but not with speed-of-processing scores in either group. deficits of object-perception and spatial attention shifting are not only associated with schizophrenia, but are common to all psychosis patients. Schizophrenia patients only differed by having abnormally slow right-to-left visual field reorienting. Deficits of object-perception and spatial attention shifting are already present after recent onset of psychosis. Studies investigating visual spatial attention should take into account the separable effects of space-based and object-based shifting of attention. Impaired reorienting in patients was related to impaired visual attention, but not to deficits of processing speed and verbal memory.

  14. Stimulus factors in motion perception and spatial orientation

    NASA Technical Reports Server (NTRS)

    Post, R. B.; Johnson, C. A.

    1984-01-01

    The Malcolm horizon, or Peripheral Vision Horizon Device (PVHD), uses a large projected light stimulus as an attitude indicator in order to achieve a more compelling sense of roll than is obtained with smaller devices. The basic principle is that the larger stimulus more closely resembles the visibility of a real horizon during roll, and does not require fixation and attention to the degree that smaller displays do. Successful implementation of such a device requires adjustment of the parameters of the visual stimulus so that its effects on motion perception and spatial orientation are optimized. With this purpose in mind, the effects of relevant image variables on the perception of object motion, self motion and spatial orientation are reviewed.

  15. Gravity influences the visual representation of object tilt in parietal cortex.

    PubMed

    Rosenberg, Ari; Angelaki, Dora E

    2014-10-22

    Sensory systems encode the environment in egocentric (e.g., eye, head, or body) reference frames, creating inherently unstable representations that shift and rotate as we move. However, it is widely speculated that the brain transforms these signals into an allocentric, gravity-centered representation of the world that is stable and independent of the observer's spatial pose. Where and how this representation may be achieved is currently unknown. Here we demonstrate that a subpopulation of neurons in the macaque caudal intraparietal area (CIP) visually encodes object tilt in nonegocentric coordinates defined relative to the gravitational vector. Neuronal responses to the tilt of a visually presented planar surface were measured with the monkey in different spatial orientations (upright and rolled left/right ear down) and then compared. This revealed a continuum of representations in which planar tilt was encoded in a gravity-centered reference frame in approximately one-tenth of the comparisons, intermediate reference frames ranging between gravity-centered and egocentric in approximately two-tenths of the comparisons, and in an egocentric reference frame in less than half of the comparisons. Altogether, almost half of the comparisons revealed a shift in the preferred tilt and/or a gain change consistent with encoding object orientation in nonegocentric coordinates. Through neural network modeling, we further show that a purely gravity-centered representation of object tilt can be achieved directly from the population activity of CIP-like units. These results suggest that area CIP may play a key role in creating a stable, allocentric representation of the environment defined relative to an "earth-vertical" direction. Copyright © 2014 the authors 0270-6474/14/3414170-11$15.00/0.

  16. Separate processing of texture and form in the ventral stream: evidence from FMRI and visual agnosia.

    PubMed

    Cavina-Pratesi, C; Kentridge, R W; Heywood, C A; Milner, A D

    2010-02-01

    Real-life visual object recognition requires the processing of more than just geometric (shape, size, and orientation) properties. Surface properties such as color and texture are equally important, particularly for providing information about the material properties of objects. Recent neuroimaging research suggests that geometric and surface properties are dealt with separately within the lateral occipital cortex (LOC) and the collateral sulcus (CoS), respectively. Here we compared objects that differed either in aspect ratio or in surface texture only, keeping all other visual properties constant. Results on brain-intact participants confirmed that surface texture activates an area in the posterior CoS, quite distinct from the area activated by shape within LOC. We also tested 2 patients with visual object agnosia, one of whom (DF) performed well on the texture task but at chance on the shape task, whereas the other (MS) showed the converse pattern. This behavioral double dissociation was matched by a parallel neuroimaging dissociation, with activation in CoS but not LOC in patient DF and activation in LOC but not CoS in patient MS. These data provide presumptive evidence that the areas respectively activated by shape and texture play a causally necessary role in the perceptual discrimination of these features.

  17. Functional neural substrates of posterior cortical atrophy patients.

    PubMed

    Shames, H; Raz, N; Levin, Netta

    2015-07-01

    Posterior cortical atrophy (PCA) is a neurodegenerative syndrome in which the most pronounced pathologic involvement is in the occipito-parietal visual regions. Herein, we aimed to better define the cortical reflection of this unique syndrome using a thorough battery of behavioral and functional MRI (fMRI) tests. Eight PCA patients underwent extensive testing to map their visual deficits. Assessments included visual functions associated with lower and higher components of the cortical hierarchy, as well as dorsal- and ventral-related cortical functions. fMRI was performed on five patients to examine the neuronal substrate of their visual functions. The PCA patient cohort exhibited impairments in stereopsis, saccadic eye movements and higher dorsal stream-related functions, including simultaneous perception, image orientation, figure-from-ground segregation, closure and spatial orientation. In accordance with the behavioral findings, fMRI revealed intact activation in the ventral visual regions of face and object perception while more dorsal aspects of perception, including motion and gestalt perception, revealed impaired patterns of activity. In most of the patients, there was a lack of activity in the word form area, which is known to be linked to reading disorders. Finally, there was evidence of reduced cortical representation of the peripheral visual field, corresponding to the behaviorally assessed peripheral visual deficit. The findings are discussed in the context of networks extending from parietal regions, which mediate navigationally related processing, visually guided actions, eye movement control and working memory, suggesting that damage to these networks might explain the wide range of deficits in PCA patients.

  18. View-tolerant face recognition and Hebbian learning imply mirror-symmetric neural tuning to head orientation

    PubMed Central

    Leibo, Joel Z.; Liao, Qianli; Freiwald, Winrich A.; Anselmi, Fabio; Poggio, Tomaso

    2017-01-01

    The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and robust against identity-preserving transformations like depth-rotations [1, 2]. Current computational models of object recognition, including recent deep learning networks, generate these properties through a hierarchy of alternating selectivity-increasing filtering and tolerance-increasing pooling operations, similar to simple-complex cells operations [3, 4, 5, 6]. Here we prove that a class of hierarchical architectures and a broad set of biologically plausible learning rules generate approximate invariance to identity-preserving transformations at the top level of the processing hierarchy. However, all past models tested failed to reproduce the most salient property of an intermediate representation of a three-level face-processing hierarchy in the brain: mirror-symmetric tuning to head orientation [7]. Here we demonstrate that one specific biologically-plausible Hebb-type learning rule generates mirror-symmetric tuning to bilaterally symmetric stimuli like faces at intermediate levels of the architecture and show why it does so. Thus the tuning properties of individual cells inside the visual stream appear to result from group properties of the stimuli they encode and to reflect the learning rules that sculpted the information-processing system within which they reside. PMID:27916522

  19. Position estimation and driving of an autonomous vehicle by monocular vision

    NASA Astrophysics Data System (ADS)

    Hanan, Jay C.; Kayathi, Pavan; Hughlett, Casey L.

    2007-04-01

    Automatic adaptive tracking in real-time for target recognition provided autonomous control of a scale model electric truck. The two-wheel drive truck was modified as an autonomous rover test-bed for vision based guidance and navigation. Methods were implemented to monitor tracking error and ensure a safe, accurate arrival at the intended science target. Some methods are situation independent relying only on the confidence error of the target recognition algorithm. Other methods take advantage of the scenario of combined motion and tracking to filter out anomalies. In either case, only a single calibrated camera was needed for position estimation. Results from real-time autonomous driving tests on the JPL simulated Mars yard are presented. Recognition error was often situation dependent. For the rover case, the background was in motion and may be characterized to provide visual cues on rover travel such as rate, pitch, roll, and distance to objects of interest or hazards. Objects in the scene may be used as landmarks, or waypoints, for such estimations. As objects are approached, their scale increases and their orientation may change. In addition, particularly on rough terrain, these orientation and scale changes may be unpredictable. Feature extraction combined with the neural network algorithm was successful in providing visual odometry in the simulated Mars environment.

  20. The effects of acute alcohol exposure on the response properties of neurons in visual cortex area 17 of cats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Bo; State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Science, Beijing 100101; Xia Jing

    Physiological and behavioral studies have demonstrated that a number of visual functions such as visual acuity, contrast sensitivity, and motion perception can be impaired by acute alcohol exposure. The orientation- and direction-selective responses of cells in primary visual cortex are thought to participate in the perception of form and motion. To investigate how orientation selectivity and direction selectivity of neurons are influenced by acute alcohol exposure in vivo, we used the extracellular single-unit recording technique to examine the response properties of neurons in primary visual cortex (A17) of adult cats. We found that alcohol reduces spontaneous activity, visual evoked unit responses, the signal-to-noise ratio, and orientation selectivity of A17 cells. In addition, small but detectable changes in both the preferred orientation/direction and the bandwidth of the orientation tuning curve of strongly orientation-biased A17 cells were observed after acute alcohol administration. Our findings may provide physiological evidence for some alcohol-related deficits in visual function observed in behavioral studies.

  1. Spatial asymmetry in tactile sensor skin deformation aids perception of edge orientation during haptic exploration.

    PubMed

    Ponce Wong, Ruben D; Hellman, Randall B; Santos, Veronica J

    2014-01-01

    Upper-limb amputees rely primarily on visual feedback when using their prostheses to interact with others or objects in their environment. A constant reliance upon visual feedback can be mentally exhausting and does not suffice for many activities when line-of-sight is unavailable. Upper-limb amputees could greatly benefit from the ability to perceive edges, one of the most salient features of 3D shape, through touch alone. We present an approach for estimating edge orientation with respect to an artificial fingertip through haptic exploration using a multimodal tactile sensor on a robot hand. Key parameters from the tactile signals for each of four exploratory procedures were used as inputs to a support vector regression model. Edge orientation angles ranging from -90 to 90 degrees were estimated with an 85-input model having an R (2) of 0.99 and RMS error of 5.08 degrees. Electrode impedance signals provided the most useful inputs by encoding spatially asymmetric skin deformation across the entire fingertip. Interestingly, sensor regions that were not in direct contact with the stimulus provided particularly useful information. Methods described here could pave the way for semi-autonomous capabilities in prosthetic or robotic hands during haptic exploration, especially when visual feedback is unavailable.
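    The regression set-up summarized above (an 85-dimensional tactile feature vector mapped to an edge angle between -90 and 90 degrees) follows a standard support vector regression pattern. The sketch below reproduces that pattern with scikit-learn on synthetic features whose spatial asymmetry varies with the angle; the fake data, the RBF kernel and the C/epsilon values are assumptions, not the sensor features or the model reported in the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Fake dataset: 500 presses of an edge at angles in [-90, 90] degrees.
# Each press yields 85 features whose asymmetry depends on the angle.
n_samples, n_features = 500, 85
angles = rng.uniform(-90.0, 90.0, n_samples)
basis = rng.standard_normal((2, n_features))          # fixed feature loadings
X = (np.cos(np.deg2rad(angles))[:, None] * basis[0]
     + np.sin(np.deg2rad(angles))[:, None] * basis[1]
     + 0.1 * rng.standard_normal((n_samples, n_features)))

X_train, X_test, y_train, y_test = train_test_split(X, angles, random_state=0)

# RBF-kernel SVR with standardized inputs (hyperparameters are assumptions).
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
model.fit(X_train, y_train)

pred = model.predict(X_test)
rmse = np.sqrt(np.mean((pred - y_test) ** 2))
print(f"RMS error: {rmse:.2f} degrees")
```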

  2. Retinal constraints on orientation specificity in cat visual cortex.

    PubMed

    Schall, J D; Vitek, D J; Leventhal, A G

    1986-03-01

    Most retinal ganglion cells (Levick and Thibos, 1982) and cortical cells (Leventhal, 1983; Leventhal et al., 1984) subserving peripheral vision respond best to stimuli that are oriented radially, i.e., like the spokes of a wheel with the area centralis at the hub. We have extended this work by comparing directly the distributions of orientations represented in topographically corresponding regions of retina and visual cortex. Both central and peripheral regions were studied. The relations between the orientations of neighboring ganglion cells and the manner in which the overrepresentation of radial orientations is accommodated in the functional architecture of visual cortex were also studied. Our results are based on an analysis of the orientations of the dendritic fields of 1296 ganglion cells throughout the retina and the preferred orientations of 1389 cells located in retinotopically corresponding regions of cortical areas 17, 18, and 19 in the cat. We find that horizontal and vertical orientations are overrepresented in regions of both retina and visual cortex subserving the central 5 degrees of vision. The distributions of the orientations of retinal ganglion cells and cortical cells subserving the horizontal, vertical, and diagonal meridians outside the area centralis differ significantly. The distribution of the preferred orientations of the S (simple) cells in areas 17, 18 and 19 subserving a given part of the retina corresponds to the distribution of the dendritic field orientations of the ganglion cells in that part of retina. The distribution of the preferred orientations of C (complex) cells with narrow receptive fields in area 17 but not C cells with wide receptive fields in areas 17, 18, or 19 subserving a given part of the retina matches the distribution of the orientations of the ganglion cells in that part of retina. The orientations of all of the alpha-cells in 5-9 mm2 patches of retina along the horizontal, vertical, and oblique meridians were determined. A comparison of the orientations of neighboring cells indicates that other than a mutual tendency to be oriented radially, ganglion cells with similar orientations are not clustered in the retina. Reconstructions of electrode penetrations into regions of visual cortex representing peripheral retina indicate that columns subserving radial orientations are wider than those subserving nonradial orientations. Our results provide evidence that the distribution of the preferred orientations of simple cells in visual cortex subserving any region of the visual field matches the distribution of the orientations of the ganglion cells subserving the same region of the visual field.(ABSTRACT TRUNCATED AT 400 WORDS)

  3. Temporal dynamics of encoding, storage and reallocation of visual working memory

    PubMed Central

    Bays, Paul M; Gorgoraptis, Nikos; Wee, Natalie; Marshall, Louise; Husain, Masud

    2012-01-01

    The process of encoding a visual scene into working memory has previously been studied using binary measures of recall. Here we examine the temporal evolution of memory resolution, based on observers’ ability to reproduce the orientations of objects presented in brief, masked displays. Recall precision was accurately described by the interaction of two independent constraints: an encoding limit that determines the maximum rate at which information can be transferred into memory, and a separate storage limit that determines the maximum fidelity with which information can be maintained. Recall variability decreased incrementally with time, consistent with a parallel encoding process in which visual information from multiple objects accumulates simultaneously in working memory. No evidence was observed for a limit on the number of items stored. Cueing one display item with a brief flash led to rapid development of a recall advantage for that item. This advantage was short-lived if the cue was simply a salient visual event, but was maintained if it indicated an object of particular relevance to the task. These cueing effects were observed even for items that had already been encoded into memory, indicating that limited memory resources can be rapidly reallocated to prioritize salient or goal-relevant information. PMID:21911739

  4. Manipulating the disengage operation of covert visual spatial attention.

    PubMed

    Danckert, J; Maruff, P

    1997-05-01

    Processes of covert visual spatial attention have been closely linked to the programming of saccadic eye movements. In particular, it has been hypothesized that the reduction in saccadic latency that occurs in the gap paradigm is due to the prior disengagement of covert visual spatial attention. This explanation has received considerable criticism. No study as yet has attempted to demonstrate a facilitation of the disengagement of attention from a covertly attended object. If such facilitation were possible, it would support the hypothesis that the predisengagement of covert attention is necessary for the generation of express saccades. In two experiments using covert orienting of visual attention tasks (COVAT), with a high probability that targets would appear contralateral to the cued location, we attempted to facilitate the disengagement of covert attention by extinguishing peripheral cues prior to the appearance of targets. We hypothesized that the gap between cue offset and target onset would facilitate disengagement of attention from a covertly attended object. For both experiments, responses to targets appearing after a gap were slower than were responses in the no-gap condition. These results suggest that the prior offset of a covertly attended object does not facilitate the disengagement of attention.

  5. Temporal dynamics of encoding, storage, and reallocation of visual working memory.

    PubMed

    Bays, Paul M; Gorgoraptis, Nikos; Wee, Natalie; Marshall, Louise; Husain, Masud

    2011-09-12

    The process of encoding a visual scene into working memory has previously been studied using binary measures of recall. Here, we examine the temporal evolution of memory resolution, based on observers' ability to reproduce the orientations of objects presented in brief, masked displays. Recall precision was accurately described by the interaction of two independent constraints: an encoding limit that determines the maximum rate at which information can be transferred into memory and a separate storage limit that determines the maximum fidelity with which information can be maintained. Recall variability decreased incrementally with time, consistent with a parallel encoding process in which visual information from multiple objects accumulates simultaneously in working memory. No evidence was observed for a limit on the number of items stored. Cuing one display item with a brief flash led to rapid development of a recall advantage for that item. This advantage was short-lived if the cue was simply a salient visual event but was maintained if it indicated an object of particular relevance to the task. These cuing effects were observed even for items that had already been encoded into memory, indicating that limited memory resources can be rapidly reallocated to prioritize salient or goal-relevant information.

  6. Automated Verification of Design Patterns with LePUS3

    NASA Technical Reports Server (NTRS)

    Nicholson, Jonathan; Gasparis, Epameinondas; Eden, Ammon H.; Kazman, Rick

    2009-01-01

    Specification and [visual] modelling languages are expected to combine strong abstraction mechanisms with rigour, scalability, and parsimony. LePUS3 is a visual, object-oriented design description language axiomatized in a decidable subset of the first-order predicate logic. We demonstrate how LePUS3 is used to formally specify a structural design pattern and prove (verify) whether any Java™ 1.4 program satisfies that specification. We also show how LePUS3 specifications (charts) are composed and how they are verified fully automatically in the Two-Tier Programming Toolkit.

  7. Role of early visual cortex in trans-saccadic memory of object features.

    PubMed

    Malik, Pankhuri; Dessing, Joost C; Crawford, J Douglas

    2015-08-01

    Early visual cortex (EVC) participates in visual feature memory and the updating of remembered locations across saccades, but its role in the trans-saccadic integration of object features is unknown. We hypothesized that if EVC is involved in updating object features relative to gaze, feature memory should be disrupted when saccades remap an object representation into a simultaneously perturbed EVC site. To test this, we applied transcranial magnetic stimulation (TMS) over functional magnetic resonance imaging-localized EVC clusters corresponding to the bottom left/right visual quadrants (VQs). During experiments, these VQs were probed psychophysically by briefly presenting a central object (Gabor patch) while subjects fixated gaze to the right or left (and above). After a short memory interval, participants were required to detect the relative change in orientation of a re-presented test object at the same spatial location. Participants either sustained fixation during the memory interval (fixation task) or made a horizontal saccade that either maintained or reversed the VQ of the object (saccade task). Three TMS pulses (coinciding with the pre-, peri-, and postsaccade intervals) were applied to the left or right EVC. This had no effect when (a) fixation was maintained, (b) saccades kept the object in the same VQ, or (c) the EVC quadrant corresponding to the first object was stimulated. However, as predicted, TMS reduced performance when saccades (especially larger saccades) crossed the remembered object location and brought it into the VQ corresponding to the TMS site. This suppression effect was statistically significant for leftward saccades and followed a weaker trend for rightward saccades. These causal results are consistent with the idea that EVC is involved in the gaze-centered updating of object features for trans-saccadic memory and perception.

  8. The effects of lesions of the superior colliculus on locomotor orientation and the orienting reflex in the rat.

    PubMed

    Goodale, M A; Murison, R C

    1975-05-02

    The effects of bilateral removal of the superior colliculus or visual cortex on visually guided locomotor movements in rats performing a brightness discrimination task were investigated directly with the use of cine film. Rats with collicular lesions showed patterns of locomotion comparable to or more efficient than those of normal animals when approaching one of 5 small doors located at one end of a large open area. In contrast, animals with large but incomplete lesions of visual cortex were distinctly impaired in their visual control of approach responses to the same stimuli. On the other hand, rats with collicular damage showed no orienting reflex or evidence of distraction in the same task when novel visual or auditory stimuli were presented. However, both normal and visual-decorticate rats showed various components of the orienting reflex and disturbance in task performance when the same novel stimuli were presented. These results suggest that although the superior colliculus does not appear to be essential to the visual control of locomotor orientation, this midbrain structure might participate in the mediation of shifts in visual fixation and attention. Visual cortex, while contributing to visuospatial guidance of locomotor movements, might not play a significant role in the control and integration of the orienting reflex.

  9. Gestalten of today: early processing of visual contours and surfaces.

    PubMed

    Kovács, I

    1996-12-01

    While much is known about the specialized, parallel processing streams of low-level vision that extract primary visual cues, there is only limited knowledge about the dynamic interactions between them. How are the fragments, caught by local analyzers, assembled together to provide us with a unified percept? How are local discontinuities in texture, motion or depth evaluated with respect to object boundaries and surface properties? These questions are presented within the framework of orientation-specific spatial interactions of early vision. Key observations of psychophysics, anatomy and neurophysiology on interactions of various spatial and temporal ranges are reviewed. Aspects of the functional architecture and possible neural substrates of local orientation-specific interactions are discussed, underlining their role in the integration of information across the visual field, and particularly in contour integration. Examples are provided demonstrating that global context, such as contour closure and figure-ground assignment, affects these local interactions. It is illustrated that figure-ground assignment is realized early in visual processing, and that the pattern of early interactions also brings about an effective and sparse coding of visual shape. Finally, it is concluded that the underlying functional architecture is not only dynamic and context dependent, but the pattern of connectivity depends as much on past experience as on actual stimulation.

  10. Figure-ground modulation in awake primate thalamus.

    PubMed

    Jones, Helen E; Andolina, Ian M; Shipp, Stewart D; Adams, Daniel L; Cudeiro, Javier; Salt, Thomas E; Sillito, Adam M

    2015-06-02

    Figure-ground discrimination refers to the perception of an object, the figure, against a nondescript background. Neural mechanisms of figure-ground detection have been associated with feedback interactions between higher centers and primary visual cortex and have been held to index the effect of global analysis on local feature encoding. Here, in recordings from visual thalamus of alert primates, we demonstrate a robust enhancement of neuronal firing when the figure, as opposed to the ground, component of a motion-defined figure-ground stimulus is located over the receptive field. In this paradigm, visual stimulation of the receptive field and its near environs is identical across both conditions, suggesting the response enhancement reflects higher integrative mechanisms. It thus appears that cortical activity generating the higher-order percept of the figure is simultaneously reentered into the lowest level that is anatomically possible (the thalamus), so that the signature of the evolving representation of the figure is imprinted on the input driving it in an iterative process.

  11. Figure-ground modulation in awake primate thalamus

    PubMed Central

    Jones, Helen E.; Andolina, Ian M.; Shipp, Stewart D.; Adams, Daniel L.; Cudeiro, Javier; Salt, Thomas E.; Sillito, Adam M.

    2015-01-01

    Figure-ground discrimination refers to the perception of an object, the figure, against a nondescript background. Neural mechanisms of figure-ground detection have been associated with feedback interactions between higher centers and primary visual cortex and have been held to index the effect of global analysis on local feature encoding. Here, in recordings from visual thalamus of alert primates, we demonstrate a robust enhancement of neuronal firing when the figure, as opposed to the ground, component of a motion-defined figure-ground stimulus is located over the receptive field. In this paradigm, visual stimulation of the receptive field and its near environs is identical across both conditions, suggesting the response enhancement reflects higher integrative mechanisms. It thus appears that cortical activity generating the higher-order percept of the figure is simultaneously reentered into the lowest level that is anatomically possible (the thalamus), so that the signature of the evolving representation of the figure is imprinted on the input driving it in an iterative process. PMID:25901330

  12. A morphological basis for orientation tuning in primary visual cortex.

    PubMed

    Mooser, François; Bosking, William H; Fitzpatrick, David

    2004-08-01

    Feedforward connections are thought to be important in the generation of orientation-selective responses in visual cortex by establishing a bias in the sampling of information from regions of visual space that lie along a neuron's axis of preferred orientation. It remains unclear, however, which structural elements (dendrites or axons) are ultimately responsible for conveying this sampling bias. To explore this question, we have examined the spatial arrangement of feedforward axonal connections that link non-oriented neurons in layer 4 and orientation-selective neurons in layer 2/3 of visual cortex in the tree shrew. Target sites of labeled boutons in layer 2/3 resulting from focal injections of biocytin in layer 4 show an orientation-specific axial bias that is sufficient to confer orientation tuning to layer 2/3 neurons. We conclude that the anisotropic arrangement of axon terminals is the principal source of the orientation bias contributed by feedforward connections.

  13. Real-time physiological monitoring with distributed networks of sensors and object-oriented programming techniques

    NASA Astrophysics Data System (ADS)

    Wiesmann, William P.; Pranger, L. Alex; Bogucki, Mary S.

    1998-05-01

    Remote monitoring of physiologic data from individual high-risk workers distributed over time and space is a considerable challenge. This is often due to an inadequate capability to accurately integrate large amounts of data into usable information in real time. In this report, we have used the vertical and horizontal organization of the 'fireground' as a framework to design a distributed network of sensors. In this system, sensor output is linked through a hierarchical object-oriented programming process to accurately interpret physiological data, incorporate these data into a synchronous model, and relay processed data, trends, and predictions to members of the fire incident command structure. There are several unique aspects to this approach. The first includes a process to account for variability in vital parameter values for each individual's normal physiologic response by including an adaptive network in each data process. This information is used by the model in an iterative process to baseline a 'normal' physiologic response to a given stress for each individual and to detect deviations that indicate dysfunction or a significant insult. The second unique capability of the system orders the information for each user, including the subject, local company officers, medical personnel, and the incident commanders. Information can be retrieved and used for training exercises and after-action analysis. Finally, this system can easily be adapted to existing communication and processing links along with incorporating the best parts of current models through the use of object-oriented programming techniques. These modern software techniques are well suited to handling multiple data processes independently over time in a distributed network.
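
    A minimal Python sketch of the object-oriented structure described above, assuming each physiological channel adapts a per-individual baseline and flags deviations for relay up the command hierarchy; the class names, adaptation rule, and thresholds are illustrative, not the system's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class VitalSignChannel:
    name: str
    baseline: float = 0.0
    alpha: float = 0.05                       # adaptation rate (assumed)
    history: list = field(default_factory=list)

    def update(self, value: float) -> bool:
        """Adapt the per-individual baseline and return True if the reading deviates."""
        self.history.append(value)
        deviates = abs(value - self.baseline) > 0.25 * max(abs(self.baseline), 1.0)
        # Exponential moving average: a simple stand-in for the adaptive network.
        self.baseline = (1 - self.alpha) * self.baseline + self.alpha * value
        return deviates

@dataclass
class Firefighter:
    ident: str
    channels: dict = field(default_factory=dict)

    def report(self, readings: dict) -> dict:
        """Return per-channel alarm flags for the incident command structure."""
        return {name: self.channels[name].update(value) for name, value in readings.items()}

crew_member = Firefighter("engine-3/2", {"heart_rate": VitalSignChannel("heart_rate", baseline=70.0)})
print(crew_member.report({"heart_rate": 180.0}))   # {'heart_rate': True}
```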

  14. The anatomy of object recognition--visual form agnosia caused by medial occipitotemporal stroke.

    PubMed

    Karnath, Hans-Otto; Rüter, Johannes; Mandler, André; Himmelbach, Marc

    2009-05-06

    The influential model of visual information processing by Milner and Goodale (1995) has suggested a dissociation between action- and perception-related processing in a dorsal versus ventral stream projection. It was inspired substantially by the observation of a double dissociation of disturbed visual action versus perception in patients with optic ataxia on the one hand and patients with visual form agnosia (VFA) on the other. Unfortunately, almost all cases with VFA reported so far suffered from inhalational intoxication, the majority with carbon monoxide (CO). Since CO induces a diffuse and widespread pattern of neuronal and white matter damage throughout the whole brain, precise conclusions from these patients with VFA on the selective role of ventral stream structures for shape and orientation perception were difficult to draw. Here, we report patient J.S., who demonstrated VFA after a well-circumscribed brain lesion due to stroke etiology. Like the famous patient D.F. with VFA after CO intoxication studied by Milner, Goodale, and coworkers (Goodale et al., 1991, 1994; Milner et al., 1991; Servos et al., 1995; Mon-Williams et al., 2001a,b; Wann et al., 2001; Westwood et al., 2002; McIntosh et al., 2004; Schenk and Milner, 2006), J.S. showed an obvious dissociation between disturbed visual perception of shape and orientation information on the one side and preserved visuomotor abilities based on the same information on the other. In both hemispheres, damage primarily affected the fusiform and the lingual gyri as well as the adjacent posterior cingulate gyrus. We conclude that these medial structures of the ventral occipitotemporal cortex are integral for the normal flow of shape and of contour information into the ventral stream system, allowing objects to be recognized.

  15. The origins of metamodality in visual object area LO: Bodily topographical biases and increased functional connectivity to S1

    PubMed Central

    Tal, Zohar; Geva, Ran; Amedi, Amir

    2016-01-01

    Recent evidence from blind participants suggests that visual areas are task-oriented and sensory modality input independent rather than sensory-specific to vision. Specifically, visual areas are thought to retain their functional selectivity when using non-visual inputs (touch or sound) even without having any visual experience. However, this theory is still controversial since it is not clear whether this also characterizes the sighted brain, and whether the reported results in the sighted reflect basic fundamental a-modal processes or are an epiphenomenon to a large extent. In the current study, we addressed these questions using a series of fMRI experiments aimed to explore visual cortex responses to passive touch on various body parts and the coupling between the parietal and visual cortices as manifested by functional connectivity. We show that passive touch robustly activated the object selective parts of the lateral occipital (LO) cortex while deactivating almost all other occipital retinotopic areas. Furthermore, passive touch responses in the visual cortex were specific to hand and upper trunk stimulations. Psychophysiological interaction (PPI) analysis suggests that LO is functionally connected to the hand area in the primary somatosensory homunculus (S1) during hand and shoulder stimulations, but not to any of the other body parts. We suggest that LO is a fundamental hub that serves as a node between visual-object selective areas and S1 hand representation, probably due to the critical evolutionary role of touch in object recognition and manipulation. These results might also point to a more general principle suggesting that recruitment or deactivation of the visual cortex by other sensory input depends on the ecological relevance of the information conveyed by this input to the task/computations carried out by each area or network. This is likely to rely on the unique and differential pattern of connectivity for each visual area with the rest of the brain. PMID:26673114

  16. Tensor Toolbox for MATLAB v. 3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolda, Tamara; Bader, Brett W.; Acar Ataman, Evrim

    Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to network analysis. The Tensor Toolbox provides classes for manipulating dense, sparse, and structured tensors using MATLAB's object-oriented features. It also provides algorithms for tensor decomposition and factorization, algorithms for computing tensor eigenvalues, and methods for visualization of results.
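
    As an illustration of the kind of operation such a toolbox provides, the following NumPy sketch (not the MATLAB toolbox API) computes a truncated higher-order SVD, i.e. a Tucker-style decomposition, of a dense 3-way tensor; the ranks chosen are arbitrary.

```python
import numpy as np

def unfold(tensor, mode):
    """Matricize the tensor along the given mode."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor, ranks):
    """Truncated HOSVD: leading singular vectors of each unfolding, then the core."""
    factors = []
    for mode, rank in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(u[:, :rank])
    core = tensor
    for mode, u in enumerate(factors):
        # Multiply the current core by u.T along this mode.
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

X = np.random.rand(10, 12, 8)                  # a dense 3-way tensor
core, factors = hosvd(X, ranks=(3, 3, 3))
print(core.shape, [f.shape for f in factors])  # (3, 3, 3) [(10, 3), (12, 3), (8, 3)]
```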

  17. Perceptual upright: the relative effectiveness of dynamic and static images under different gravity States.

    PubMed

    Jenkin, Michael R; Dyde, Richard T; Jenkin, Heather L; Zacher, James E; Harris, Laurence R

    2011-01-01

    The perceived direction of up depends on both gravity and visual cues to orientation. Static visual cues to orientation have been shown to be less effective in influencing the perception of upright (PU) under microgravity conditions than they are on earth (Dyde et al., 2009). Here we introduce dynamic orientation cues into the visual background to ascertain whether they might increase the effectiveness of visual cues in defining the PU under different gravity conditions. Brief periods of microgravity and hypergravity were created using parabolic flight. Observers viewed a polarized, natural scene presented at various orientations on a laptop viewed through a hood which occluded all other visual cues. The visual background was either an animated video clip in which actors moved along the visual ground plane or an individual static frame taken from the same clip. We measured the perceptual upright using the oriented character recognition test (OCHART). Dynamic visual cues significantly enhance the effectiveness of vision in determining the perceptual upright under normal gravity conditions. Strong trends were found for dynamic visual cues to produce an increase in the visual effect under both microgravity and hypergravity conditions.

  18. A Computational Study of How Orientation Bias in the Lateral Geniculate Nucleus Can Give Rise to Orientation Selectivity in Primary Visual Cortex

    PubMed Central

    Kuhlmann, Levin; Vidyasagar, Trichur R.

    2011-01-01

    Controversy remains about how orientation selectivity emerges in simple cells of the mammalian primary visual cortex. In this paper, we present a computational model of how the orientation-biased responses of cells in lateral geniculate nucleus (LGN) can contribute to the orientation selectivity in simple cells in cats. We propose that simple cells are excited by lateral geniculate fields with an orientation-bias and disynaptically inhibited by unoriented lateral geniculate fields (or biased fields pooled across orientations), both at approximately the same retinotopic co-ordinates. This interaction, combined with recurrent cortical excitation and inhibition, helps to create the sharp orientation tuning seen in simple cell responses. Along with describing orientation selectivity, the model also accounts for the spatial frequency and length–response functions in simple cells, in normal conditions as well as under the influence of the GABAA antagonist, bicuculline. In addition, the model captures the response properties of LGN and simple cells to simultaneous visual stimulation and electrical stimulation of the LGN. We show that the sharp selectivity for stimulus orientation seen in primary visual cortical cells can be achieved without the excitatory convergence of the LGN input cells with receptive fields along a line in visual space, which has been a core assumption in classical models of visual cortex. We have also simulated how the full range of orientations seen in the cortex can emerge from the activity among broadly tuned channels tuned to a limited number of optimum orientations, just as in the classical case of coding for color in trichromatic primates. PMID:22013414

  19. E-Learning Quality Assurance: A Process-Oriented Lifecycle Model

    ERIC Educational Resources Information Center

    Abdous, M'hammed

    2009-01-01

    Purpose: The purpose of this paper is to propose a process-oriented lifecycle model for ensuring quality in e-learning development and delivery. As a dynamic and iterative process, quality assurance (QA) is intertwined with the e-learning development process. Design/methodology/approach: After reviewing the existing literature, particularly…

  20. Frontal and parietal theta burst TMS impairs working memory for visual-spatial conjunctions

    PubMed Central

    Morgan, Helen M.; Jackson, Margaret C.; van Koningsbruggen, Martijn G.; Shapiro, Kimron L.; Linden, David E.J.

    2013-01-01

    In tasks that selectively probe visual or spatial working memory (WM) frontal and posterior cortical areas show a segregation, with dorsal areas preferentially involved in spatial (e.g. location) WM and ventral areas in visual (e.g. object identity) WM. In a previous fMRI study [1], we showed that right parietal cortex (PC) was more active during WM for orientation, whereas left inferior frontal gyrus (IFG) was more active during colour WM. During WM for colour-orientation conjunctions, activity in these areas was intermediate to the level of activity for the single task preferred and non-preferred information. To examine whether these specialised areas play a critical role in coordinating visual and spatial WM to perform a conjunction task, we used theta burst transcranial magnetic stimulation (TMS) to induce a functional deficit. Compared to sham stimulation, TMS to right PC or left IFG selectively impaired WM for conjunctions but not single features. This is consistent with findings from visual search paradigms, in which frontal and parietal TMS selectively affects search for conjunctions compared to single features, and with combined TMS and functional imaging work suggesting that parietal and frontal regions are functionally coupled in tasks requiring integration of visual and spatial information. Our results thus elucidate mechanisms by which the brain coordinates spatially segregated processing streams and have implications beyond the field of working memory. PMID:22483548

  1. Frontal and parietal theta burst TMS impairs working memory for visual-spatial conjunctions.

    PubMed

    Morgan, Helen M; Jackson, Margaret C; van Koningsbruggen, Martijn G; Shapiro, Kimron L; Linden, David E J

    2013-03-01

    In tasks that selectively probe visual or spatial working memory (WM) frontal and posterior cortical areas show a segregation, with dorsal areas preferentially involved in spatial (e.g. location) WM and ventral areas in visual (e.g. object identity) WM. In a previous fMRI study [1], we showed that right parietal cortex (PC) was more active during WM for orientation, whereas left inferior frontal gyrus (IFG) was more active during colour WM. During WM for colour-orientation conjunctions, activity in these areas was intermediate to the level of activity for the single task preferred and non-preferred information. To examine whether these specialised areas play a critical role in coordinating visual and spatial WM to perform a conjunction task, we used theta burst transcranial magnetic stimulation (TMS) to induce a functional deficit. Compared to sham stimulation, TMS to right PC or left IFG selectively impaired WM for conjunctions but not single features. This is consistent with findings from visual search paradigms, in which frontal and parietal TMS selectively affects search for conjunctions compared to single features, and with combined TMS and functional imaging work suggesting that parietal and frontal regions are functionally coupled in tasks requiring integration of visual and spatial information. Our results thus elucidate mechanisms by which the brain coordinates spatially segregated processing streams and have implications beyond the field of working memory. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Accounting for the phase, spatial frequency and orientation demands of the task improves metrics based on the visual Strehl ratio.

    PubMed

    Young, Laura K; Love, Gordon D; Smithson, Hannah E

    2013-09-20

    Advances in ophthalmic instrumentation have allowed high-order aberrations to be measured in vivo. These measurements describe the distortions to a plane wavefront entering the eye, but not the effect they have on visual performance. One metric for predicting visual performance from a wavefront measurement uses the visual Strehl ratio, calculated in the optical transfer function (OTF) domain (VSOTF) (Thibos et al., 2004). We considered how well such a metric captures empirical measurements of the effects of defocus, coma and secondary astigmatism on letter identification and on reading. We show that predictions using the visual Strehl ratio can be significantly improved by weighting the OTF by the spatial frequency band that mediates letter identification and further improved by considering the orientation of phase and contrast changes imposed by the aberration. We additionally showed that these altered metrics compare well to a cross-correlation-based metric. We suggest a version of the visual Strehl ratio, VScombined, that incorporates primarily those phase disruptions and contrast changes that have been shown independently to affect object recognition processes. This metric compared well to VSOTF for letter identification and was the best predictor of reading performance, having a higher correlation with the data than either the VSOTF or cross-correlation-based metric. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. UML as a cell and biochemistry modeling language.

    PubMed

    Webb, Ken; White, Tony

    2005-06-01

    The systems biology community is building increasingly complex models and simulations of cells and other biological entities, and are beginning to look at alternatives to traditional representations such as those provided by ordinary differential equations (ODE). The lessons learned over the years by the software development community in designing and building increasingly complex telecommunication and other commercial real-time reactive systems, can be advantageously applied to the problems of modeling in the biology domain. Making use of the object-oriented (OO) paradigm, the unified modeling language (UML) and Real-Time Object-Oriented Modeling (ROOM) visual formalisms, and the Rational Rose RealTime (RRT) visual modeling tool, we describe a multi-step process we have used to construct top-down models of cells and cell aggregates. The simple example model described in this paper includes membranes with lipid bilayers, multiple compartments including a variable number of mitochondria, substrate molecules, enzymes with reaction rules, and metabolic pathways. We demonstrate the relevance of abstraction, reuse, objects, classes, component and inheritance hierarchies, multiplicity, visual modeling, and other current software development best practices. We show how it is possible to start with a direct diagrammatic representation of a biological structure such as a cell, using terminology familiar to biologists, and by following a process of gradually adding more and more detail, arrive at a system with structure and behavior of arbitrary complexity that can run and be observed on a computer. We discuss our CellAK (Cell Assembly Kit) approach in terms of features found in SBML, CellML, E-CELL, Gepasi, Jarnac, StochSim, Virtual Cell, and membrane computing systems.
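
    A minimal Python sketch of the kind of top-down, object-oriented cell model described above, with compartments, a variable number of mitochondria, and enzymes carrying reaction rules; the class names and the toy rate rule are illustrative only, not the CellAK design.

```python
class Compartment:
    def __init__(self, name):
        self.name = name
        self.substrates = {}                  # substrate name -> amount

class Mitochondrion(Compartment):
    """Inherits compartment behaviour; a cell may contain a variable number of these."""
    pass

class Enzyme:
    def __init__(self, substrate, product, rate):
        self.substrate, self.product, self.rate = substrate, product, rate

    def react(self, compartment):
        """Apply a toy first-order reaction rule inside one compartment."""
        amount = compartment.substrates.get(self.substrate, 0.0)
        converted = self.rate * amount
        compartment.substrates[self.substrate] = amount - converted
        compartment.substrates[self.product] = compartment.substrates.get(self.product, 0.0) + converted

class Cell:
    def __init__(self, n_mitochondria=3):
        self.cytoplasm = Compartment("cytoplasm")
        self.mitochondria = [Mitochondrion(f"mito-{i}") for i in range(n_mitochondria)]
        self.enzymes = []

    def step(self):
        for enzyme in self.enzymes:
            enzyme.react(self.cytoplasm)

cell = Cell()
cell.cytoplasm.substrates["glucose"] = 10.0
cell.enzymes.append(Enzyme("glucose", "pyruvate", rate=0.1))
cell.step()
print(cell.cytoplasm.substrates)              # {'glucose': 9.0, 'pyruvate': 1.0}
```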

  4. Declarative language design for interactive visualization.

    PubMed

    Heer, Jeffrey; Bostock, Michael

    2010-01-01

    We investigate the design of declarative, domain-specific languages for constructing interactive visualizations. By separating specification from execution, declarative languages can simplify development, enable unobtrusive optimization, and support retargeting across platforms. We describe the design of the Protovis specification language and its implementation within an object-oriented, statically-typed programming language (Java). We demonstrate how to support rich visualizations without requiring a toolkit-specific data model and extend Protovis to enable declarative specification of animated transitions. To support cross-platform deployment, we introduce rendering and event-handling infrastructures decoupled from the runtime platform, letting designers retarget visualization specifications (e.g., from desktop to mobile phone) with reduced effort. We also explore optimizations such as runtime compilation of visualization specifications, parallelized execution, and hardware-accelerated rendering. We present benchmark studies measuring the performance gains provided by these optimizations and compare performance to existing Java-based visualization tools, demonstrating scalability improvements exceeding an order of magnitude.
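
    The separation of specification from execution can be illustrated with a toy sketch (not the Protovis API): a declarative mark specification held as plain data, interpreted by a separate renderer that emits SVG.

```python
data = [4, 7, 2, 9, 5]

# Declarative specification: what to draw, not how to draw it.
spec = {
    "mark": "bar",
    "width": 20,
    "height": lambda d: d * 10,
    "fill": "steelblue",
}

def render(spec, data):
    """Interpret the specification against the data and emit SVG rectangles."""
    parts = ['<svg xmlns="http://www.w3.org/2000/svg" width="200" height="120">']
    for i, d in enumerate(data):
        h = spec["height"](d)
        parts.append(
            f'<rect x="{i * (spec["width"] + 5)}" y="{100 - h}" '
            f'width="{spec["width"]}" height="{h}" fill="{spec["fill"]}"/>'
        )
    parts.append("</svg>")
    return "\n".join(parts)

print(render(spec, data))
```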

  5. The primary visual cortex in the neural circuit for visual orienting

    NASA Astrophysics Data System (ADS)

    Zhaoping, Li

    The primary visual cortex (V1) is traditionally viewed as remote from influencing the brain's motor outputs. However, V1 provides the most abundant cortical inputs directly to the sensory layers of the superior colliculus (SC), a midbrain structure that commands visual orienting such as shifting gaze and turning the head. I will show physiological, anatomical, and behavioral data suggesting that V1 transforms visual input into a saliency map to guide a class of visual orienting that is reflexive or involuntary. In particular, V1 receives a retinotopic map of visual features, such as orientation, color, and motion direction of local visual inputs; local interactions between V1 neurons perform a local-to-global computation to arrive at a saliency map that highlights conspicuous visual locations by higher V1 responses. The conspicuous locations are usually, but not always, where the visual input statistics change. The population of V1 outputs to SC, which is also retinotopic, enables SC to locate, by lateral inhibition between SC neurons, the most salient location as the saccadic target. Experimental tests of this hypothesis will be shown. Variations of the neural circuit for visual orienting across animal species, with more or less V1 involvement, will be discussed. Supported by the Gatsby Charitable Foundation.
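
    A minimal Python sketch of the saliency-map idea described above: local oriented-filter responses, a divisive normalization standing in for local interactions, and winner-take-all selection of the saccade target. The filters and the normalization constant are assumptions, not the proposed V1 circuit.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def oriented_responses(image):
    """Crude orientation channels: horizontal and vertical bar detectors."""
    kernel_h = np.array([[-1, -1, -1], [2, 2, 2], [-1, -1, -1]], dtype=float)
    return [np.abs(convolve(image, kernel_h)), np.abs(convolve(image, kernel_h.T))]

def saliency_map(image):
    responses = oriented_responses(image)
    # Divisive normalization by local mean activity approximates suppressive
    # local interactions; the saliency map is the maximum over channels.
    normalized = [r / (uniform_filter(r, size=9) + 1e-6) for r in responses]
    return np.maximum.reduce(normalized)

image = np.zeros((64, 64))
image[:, ::8] = 1.0           # a regular vertical texture
image[32, 28:36] = 1.0        # one horizontal element among the vertical lines
sal = saliency_map(image)
target = np.unravel_index(np.argmax(sal), sal.shape)
print("saccade target (row, col):", target)
```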

  6. Contribution of Innate Cortical Mechanisms to the Maturation of Orientation Selectivity in Parvalbumin Interneurons

    PubMed Central

    Figueroa Velez, Dario X.; Ellefsen, Kyle L.; Hathaway, Ethan R.; Carathedathu, Mathew C.

    2017-01-01

    The maturation of cortical parvalbumin-positive (PV) interneurons depends on the interaction of innate and experience-dependent factors. Dark-rearing experiments suggest that visual experience determines when broad orientation selectivity emerges in visual cortical PV interneurons. Here, using neural transplantation and in vivo calcium imaging of mouse visual cortex, we investigated whether innate mechanisms contribute to the maturation of orientation selectivity in PV interneurons. First, we confirmed earlier findings showing that broad orientation selectivity emerges in PV interneurons by 2 weeks after vision onset, ∼35 d after these cells are born. Next, we assessed the functional development of transplanted PV (tPV) interneurons. Surprisingly, 25 d after transplantation (DAT) and >2 weeks after vision onset, we found that tPV interneurons have not developed broad orientation selectivity. By 35 DAT, however, broad orientation selectivity emerges in tPV interneurons. Transplantation does not alter orientation selectivity in host interneurons, suggesting that the maturation of tPV interneurons occurs independently from their endogenous counterparts. Together, these results challenge the notion that the onset of vision solely determines when PV interneurons become broadly tuned. Our results reveal that an innate cortical mechanism contributes to the emergence of broad orientation selectivity in PV interneurons. SIGNIFICANCE STATEMENT Early visual experience and innate developmental programs interact to shape cortical circuits. Visual-deprivation experiments have suggested that the onset of visual experience determines when interneurons mature in the visual cortex. Here we used neuronal transplantation and cellular imaging of visual responses to investigate the maturation of parvalbumin-positive (PV) interneurons. Our results suggest that the emergence of broad orientation selectivity in PV interneurons is innately timed. PMID:28123018

  7. Perceptual Learning Selectively Refines Orientation Representations in Early Visual Cortex

    PubMed Central

    Jehee, Janneke F.M.; Ling, Sam; Swisher, Jascha D.; van Bergen, Ruben S.; Tong, Frank

    2013-01-01

    Although practice has long been known to improve perceptual performance, the neural basis of this improvement in humans remains unclear. Using fMRI in conjunction with a novel signal detection-based analysis, we show that extensive practice selectively enhances the neural representation of trained orientations in the human visual cortex. Twelve observers practiced discriminating small changes in the orientation of a laterally presented grating over 20 or more daily one-hour training sessions. Training on average led to a two-fold improvement in discrimination sensitivity, specific to the trained orientation and the trained location, with minimal improvement found for untrained orthogonal orientations or for orientations presented in the untrained hemifield. We measured the strength of orientation-selective responses in individual voxels in early visual areas (V1–V4) using signal detection measures, both pre- and post-training. Although the overall amplitude of the BOLD response was no greater after training, practice nonetheless specifically enhanced the neural representation of the trained orientation at the trained location. This training-specific enhancement of orientation-selective responses was observed in the primary visual cortex (V1) as well as higher extrastriate visual areas V2–V4, and moreover, reliably predicted individual differences in the behavioral effects of perceptual learning. These results demonstrate that extensive training can lead to targeted functional reorganization of the human visual cortex, refining the cortical representation of behaviorally relevant information. PMID:23175828

  8. Perceptual learning selectively refines orientation representations in early visual cortex.

    PubMed

    Jehee, Janneke F M; Ling, Sam; Swisher, Jascha D; van Bergen, Ruben S; Tong, Frank

    2012-11-21

    Although practice has long been known to improve perceptual performance, the neural basis of this improvement in humans remains unclear. Using fMRI in conjunction with a novel signal detection-based analysis, we show that extensive practice selectively enhances the neural representation of trained orientations in the human visual cortex. Twelve observers practiced discriminating small changes in the orientation of a laterally presented grating over 20 or more daily 1 h training sessions. Training on average led to a twofold improvement in discrimination sensitivity, specific to the trained orientation and the trained location, with minimal improvement found for untrained orthogonal orientations or for orientations presented in the untrained hemifield. We measured the strength of orientation-selective responses in individual voxels in early visual areas (V1-V4) using signal detection measures, both before and after training. Although the overall amplitude of the BOLD response was no greater after training, practice nonetheless specifically enhanced the neural representation of the trained orientation at the trained location. This training-specific enhancement of orientation-selective responses was observed in the primary visual cortex (V1) as well as higher extrastriate visual areas V2-V4, and moreover, reliably predicted individual differences in the behavioral effects of perceptual learning. These results demonstrate that extensive training can lead to targeted functional reorganization of the human visual cortex, refining the cortical representation of behaviorally relevant information.

  9. An evaluation of object-oriented image analysis techniques to identify motorized vehicle effects in semi-arid to arid ecosystems of the American West

    USGS Publications Warehouse

    Mladinich, C.

    2010-01-01

    Human disturbance is a leading ecosystem stressor. Human-induced modifications include transportation networks, areal disturbances due to resource extraction, and recreation activities. High-resolution imagery and object-oriented classification rather than pixel-based techniques have successfully identified roads, buildings, and other anthropogenic features. Three commercial, automated feature-extraction software packages (Visual Learning Systems' Feature Analyst, ENVI Feature Extraction, and Definiens Developer) were evaluated by comparing their ability to effectively detect the disturbed surface patterns from motorized vehicle traffic. Each package achieved overall accuracies in the 70% range, demonstrating the potential to map the surface patterns. The Definiens classification was more consistent and statistically valid. Copyright © 2010 by Bellwether Publishing, Ltd. All rights reserved.

  10. Auditory perception and the control of spatially coordinated action of deaf and hearing children.

    PubMed

    Savelsbergh, G J; Netelenbos, J B; Whiting, H T

    1991-03-01

    From birth onwards, auditory stimulation directs and intensifies visual orientation behaviour. In deaf children, by definition, auditory perception cannot take place and cannot, therefore, make a contribution to visual orientation to objects approaching from outside the initial field of view. In experiment 1, a difference in catching ability is demonstrated between deaf and hearing children (10-13 years of age) when the ball approached from the periphery or from outside the field of view. No differences in catching ability between the two groups occurred when the ball approached from within the field of view. A second experiment was conducted in order to determine if differences in catching ability between deaf and hearing children could be attributed to execution of slow orientating movements and/or slow reaction time as a result of the auditory loss. The deaf children showed slower reaction times. No differences were found in movement times between deaf and hearing children. Overall, the findings suggest that a lack of auditory stimulation during development can lead to deficiencies in the coordination of actions such as catching which are both spatially and temporally constrained.

  11. The neural basis of precise visual short-term memory for complex recognisable objects.

    PubMed

    Veldsman, Michele; Mitchell, Daniel J; Cusack, Rhodri

    2017-10-01

    Recent evidence suggests that visual short-term memory (VSTM) capacity estimated using simple objects, such as colours and oriented bars, may not generalise well to more naturalistic stimuli. More visual detail can be stored in VSTM when complex, recognisable objects are maintained compared to simple objects. It is not yet known if it is recognisability that enhances memory precision, nor whether maintenance of recognisable objects is achieved with the same network of brain regions supporting maintenance of simple objects. We used a novel stimulus generation method to parametrically warp photographic images along a continuum, allowing separate estimation of the precision of memory representations and the number of items retained. The stimulus generation method was also designed to create unrecognisable, though perceptually matched, stimuli, to investigate the impact of recognisability on VSTM. We adapted the widely-used change detection and continuous report paradigms for use with complex, photographic images. Across three functional magnetic resonance imaging (fMRI) experiments, we demonstrated greater precision for recognisable objects in VSTM compared to unrecognisable objects. This clear behavioural advantage was not the result of recruitment of additional brain regions, or of stronger mean activity within the core network. Representational similarity analysis revealed greater variability across item repetitions in the representations of recognisable, compared to unrecognisable complex objects. We therefore propose that a richer range of neural representations support VSTM for complex recognisable objects. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Cyclone: java-based querying and computing with Pathway/Genome databases.

    PubMed

    Le Fèvre, François; Smidtas, Serge; Schächter, Vincent

    2007-05-15

    Cyclone aims at facilitating the use of BioCyc, a collection of Pathway/Genome Databases (PGDBs). Cyclone provides a fully extensible Java Object API to analyze and visualize these data. Cyclone can read and write PGDBs, and can write its own data in the CycloneML format. This format is automatically generated from the BioCyc ontology by Cyclone itself, ensuring continued compatibility. Cyclone objects can also be stored in a relational database CycloneDB. Queries can be written in SQL, and in an intuitive and concise object-oriented query language, Hibernate Query Language (HQL). In addition, Cyclone interfaces easily with Java software including the Eclipse IDE for HQL edition, the Jung API for graph algorithms or Cytoscape for graph visualization. Cyclone is freely available under an open source license at: http://sourceforge.net/projects/nemo-cyclone. For download and installation instructions, tutorials, use cases and examples, see http://nemo-cyclone.sourceforge.net.

  13. A visual tracking method based on deep learning without online model updating

    NASA Astrophysics Data System (ADS)

    Tang, Cong; Wang, Yicheng; Feng, Yunsong; Zheng, Chao; Jin, Wei

    2018-02-01

    The paper proposes a visual tracking method based on deep learning without online model updating. Given the advantages of deep learning in feature representation, the deep model SSD (Single Shot Multibox Detector) is used as the object extractor in the tracking model. Simultaneously, the color histogram feature and HOG (Histogram of Oriented Gradient) feature are combined to select the tracking object. In the process of tracking, a multi-scale object searching map is built to improve the detection performance of the deep detection model and the tracking efficiency. In experiments on eight tracking video sequences from the baseline dataset, compared with six state-of-the-art methods, the proposed method is more robust to challenging tracking factors such as deformation, scale variation, rotation variation, illumination variation, and background clutter; moreover, its overall performance is better than that of the six other tracking methods.
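
    The feature-combination step can be illustrated with a short sketch using scikit-image: a colour histogram concatenated with a HOG descriptor to score candidate windows against the tracked object. The similarity measure and parameter values are assumptions, not the paper's settings.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog

def describe(window_rgb, bins=16):
    """Concatenate a normalized colour histogram with a HOG descriptor."""
    hist, _ = np.histogram(window_rgb, bins=bins, range=(0.0, 1.0))
    hist = hist / (hist.sum() + 1e-9)
    hog_vec = hog(rgb2gray(window_rgb), orientations=9,
                  pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return np.concatenate([hist, hog_vec])

def similarity(desc_a, desc_b):
    """Cosine similarity between two window descriptors."""
    return float(desc_a @ desc_b /
                 (np.linalg.norm(desc_a) * np.linalg.norm(desc_b) + 1e-9))

template = np.random.rand(64, 64, 3)       # stand-in for the tracked object window
candidate = np.random.rand(64, 64, 3)      # stand-in for a detector proposal
print(similarity(describe(template), describe(candidate)))
```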

  14. Visualization and manipulating the image of a formal data structure (FDS)-based database

    NASA Astrophysics Data System (ADS)

    Verdiesen, Franc; de Hoop, Sylvia; Molenaar, Martien

    1994-08-01

    A vector map is a terrain representation with a vector-structured geometry. Molenaar formulated an object-oriented formal data structure (FDS) for 3D single-valued vector maps. This FDS is implemented in a database (Oracle). In this study we describe a methodology for visualizing an FDS-based database and manipulating the image. A data set retrieved by querying the database is converted into an import file for a drawing application. One objective of this study is to allow an end-user to alter and add terrain objects in the image. The drawing application creates an export file that is compared with the import file. Differences between these files result in updates to the database, which involve checks on consistency. In this study Autocad is used for visualizing and manipulating the image of the data set. A computer program has been written for the data exchange and conversion between Oracle and Autocad. The data structure of the FDS is compared to the data structure of Autocad, and the FDS data are converted into the corresponding Autocad structure.

  15. Prestimulus alpha-band power biases visual discrimination confidence, but not accuracy.

    PubMed

    Samaha, Jason; Iemi, Luca; Postle, Bradley R

    2017-09-01

    The magnitude of power in the alpha-band (8-13 Hz) of the electroencephalogram (EEG) prior to the onset of a near-threshold visual stimulus predicts performance. Together with other findings, this has been interpreted as evidence that alpha-band dynamics reflect cortical excitability. We reasoned, however, that non-specific changes in excitability would be expected to influence signal and noise in the same way, leaving actual discriminability unchanged. Indeed, using a two-choice orientation discrimination task, we found that discrimination accuracy was unaffected by fluctuations in prestimulus alpha power. Decision confidence, on the other hand, was strongly negatively correlated with prestimulus alpha power. This finding constitutes a clear dissociation between objective and subjective measures of visual perception as a function of prestimulus cortical excitability. This dissociation is predicted by a model where the balance of evidence supporting each choice drives objective performance but only the magnitude of evidence supporting the selected choice drives subjective reports, suggesting that human perceptual confidence can be suboptimal with respect to tracking objective accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Translucent Radiosity: Efficiently Combining Diffuse Inter-Reflection and Subsurface Scattering.

    PubMed

    Sheng, Yu; Shi, Yulong; Wang, Lili; Narasimhan, Srinivasa G

    2014-07-01

    It is hard to efficiently model the light transport in scenes with translucent objects for interactive applications. The inter-reflection between objects and their environments and the subsurface scattering through the materials intertwine to produce visual effects like color bleeding, light glows, and soft shading. Monte-Carlo based approaches have demonstrated impressive results but are computationally expensive, and faster approaches model either only inter-reflection or only subsurface scattering. In this paper, we present a simple analytic model that combines diffuse inter-reflection and isotropic subsurface scattering. Our approach extends the classical work in radiosity by including a subsurface scattering matrix that operates in conjunction with the traditional form factor matrix. This subsurface scattering matrix can be constructed using analytic, measurement-based or simulation-based models and can capture both homogeneous and heterogeneous translucencies. Using a fast iterative solution to radiosity, we demonstrate scene relighting and dynamically varying object translucencies at near interactive rates.
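
    A minimal sketch of the iterative radiosity solve this approach builds on, B = E + diag(rho) F B, with an extra matrix standing in for the subsurface scattering term; the way S enters the update here is an assumed illustration, not the paper's exact formulation.

```python
import numpy as np

def solve_radiosity(E, rho, F, S=None, iterations=50):
    """Jacobi-style iteration for patch radiosities B = E + (diag(rho) F + S) B."""
    transport = np.diag(rho) @ F
    if S is not None:
        transport = transport + S          # assumed: scattering adds a second transport path
    B = E.copy()
    for _ in range(iterations):
        B = E + transport @ B
    return B

n = 4
E = np.array([1.0, 0.0, 0.0, 0.0])              # one emitting patch
rho = np.full(n, 0.6)                            # diffuse reflectances
F = np.full((n, n), 0.2) - 0.2 * np.eye(n)       # toy form factors (row sums < 1)
S = 0.05 * (np.ones((n, n)) - np.eye(n))         # toy subsurface coupling between patches
print(solve_radiosity(E, rho, F, S))
```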

  17. Orientation-selective Responses in the Mouse Lateral Geniculate Nucleus

    PubMed Central

    Zhao, Xinyu; Chen, Hui; Liu, Xiaorong

    2013-01-01

    The dorsal lateral geniculate nucleus (dLGN) receives visual information from the retina and transmits it to the cortex. In this study, we made extracellular recordings in the dLGN of both anesthetized and awake mice, and found that a surprisingly high proportion of cells were selective for stimulus orientation. The orientation selectivity of dLGN cells was unchanged after silencing the visual cortex pharmacologically, indicating that it is not due to cortical feedback. The orientation tuning of some dLGN cells correlated with their elongated receptive fields, while in others orientation selectivity was observed despite the fact that their receptive fields were circular, suggesting that their retinal input might already be orientation selective. Consistently, we revealed orientation/axis-selective ganglion cells in the mouse retina using multielectrode arrays in an in vitro preparation. Furthermore, the orientation tuning of dLGN cells was largely maintained at different stimulus contrasts, which could be sufficiently explained by a simple linear feedforward model. We also compared the degree of orientation selectivity in different visual structures under the same recording condition. Compared with the dLGN, orientation selectivity is greatly improved in the visual cortex, but is similar in the superior colliculus, another major retinal target. Together, our results demonstrate prominent orientation selectivity in the mouse dLGN, which may potentially contribute to visual processing in the cortex. PMID:23904611

  18. Object selection costs in visual working memory: A diffusion model analysis of the focus of attention.

    PubMed

    Sewell, David K; Lilburn, Simon D; Smith, Philip L

    2016-11-01

    A central question in working memory research concerns the degree to which information in working memory is accessible to other cognitive processes (e.g., decision-making). Theories assuming that the focus of attention can only store a single object at a time require the focus to orient to a target representation before further processing can occur. The need to orient the focus of attention implies that single-object accounts typically predict response time costs associated with object selection even when working memory is not full (i.e., memory load is less than 4 items). For other theories that assume storage of multiple items in the focus of attention, predictions depend on specific assumptions about the way resources are allocated among items held in the focus, and how this affects the time course of retrieval of items from the focus. These broad theoretical accounts have been difficult to distinguish because conventional analyses fail to separate components of empirical response times related to decision-making from components related to selection and retrieval processes associated with accessing information in working memory. To better distinguish these response time components from one another, we analyze data from a probed visual working memory task using extensions of the diffusion decision model. Analysis of model parameters revealed that increases in memory load resulted in (a) reductions in the quality of the underlying stimulus representations in a manner consistent with a sample size model of visual working memory capacity and (b) systematic increases in the time needed to selectively access a probed representation in memory. The results are consistent with single-object theories of the focus of attention. The results are also consistent with a subset of theories that assume a multiobject focus of attention in which resource allocation diminishes both the quality and accessibility of the underlying representations. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
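
    A minimal simulation sketch of the diffusion decision model used in this kind of analysis, where drift rate stands for the quality of the stored representation and non-decision time for selection and retrieval of the probed item; all parameter values are illustrative, not estimates from the paper.

```python
import numpy as np

def simulate_ddm(drift, threshold=1.0, non_decision=0.3, noise=1.0,
                 dt=0.001, max_time=5.0, rng=None):
    """Return (response_time, correct) for one simulated two-choice trial."""
    rng = rng or np.random.default_rng()
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold and t < max_time:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return t + non_decision, evidence > 0

rng = np.random.default_rng(1)
# Higher memory load: weaker representation (lower drift) and slower access
# to the probed item (longer non-decision time) -- assumed parameter changes.
for load, drift, ndt in [(1, 2.0, 0.30), (4, 1.0, 0.45)]:
    trials = [simulate_ddm(drift, non_decision=ndt, rng=rng) for _ in range(500)]
    rts = [rt for rt, _ in trials]
    acc = np.mean([correct for _, correct in trials])
    print(f"load {load}: mean RT {np.mean(rts):.3f} s, accuracy {acc:.2f}")
```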

  19. Neuroscience Investigations: An Overview of Studies Conducted

    NASA Technical Reports Server (NTRS)

    Reschke, Millard F.

    1999-01-01

    The neural processes that mediate human spatial orientation and adaptive changes occurring in response to the sensory rearrangement encountered during orbital flight are primarily studied through second and third order responses. In the Extended Duration Orbiter Medical Project (EDOMP) neuroscience investigations, the following were measured: (1) eye movements during acquisition of either static or moving visual targets, (2) postural and locomotor responses provoked by unexpected movement of the support surface, changes in the interaction of visual, proprioceptive, and vestibular information, changes in the major postural muscles via descending pathways, or changes in locomotor pathways, and (3) verbal reports of perceived self-orientation and self-motion which enhance and complement conclusions drawn from the analysis of oculomotor, postural, and locomotor responses. In spaceflight operations, spatial orientation can be defined as situational awareness, where crew member perception of attitude, position, or motion of the spacecraft or other objects in three-dimensional space, including orientation of one's own body, is congruent with actual physical events. Perception of spatial orientation is determined by integrating information from several sensory modalities. This involves higher levels of processing within the central nervous system that control eye movements, locomotion, and stable posture. Spaceflight operational problems occur when responses to the incorrectly perceived spatial orientation are compensatory in nature. Neuroscience investigations were conducted in conjunction with U. S. Space Shuttle flights to evaluate possible changes in the ability of an astronaut to land the Shuttle or effectively perform an emergency post-landing egress following microgravity adaptation during space flights of variable length. While the results of various sensory motor and spatial orientation tests could have an impact on future space flights, our knowledge of sensorimotor adaptation to spaceflight is limited, and the future application of effective countermeasures depends, in large part, on the results from appropriate neuroscience investigations. The objective of the neuroscience investigations was therefore to characterize sensorimotor changes that could have a negative effect on mission success. The Neuroscience Laboratory, Johnson Space Center (JSC), implemented three integrated Detailed Supplementary Objectives (DSOs) designed to investigate spatial orientation and the associated compensatory responses as a part of the EDOMP. The four primary goals were (1) to establish a normative database of vestibular and associated sensory changes in response to spaceflight, (2) to determine the underlying etiology of neurovestibular and sensory motor changes associated with exposure to microgravity and the subsequent return to Earth, (3) to provide immediate feedback to spaceflight crews regarding potential countermeasures that could improve performance and safety during and after flight, and (4) to take under consideration appropriate designs for preflight, in-flight, and postflight countermeasures that could be implemented for future flights.

  20. Enhanced HMAX model with feedforward feature learning for multiclass categorization.

    PubMed

    Li, Yinlin; Wu, Wei; Zhang, Bo; Li, Fengfu

    2015-01-01

    In recent years, interdisciplinary research between neuroscience and computer vision has promoted development in both fields. Many biologically inspired visual models have been proposed, and among them, the Hierarchical Max-pooling model (HMAX) is a feedforward model mimicking the structures and functions of the V1 to posterior inferotemporal (PIT) layers of the primate visual cortex, which can generate a series of position- and scale-invariant features. However, it could be improved with attention modulation and memory processing, which are two important properties of the primate visual cortex. Thus, in this paper, based on recent biological research on the primate visual cortex, we mimic the first 100-150 ms of visual cognition to enhance the HMAX model, focusing mainly on the unsupervised feedforward feature learning process. The main modifications are as follows: (1) To mimic the attention modulation mechanism of the V1 layer, a bottom-up saliency map is computed in the S1 layer of the HMAX model, which can support the initial feature extraction for memory processing; (2) To mimic the learning, clustering, and short-term to long-term memory conversion abilities of V2 and IT, an unsupervised iterative clustering method is used to learn clusters from multiscale middle-level patches, which are taken as long-term memory; (3) Inspired by the multiple feature encoding mode of the primate visual cortex, information including color, orientation, and spatial position is encoded progressively in different layers of the HMAX model. By adding a softmax layer at the top of the model, multiclass categorization experiments can be conducted, and the results on Caltech101 show that the enhanced model with a smaller memory size exhibits higher accuracy than the original HMAX model, and can also achieve better accuracy than other unsupervised feature learning methods in the multiclass categorization task.
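
    The middle-level dictionary learning in modification (2) can be approximated with a plain iterative clustering pass over image patches sampled at several scales, in the spirit of k-means. The sketch below only illustrates that step: patch sizes, cluster counts, and the random "images" are invented, and real inputs would be S1/C1 feature maps rather than raw pixels.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def kmeans(data, k, n_iter=20):
        """Plain iterative clustering (k-means) standing in for the 'learning + clustering' step."""
        centers = data[rng.choice(len(data), k, replace=False)]
        for _ in range(n_iter):
            # assign every patch to its nearest prototype
            d = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(1)
            # update prototypes (skip clusters that lost all members)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = data[labels == j].mean(0)
        return centers

    def learn_prototypes(images, sizes=(8, 12, 16), per_image=50, k=32):
        """Build a multiscale 'long-term memory' of middle-level patch prototypes."""
        memory = {}
        for s in sizes:
            patches = []
            for img in images:
                for _ in range(per_image):
                    r = rng.integers(0, img.shape[0] - s)
                    c = rng.integers(0, img.shape[1] - s)
                    patches.append(img[r:r + s, c:c + s].ravel())
            memory[s] = kmeans(np.asarray(patches, dtype=float), k)
        return memory

    # Toy usage with random "images"; a real pipeline would feed S1/C1 responses.
    images = [rng.random((64, 64)) for _ in range(10)]
    prototypes = learn_prototypes(images)
    print({s: p.shape for s, p in prototypes.items()})
    ```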

  1. The TRIDEC Virtual Tsunami Atlas - customized value-added simulation data products for Tsunami Early Warning generated on compute clusters

    NASA Astrophysics Data System (ADS)

    Löwe, P.; Hammitzsch, M.; Babeyko, A.; Wächter, J.

    2012-04-01

    The development of new Tsunami Early Warning Systems (TEWS) requires the modelling of the spatio-temporal spreading of tsunami waves, both for recorded past events and for hypothetical future cases. The model results are maintained in digital repositories for use in TEWS command and control units for situation assessment once a real tsunami occurs. Thus the simulation results must be absolutely trustworthy, in the sense that the quality of these datasets is assured. This is a prerequisite, as solid decision making during a crisis event and the dissemination of dependable warning messages to communities at risk will be based on them. This requires data format validity, but even more the integrity and information value of the content, a value-added product derived from raw tsunami model output. Quality checking of simulation result products can be done in multiple ways, yet the visual verification of both temporal and spatial spreading characteristics for each simulation remains important. The eye of the human observer remains an unmatched tool for the detection of irregularities. This requires the availability of convenient, human-accessible mappings of each simulation. The improvement of tsunami models necessitates changes in many variables, including simulation end-parameters. Whenever new, improved iterations of the general models or underlying spatial data are evaluated, hundreds to thousands of tsunami model results must be generated for each model iteration, each one having distinct initial parameter settings. The use of a Compute Cluster Environment (CCE) of sufficient size allows the automated generation of all tsunami results for a model iteration in a short time. This is a significant improvement over sequential processing on dedicated desktop machines or servers. It allows for accelerated visual quality-checking iterations, which in turn feed back into the overall model improvement. An approach to set up and utilize the CCE has been implemented by the project Collaborative, Complex, and Critical Decision Processes in Evolving Crises (TRIDEC), funded under the European Union's FP7. TRIDEC focuses on real-time intelligent information management in Earth management. The addressed challenges include the design and implementation of a robust and scalable service infrastructure supporting the integration and utilisation of existing resources with accelerated generation of large volumes of data. These include sensor systems, geo-information repositories, simulations and data fusion tools. Additionally, TRIDEC adopts enhancements of Service Oriented Architecture (SOA) principles in terms of Event Driven Architecture (EDA) design. As a next step, the implemented CCE services for generating derived and customized simulation products are foreseen to be provided via an EDA service, allowing on-demand processing for specific threat parameters and accommodating model improvements.
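
    The batch pattern described here — one model run per parameter set, fanned out over many workers and the resulting map products collected for visual checking — can be sketched independently of TRIDEC's actual infrastructure. The scenario fields, the run_tsunami_model placeholder, and the output paths below are assumptions made only for illustration.

    ```python
    import multiprocessing as mp
    from pathlib import Path

    def run_tsunami_model(scenario):
        """Placeholder for one model run; a real worker would invoke the
        simulation code and post-process its raw output into map products."""
        out = Path("products") / f"scenario_{scenario['id']:05d}.png"
        # ... invoke the model with scenario['epicentre'], scenario['magnitude'] ...
        return out

    if __name__ == "__main__":
        # One invented parameter set per run; a real catalogue would come from the repository.
        scenarios = [{"id": i, "epicentre": (95.0 + 0.1 * i, 3.0), "magnitude": 8.5}
                     for i in range(1000)]
        Path("products").mkdir(exist_ok=True)
        with mp.Pool() as pool:                      # one worker per available core
            products = pool.map(run_tsunami_model, scenarios)
        print(f"generated {len(products)} products for visual quality checking")
    ```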

  2. The SOFIA Mission Control System Software

    NASA Astrophysics Data System (ADS)

    Heiligman, G. M.; Brock, D. R.; Culp, S. D.; Decker, P. H.; Estrada, J. C.; Graybeal, J. B.; Nichols, D. M.; Paluzzi, P. R.; Sharer, P. J.; Pampell, R. J.; Papke, B. L.; Salovich, R. D.; Schlappe, S. B.; Spriestersbach, K. K.; Webb, G. L.

    1999-05-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) will be delivered with a computerized mission control system (MCS). The MCS communicates with the aircraft's flight management system and coordinates the operations of the telescope assembly, mission-specific subsystems, and the science instruments. The software for the MCS must be reliable and flexible. It must be easily usable by many teams of observers with widely differing needs, and it must support non-intrusive access for education and public outreach. The technology must be appropriate for SOFIA's 20-year lifetime. The MCS software development process is an object-oriented, use case driven approach. The process is iterative: delivery will be phased over four "builds"; each build will be the result of many iterations; and each iteration will include analysis, design, implementation, and test activities. The team is geographically distributed, coordinating its work via Web pages, teleconferences, T.120 remote collaboration, and CVS (for Internet-enabled configuration management). The MCS software architectural design is derived in part from other observatories' experience. Some important features of the MCS are: * distributed computing over several UNIX and VxWorks computers * fast throughput of time-critical data * use of third-party components, such as the Adaptive Communications Environment (ACE) and the Common Object Request Broker Architecture (CORBA) * extensive configurability via stored, editable configuration files * use of several computer languages so developers have "the right tool for the job". C++, Java, scripting languages, Interactive Data Language (from Research Systems, Int'l.), XML, and HTML will all be used in the final deliverables. This paper reports on work in progress, with the final product scheduled for delivery in 2001. This work was performed for Universities Space Research Association for NASA under contract NAS2-97001.

  3. Tomography by iterative convolution - Empirical study and application to interferometry

    NASA Technical Reports Server (NTRS)

    Vest, C. M.; Prikryl, I.

    1984-01-01

    An algorithm for computer tomography has been developed that is applicable to reconstruction from data having incomplete projections because an opaque object blocks some of the probing radiation as it passes through the object field. The algorithm is based on iteration between the object domain and the projection (Radon transform) domain. Reconstructions are computed during each iteration by the well-known convolution method. Although it is demonstrated that this algorithm does not converge, an empirically justified criterion for terminating the iteration when the most accurate estimate has been computed is presented. The algorithm has been studied by using it to reconstruct several different object fields with several different opaque regions. It also has been used to reconstruct aerodynamic density fields from interferometric data recorded in wind tunnel tests.
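
    The domain-alternating scheme can be illustrated with standard Radon-transform tools: reconstruct by filtered back-projection (the convolution method), re-project the estimate, keep the measured projection samples and fill only the blocked ones from the re-projection, and repeat for a fixed number of iterations (since, as noted, convergence is not guaranteed). This is a generic sketch of the idea using scikit-image, not the paper's implementation; the phantom, mask, and iteration count are invented.

    ```python
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, rescale

    # Simulated incomplete data: an opaque object shadows part of every projection.
    image = rescale(shepp_logan_phantom(), 0.25)          # small test object field
    theta = np.linspace(0.0, 180.0, 90, endpoint=False)
    sinogram = radon(image, theta=theta)
    mask = np.ones_like(sinogram, dtype=bool)
    mask[30:45, :] = False                                # detector bins blocked by the opaque region

    estimate = iradon(np.where(mask, sinogram, 0.0), theta=theta)
    for _ in range(10):                                   # fixed iteration count, no convergence guarantee
        reprojection = radon(estimate, theta=theta)
        # keep measured samples, fill blocked ones from the current estimate
        completed = np.where(mask, sinogram, reprojection)
        estimate = iradon(completed, theta=theta)         # convolution (filtered back-projection) step

    print("RMS error vs. phantom:", np.sqrt(np.mean((estimate - image) ** 2)))
    ```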

  4. Using an Iterative Fourier Series Approach in Determining Orbital Elements of Detached Visual Binary Stars

    NASA Astrophysics Data System (ADS)

    Tupa, Peter R.; Quirin, S.; DeLeo, G. G.; McCluskey, G. E., Jr.

    2007-12-01

    We present a modified Fourier transform approach to determine the orbital parameters of detached visual binary stars. Originally inspired by Monet (ApJ 234, 275, 1979), this new method utilizes an iterative routine of refining higher order Fourier terms in a manner consistent with Keplerian motion. In most cases, this approach is not sensitive to the starting orbital parameters in the iterative loop. In many cases we have determined orbital elements even with small fragments of orbits and noisy data, although some systems show computational instabilities. The algorithm was constructed using the MAPLE mathematical software code and tested on artificially created orbits and many real binary systems, including Gliese 22 AC, Tau 51, and BU 738. This work was supported at Lehigh University by NSF-REU grant PHY-9820301.
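
    The flavour of such an approach can be conveyed by representing the apparent-orbit coordinates as truncated Fourier series in time and refining the coefficients by least squares as higher-order terms are added. The sketch below is only a generic illustration, not the authors' MAPLE code: the Keplerian-consistency step is omitted, and the simulated observations, period, and noise level are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def design_matrix(t, period, n_harmonics):
        """Columns: 1, cos(k w t), sin(k w t) for k = 1..n_harmonics."""
        w = 2 * np.pi / period
        cols = [np.ones_like(t)]
        for k in range(1, n_harmonics + 1):
            cols += [np.cos(k * w * t), np.sin(k * w * t)]
        return np.column_stack(cols)

    def fit_orbit(t, x, y, period, max_harmonics=4):
        """Iteratively add harmonics, refitting the coefficients at each step."""
        for n in range(1, max_harmonics + 1):
            A = design_matrix(t, period, n)
            cx, *_ = np.linalg.lstsq(A, x, rcond=None)
            cy, *_ = np.linalg.lstsq(A, y, rcond=None)
        return cx, cy, A

    # Invented noisy fragment of an apparent (elliptical) orbit.
    t = np.linspace(0.0, 60.0, 40)                    # years; only part of the orbit observed
    period = 140.0
    E = 2 * np.pi * t / period                        # crude stand-in for the anomaly
    x = 1.0 * np.cos(E) - 0.3 + 0.01 * rng.standard_normal(t.size)
    y = 0.6 * np.sin(E) + 0.01 * rng.standard_normal(t.size)

    cx, cy, A = fit_orbit(t, x, y, period)
    print("residual RMS:", np.sqrt(np.mean((A @ cx - x) ** 2 + (A @ cy - y) ** 2)))
    ```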

  5. Feature-based attentional weighting and spreading in visual working memory

    PubMed Central

    Niklaus, Marcel; Nobre, Anna C.; van Ede, Freek

    2017-01-01

    Attention can be directed at features and feature dimensions to facilitate perception. Here, we investigated whether feature-based attention (FBA) can also dynamically weight feature-specific representations within multi-feature objects held in visual working memory (VWM). Across three experiments, participants retained coloured arrows in working memory and, during the delay, were cued to either the colour or the orientation dimension. We show that directing attention towards a feature dimension (1) improves performance in the cued feature dimension at the expense of the uncued dimension, (2) is more efficient if directed to the same rather than to different dimensions for different objects, and (3) at least for colour, automatically spreads to the colour representation of non-attended objects in VWM. We conclude that FBA also continues to operate on VWM representations (following principles similar to those that govern FBA in the perceptual domain) and challenge the classical view that VWM representations are stored solely as integrated objects. PMID:28233830

  6. Retrieving self-vocalized information: An event-related potential (ERP) study on the effect of retrieval orientation.

    PubMed

    Rosburg, Timm; Johansson, Mikael; Sprondel, Volker; Mecklinger, Axel

    2014-11-18

    Retrieval orientation refers to a pre-retrieval process and conceptualizes the specific form of processing that is applied to a retrieval cue. In the current event-related potential (ERP) study, we sought evidence for an involvement of the auditory cortex when subjects attempt to retrieve vocalized information, and hypothesized that adopting a retrieval orientation would be beneficial for retrieval accuracy. During study, participants saw object words that they subsequently vocalized or visually imagined. At test, participants had to identify object names of one study condition as targets and to reject object names of the second condition together with new items. The target category switched after half of the test trials. Behaviorally, participants responded less accurately and more slowly to targets of the vocalize condition than to targets of the imagine condition. ERPs to new items varied at a single left electrode (T7) between 500 and 800 ms, indicating a moderate retrieval orientation effect in the subject group as a whole. However, whereas the effect was strongly pronounced in participants with high retrieval accuracy, it was absent in participants with low retrieval accuracy. A current source density (CSD) mapping of the retrieval orientation effect indicated a source over left temporal regions. Independently of retrieval accuracy, the ERP retrieval orientation effect was, surprisingly, also modulated by test order. The findings are suggestive of an involvement of the auditory cortex in retrieval attempts of vocalized information and confirm that adopting a retrieval orientation is potentially beneficial for retrieval accuracy. The effects of test order on retrieval-related processes might reflect a stronger focus on the newness of items in the more difficult test condition when participants started with this condition. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Is Posner's "beam" the same as Treisman's "glue"?: On the relation between visual orienting and feature integration theory.

    PubMed

    Briand, K A; Klein, R M

    1987-05-01

    In the present study we investigated whether the visually allocated "beam" studied by Posner and others is the same visual attentional resource that performs the role of feature integration in Treisman's model. Subjects were cued to attend to a certain spatial location by a visual cue, and performance at expected and unexpected stimulus locations was compared. Subjects searched for a target letter (R) with distractor letters that either could give rise to illusory conjunctions (PQ) or could not (PB). Results from three separate experiments showed that orienting attention in response to central cues (endogenous orienting) showed similar effects for both conjunction and feature search. However, when attention was oriented with peripheral visual cues (exogenous orienting), conjunction search showed larger effects of attention than did feature search. It is suggested that the attentional systems that are oriented in response to central and peripheral cues may not be the same and that only the latter performs a role in feature integration. Possibilities for future research are discussed.

  8. Orientation-Selective Retinal Circuits in Vertebrates

    PubMed Central

    Antinucci, Paride; Hindges, Robert

    2018-01-01

    Visual information is already processed in the retina before it is transmitted to higher visual centers in the brain. This includes the extraction of salient features from visual scenes, such as motion directionality or contrast, through neurons belonging to distinct neural circuits. Some retinal neurons are tuned to the orientation of elongated visual stimuli. Such ‘orientation-selective’ neurons are present in the retinae of most, if not all, vertebrate species analyzed to date, with species-specific differences in frequency and degree of tuning. In some cases, orientation-selective neurons have very stereotyped functional and morphological properties suggesting that they represent distinct cell types. In this review, we describe the retinal cell types underlying orientation selectivity found in various vertebrate species, and highlight their commonalities and differences. In addition, we discuss recent studies that revealed the cellular, synaptic and circuit mechanisms at the basis of retinal orientation selectivity. Finally, we outline the significance of these findings in shaping our current understanding of how this fundamental neural computation is implemented in the visual systems of vertebrates. PMID:29467629

  10. Dynamic Pointing Triggers Shifts of Visual Attention in Young Infants

    ERIC Educational Resources Information Center

    Rohlfing, Katharina J.; Longo, Matthew R.; Bertenthal, Bennett I.

    2012-01-01

    Pointing, like eye gaze, is a deictic gesture that can be used to orient the attention of another person towards an object or an event. Previous research suggests that infants first begin to follow a pointing gesture between 10 and 13 months of age. We investigated whether sensitivity to pointing could be seen at younger ages employing a technique…

  11. Visual display aid for orbital maneuvering - Design considerations

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.; Ellis, Stephen R.

    1993-01-01

    This paper describes the development of an interactive proximity operations planning system that allows on-site planning of fuel-efficient multiburn maneuvers in a potential multispacecraft environment. Although this display system most directly assists planning by providing visual feedback to aid visualization of the trajectories and constraints, its most significant features include: (1) the use of an 'inverse dynamics' algorithm that removes control nonlinearities facing the operator, and (2) a trajectory planning technique that separates, through a 'geometric spreadsheet', the normally coupled complex problems of planning orbital maneuvers and allows solution by an iterative sequence of simple independent actions. The visual feedback of trajectory shapes and operational constraints, provided by user-transparent and continuously active background computations, allows the operator to make fast, iterative design changes that rapidly converge to fuel-efficient solutions. The planning tool provides an example of operator-assisted optimization of nonlinear cost functions.

  12. DAKOTA Design Analysis Kit for Optimization and Terascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.

    2010-02-24

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.

  13. A SCILAB Program for Computing Rotating Magnetic Compact Objects

    NASA Astrophysics Data System (ADS)

    Papasotiriou, P. J.; Geroyannis, V. S.

    We implement the so-called ``complex-plane iterative technique'' (CIT) to the computation of classical differentially rotating magnetic white dwarf and neutron star models. The program has been written in SCILAB (© INRIA-ENPC), a matrix-oriented high-level programming language, which can be downloaded free of charge from the site http://www-rocq.inria.fr/scilab. Due to the advanced capabilities of this language, the code is short and understandable. Highlights of the program are: (a) time-saving character, (b) easy use due to the built-in graphics user interface, (c) easy interfacing with Fortran via online dynamic link. We interpret our numerical results in various ways by extensively using the graphics environment of SCILAB.

  14. Three-dimensional visualization system as an aid for facial surgical planning

    NASA Astrophysics Data System (ADS)

    Barre, Sebastien; Fernandez-Maloigne, Christine; Paume, Patricia; Subrenat, Gilles

    2001-05-01

    We present an aid for the treatment of facial deformities. We designed a system for surgical planning and prediction of human facial appearance after maxillo-facial surgery. We study the 3D reconstruction process of the tissues involved in the simulation, starting from CT acquisitions. 3D iso-surface meshes of soft tissues and bone structures are built. A sparse set of still photographs is used to reconstruct a 360-degree texture of the facial surface and increase its visual realism. Reconstructed objects are inserted into object-oriented, portable and scriptable visualization software allowing the practitioner to manipulate and visualize them interactively. Several LOD (level-of-detail) techniques are used to ensure usability. Bone structures are separated and moved by means of cut planes matching orthognathic surgery procedures. We simulate soft tissue deformations by creating a physically based spring model between both tissues. The new static state of the facial model is computed by minimizing the energy of the spring system to achieve equilibrium. This process is optimized by transferring information such as vertex-level participation hints between a warped generic model and the facial mesh.
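
    The soft-tissue step described here — springs linking bone and skin nodes, with the new static face obtained by minimizing total spring energy — can be pictured with a small relaxation loop. The node layout, stiffness, rest lengths, and imposed bone displacement below are arbitrary toy values, not the paper's mesh or parameters.

    ```python
    import numpy as np

    # Toy spring network: a line of "skin" nodes attached to two "bone" anchors.
    # Moving an anchor (the surgical displacement) and relaxing the springs gives
    # the new static configuration; all values are arbitrary.
    positions = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0], [4.0, 0.0]])
    springs = [(0, 1), (1, 2), (2, 3), (3, 4)]        # (i, j) node pairs
    rest = {s: 1.0 for s in springs}                  # rest lengths
    stiffness = 10.0
    fixed = {0, 4}                                    # bone-attached nodes

    positions[4] += np.array([0.5, 0.8])              # simulated bone displacement

    def spring_gradient(pos):
        """Gradient of total spring energy E = sum 0.5*k*(|xi-xj| - L)^2."""
        grad = np.zeros_like(pos)
        for (i, j) in springs:
            d = pos[i] - pos[j]
            length = np.linalg.norm(d)
            f = stiffness * (length - rest[(i, j)]) * d / length
            grad[i] += f
            grad[j] -= f
        return grad

    step = 0.01
    for _ in range(2000):                             # minimize energy by gradient descent
        g = spring_gradient(positions)
        g[list(fixed)] = 0.0                          # anchors do not move
        positions -= step * g

    print(np.round(positions, 3))                     # relaxed soft-tissue node positions
    ```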

  15. BiSet: Semantic Edge Bundling with Biclusters for Sensemaking.

    PubMed

    Sun, Maoyuan; Mi, Peng; North, Chris; Ramakrishnan, Naren

    2016-01-01

    Identifying coordinated relationships is an important task in data analytics. For example, an intelligence analyst might want to discover three suspicious people who all visited the same four cities. Existing techniques that display individual relationships, such as between lists of entities, require repetitious manual selection and significant mental aggregation in cluttered visualizations to find coordinated relationships. In this paper, we present BiSet, a visual analytics technique to support interactive exploration of coordinated relationships. In BiSet, we model coordinated relationships as biclusters and algorithmically mine them from a dataset. Then, we visualize the biclusters in context as bundled edges between sets of related entities. Thus, bundles enable analysts to infer task-oriented semantic insights about potentially coordinated activities. We make bundles first-class objects and add a new layer, "in-between", to contain these bundle objects. Based on this, bundles serve to organize entities represented in lists and visually reveal their membership. Users can interact with edge bundles to organize related entities, and vice versa, for sensemaking purposes. With a usage scenario, we demonstrate how BiSet supports the exploration of coordinated relationships in text analytics.
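
    The bicluster-mining step can be sketched directly from the motivating example (a group of people who all visited the same cities): enumerate groups of entities on one side and intersect their relation sets. The brute-force sketch below only shows what a bicluster is; it is not BiSet's mining algorithm, and the toy data are invented.

    ```python
    from itertools import combinations

    # Toy relation: person -> set of visited cities (invented data).
    visits = {
        "ana":   {"paris", "rome", "oslo", "cairo"},
        "boris": {"paris", "rome", "oslo", "cairo", "lima"},
        "chen":  {"paris", "rome", "oslo", "cairo"},
        "dana":  {"paris", "lima"},
    }

    def biclusters(relation, min_rows=3, min_cols=4):
        """Enumerate biclusters: groups of >= min_rows entities that all share
        >= min_cols related items (brute force, fine for small toy data)."""
        found = []
        people = list(relation)
        for r in range(min_rows, len(people) + 1):
            for group in combinations(people, r):
                shared = set.intersection(*(relation[p] for p in group))
                if len(shared) >= min_cols:
                    found.append((set(group), shared))
        return found

    for rows, cols in biclusters(visits):
        print(sorted(rows), "->", sorted(cols))
    # e.g. ['ana', 'boris', 'chen'] -> ['cairo', 'oslo', 'paris', 'rome']
    ```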

  16. Interactions between motion and form processing in the human visual system.

    PubMed

    Mather, George; Pavan, Andrea; Bellacosa Marotti, Rosilari; Campana, Gianluca; Casco, Clara

    2013-01-01

    The predominant view of motion and form processing in the human visual system assumes that these two attributes are handled by separate and independent modules. Motion processing involves filtering by direction-selective sensors, followed by integration to solve the aperture problem. Form processing involves filtering by orientation-selective and size-selective receptive fields, followed by integration to encode object shape. It has long been known that motion signals can influence form processing in the well-known Gestalt principle of common fate; texture elements which share a common motion property are grouped into a single contour or texture region. However, recent research in psychophysics and neuroscience indicates that the influence of form signals on motion processing is more extensive than previously thought. First, the salience and apparent direction of moving lines depends on how the local orientation and direction of motion combine to match the receptive field properties of motion-selective neurons. Second, orientation signals generated by "motion-streaks" influence motion processing; motion sensitivity, apparent direction and adaptation are affected by simultaneously present orientation signals. Third, form signals generated by human body shape influence biological motion processing, as revealed by studies using point-light motion stimuli. Thus, form-motion integration seems to occur at several different levels of cortical processing, from V1 to STS.

  18. Visual- and Vestibular-Autonomic Influence on Short-Term Cardiovascular Regulatory Mechanisms

    NASA Technical Reports Server (NTRS)

    Mullen, Thomas J.; Ramsdell, Craig D.

    1999-01-01

    This synergy project was a one-year effort conducted cooperatively by members of the NSBRI Cardiovascular Alterations and Neurovestibular Adaptation Teams in collaboration with NASA Johnson Space Center (JSC) colleagues. The objective of this study was to evaluate visual autonomic interactions on short-term cardiovascular regulatory mechanisms. Based on established visual-vestibular and vestibular-autonomic shared neural pathways, we hypothesized that visually induced changes in orientation will trigger autonomic cardiovascular reflexes. A second objective was to compare baroreflex changes during postural changes as measured with the new Cardiovascular System Identification (CSI) technique with those measured using a neck barocuff. While the neck barocuff stimulates only the carotid baroreceptors, CSI provides a measure of overall baroreflex responsiveness. This study involved a repeated measures design with 16 healthy human subjects (8 M, 8 F) to examine cardiovascular regulatory responses during actual and virtual head-upright tilts. Baroreflex sensitivity was first evaluated with subjects in supine and upright positions during actual tilt-table testing using both neck barocuff and CSI methods. The responses to actual tilts during this first session were then compared to responses during visually induced tilt and/or rotation obtained during a second session.

  19. An object-oriented watershed management tool (QnD-VFS) to engage stakeholders in targeted implementation of filter strips in an arid surface irrigation area

    NASA Astrophysics Data System (ADS)

    Campo, M. A.; Perez-Ovilla, O.; Munoz-Carpena, R.; Kiker, G.; Ullman, J. L.

    2012-12-01

    Agricultural nonpoint source pollution is responsible for the majority of the 1,224 different waterbodies in Washington that fail to meet designated water use criteria. Although various best management practices (BMPs) are effective in mitigating agricultural pollutants, BMP placement is often haphazard and fails to address specific high-risk locations. Limited financial resources necessitate optimization of conservation efforts to meet water quality goals. Thus, there is a critical need to develop decision-making tools that target BMP implementation in order to maximize water quality protection. In addition to field parameters, it is essential to incorporate economic and social determinants in the decision-making process to encourage producer involvement. Decision-making tools that identify strategic pollution sources and integrate socio-economic factors will lead to more cost-effective water quality improvement, as well as encourage producer participation by incorporating real-world limitations. Therefore, this study examines vegetative filter strip use under different scenarios as a BMP to mitigate sediment and nutrients in the highly irrigated Yakima River Basin of central Washington. We developed QnD-VFS to integrate and visualize alternative, spatially explicit water management strategies and their economic impacts. The QnD™ system was created as a decision education tool that incorporates management, economic, and socio-political issues in a user-friendly scenario framework. QnD™, which incorporates elements of Multi-Criteria Decision Analysis (MCDA) and risk assessment, is written in object-oriented Java and can be deployed as a stand-alone program or a web-accessed tool. The model performs Euler numerical integration of various rate transformation and mass-balance transfer equations. The novelty of this object-oriented approach is that these differential equations are detailed in modular XML format for instantiation within the Java code. This design allows many levels of complexity to be quickly designed and rendered in QnD™ without time-consuming additions of new Java code. Thus, temporal and spatial scales used in the equations become part of model development and iteration. A salient aspect is that QnD™ links spatial components within GIS (ArcInfo Shape) files to the abiotic (e.g., climate), biotic and chemical/contaminant interactions. QnD-VFS integrates environmental, management and socio-economic/cultural factors identified through stakeholder input. Several scenarios have been studied; one of the main results shows that changing water management (improved irrigation) is equivalent to changing the length of the vegetative filter strips, with low economic impact for farmers. Concurrently, these interactive tools allow resource managers to identify economic and social determinants that may impede conservation efforts.
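
    The numerical core mentioned here — Euler integration of rate-transformation and mass-balance equations whose terms are declared outside the compiled code — can be pictured with a small sketch. The sediment mass-balance equation, parameter names, and values below are invented placeholders and do not reflect QnD's XML schema or the study's calibration.

    ```python
    import numpy as np

    # Parameters that a QnD-style tool would read from external (e.g. XML) model
    # definitions; the names and values here are invented for illustration.
    params = {
        "inflow_kg_per_day": 12.0,       # sediment load entering the filter strip
        "removal_rate_per_day": 0.35,    # first-order trapping coefficient
        "initial_store_kg": 0.0,
        "dt_days": 1.0,
        "n_days": 120,
    }

    def euler_mass_balance(p):
        """Forward-Euler integration of dS/dt = inflow - k * S."""
        store = p["initial_store_kg"]
        history = [store]
        for _ in range(p["n_days"]):
            dstore = p["inflow_kg_per_day"] - p["removal_rate_per_day"] * store
            store += p["dt_days"] * dstore
            history.append(store)
        return np.array(history)

    trajectory = euler_mass_balance(params)
    print(f"steady-state store ≈ {trajectory[-1]:.1f} kg "
          f"(analytic: {params['inflow_kg_per_day'] / params['removal_rate_per_day']:.1f} kg)")
    ```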

  20. Agile Task Tracking Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duke, Roger T.; Crump, Thomas Vu

    The work was created to provide a tool for the purpose of improving the management of tasks associated with Agile projects. Agile projects are typically completed in an iterative manner with many short duration tasks being performed as part of iterations. These iterations are generally referred to as sprints. The objective of this work is to create a single tool that enables sprint teams to manage all of their tasks in multiple sprints and automatically produce all standard sprint performance charts with minimum effort. The format of the printed work is designed to mimic a standard Kanban board. The work is developed as a single Excel file with worksheets capable of managing up to five concurrent sprints and up to one hundred tasks. It also includes a summary worksheet providing performance information from all active sprints. There are many commercial project management systems typically designed with features desired by larger organizations with many resources managing multiple programs and projects. The audience for this work is the small organizations and Agile project teams desiring an inexpensive, simple, user-friendly, task management tool. This work uses standard readily available software, Excel, requiring minimum data entry and automatically creating summary charts and performance data. It is formatted to print out and resemble standard flip charts and provide the visuals associated with this type of work.

  1. Vertical visual features have a strong influence on cuttlefish camouflage.

    PubMed

    Ulmer, K M; Buresch, K C; Kossodo, M M; Mäthger, L M; Siemann, L A; Hanlon, R T

    2013-04-01

    Cuttlefish and other cephalopods use visual cues from their surroundings to adaptively change their body pattern for camouflage. Numerous previous experiments have demonstrated the influence of two-dimensional (2D) substrates (e.g., sand and gravel habitats) on camouflage, yet many marine habitats have varied three-dimensional (3D) structures among which cuttlefish camouflage from predators, including benthic predators that view cuttlefish horizontally against such 3D backgrounds. We conducted laboratory experiments, using Sepia officinalis, to test the relative influence of horizontal versus vertical visual cues on cuttlefish camouflage: 2D patterns on benthic substrates were tested versus 2D wall patterns and 3D objects with patterns. Specifically, we investigated the influence of (i) quantity and (ii) placement of high-contrast elements on a 3D object or a 2D wall, as well as (iii) the diameter and (iv) number of 3D objects with high-contrast elements on cuttlefish body pattern expression. Additionally, we tested the influence of high-contrast visual stimuli covering the entire 2D benthic substrate versus the entire 2D wall. In all experiments, visual cues presented in the vertical plane evoked the strongest body pattern response in cuttlefish. These experiments support field observations that, in some marine habitats, cuttlefish will respond to vertically oriented background features even when the preponderance of visual information in their field of view seems to be from the 2D surrounding substrate. Such choices highlight the selective decision-making that occurs in cephalopods with their adaptive camouflage capability.

  2. Role of feedforward geniculate inputs in the generation of orientation selectivity in the cat's primary visual cortex

    PubMed Central

    Viswanathan, Sivaram; Jayakumar, Jaikishan; Vidyasagar, Trichur R

    2011-01-01

    Neurones of the mammalian primary visual cortex have the remarkable property of being selective for the orientation of visual contours. It has been controversial whether the selectivity arises from intracortical mechanisms, from the pattern of afferent connectivity from lateral geniculate nucleus (LGN) to cortical cells or from the sharpening of a bias that is already present in the responses of many geniculate cells. To investigate this, we employed a variation of an electrical stimulation protocol in the LGN that has been claimed to suppress intracortical inputs and isolate the raw geniculocortical input to a striate cortical cell. Such stimulation led to a sharpening of the orientation sensitivity of geniculate cells themselves and some broadening of cortical orientation selectivity. These findings are consistent with the idea that non-specific inhibition of the signals from LGN cells which exhibit an orientation bias can generate the sharp orientation selectivity of primary visual cortical cells. This obviates the need for an excitatory convergence from geniculate cells whose receptive fields are arranged along a row in visual space as in the classical model and provides a framework for orientation sensitivity originating in the retina and getting sharpened through inhibition at higher levels of the visual pathway. PMID:21486788

  3. On the three-quarter view advantage of familiar object recognition.

    PubMed

    Nonose, Kohei; Niimi, Ryosuke; Yokosawa, Kazuhiko

    2016-11-01

    A three-quarter view, i.e., an oblique view, of familiar objects often leads to a higher subjective goodness rating when compared with other orientations. What is the source of the high goodness for oblique views? First, we confirmed that object recognition performance was also best for oblique views around 30° view, even when the foreshortening disadvantage of front- and side-views was minimized (Experiments 1 and 2). In Experiment 3, we measured subjective ratings of view goodness and two possible determinants of view goodness: familiarity of view, and subjective impression of three-dimensionality. Three-dimensionality was measured as the subjective saliency of visual depth information. The oblique views were rated best, most familiar, and as approximating greatest three-dimensionality on average; however, the cluster analyses showed that the "best" orientation systematically varied among objects. We found three clusters of objects: front-preferred objects, oblique-preferred objects, and side-preferred objects. Interestingly, recognition performance and the three-dimensionality rating were higher for oblique views irrespective of the clusters. It appears that recognition efficiency is not the major source of the three-quarter view advantage. There are multiple determinants and variability among objects. This study suggests that the classical idea that a canonical view has a unique advantage in object perception requires further discussion.

  4. A component-based software environment for visualizing large macromolecular assemblies.

    PubMed

    Sanner, Michel F

    2005-03-01

    The interactive visualization of large biological assemblies poses a number of challenging problems, including the development of multiresolution representations and new interaction methods for navigating and analyzing these complex systems. An additional challenge is the development of flexible software environments that will facilitate the integration and interoperation of computational models and techniques from a wide variety of scientific disciplines. In this paper, we present a component-based software development strategy centered on the high-level, object-oriented, interpretive programming language: Python. We present several software components, discuss their integration, and describe some of their features that are relevant to the visualization of large molecular assemblies. Several examples are given to illustrate the interoperation of these software components and the integration of structural data from a variety of experimental sources. These examples illustrate how combining visual programming with component-based software development facilitates the rapid prototyping of novel visualization tools.

  5. Perceived visual speed constrained by image segmentation

    NASA Technical Reports Server (NTRS)

    Verghese, P.; Stone, L. S.

    1996-01-01

    Little is known about how or where the visual system parses the visual scene into objects or surfaces. However, it is generally assumed that the segmentation and grouping of pieces of the image into discrete entities is due to 'later' processing stages, after the 'early' processing of the visual image by local mechanisms selective for attributes such as colour, orientation, depth, and motion. Speed perception is also thought to be mediated by early mechanisms tuned for speed. Here we show that manipulating the way in which an image is parsed changes the way in which local speed information is processed. Manipulations that cause multiple stimuli to appear as parts of a single patch degrade speed discrimination, whereas manipulations that perceptually divide a single large stimulus into parts improve discrimination. These results indicate that processes as early as speed perception may be constrained by the parsing of the visual image into discrete entities.

  6. [The development of the skin-optical perception of color and images in blind schoolchildren on an "internal visual screen"].

    PubMed

    Mizrakhi, V M; Protsiuk, R G

    2000-03-01

    In profound visual impairment, the perception of colour and of seen objects is absent, and the person is unable to orient himself in space. The sensory sensations of colour that were uncovered allowed them to be used in training the blind to recognize the colour of paper, fabric, etc. Further study of people who have become blind will, we believe, help to identify eligible individuals and relevant approaches for educating the blind, fostering the development of the trainee's ability to recognize images on the "inner visual screen".

  7. Automatic motor activation in the executive control of action

    PubMed Central

    McBride, Jennifer; Boy, Frédéric; Husain, Masud; Sumner, Petroc

    2012-01-01

    Although executive control and automatic behavior have often been considered separate and distinct processes, there is strong emerging and convergent evidence that they may in fact be intricately interlinked. In this review, we draw together evidence showing that visual stimuli cause automatic and unconscious motor activation, and how this in turn has implications for executive control. We discuss object affordances, alien limb syndrome, the visual grasp reflex, subliminal priming, and subliminal triggering of attentional orienting. Consideration of these findings suggests automatic motor activation might form an intrinsic part of all behavior, rather than being categorically different from voluntary actions. PMID:22536177

  8. The invisible cues that guide king penguin chicks home: use of magnetic and acoustic cues during orientation and short-range navigation.

    PubMed

    Nesterova, Anna P; Chiffard, Jules; Couchoux, Charline; Bonadonna, Francesco

    2013-04-15

    King penguins (Aptenodytes patagonicus) live in large and densely populated colonies, where navigation can be challenging because of the presence of many conspecifics that could obstruct locally available cues. Our previous experiments demonstrated that visual cues were important but not essential for king penguin chicks' homing. The main objective of this study was to investigate the importance of non-visual cues, such as magnetic and acoustic cues, for chicks' orientation and short-range navigation. In a series of experiments, the chicks were individually displaced from the colony to an experimental arena where they were released under different conditions. In the magnetic experiments, a strong magnet was attached to the chicks' heads. Trials were conducted in daylight and at night to test the relative importance of visual and magnetic cues. Our results showed that when the geomagnetic field around the chicks was modified, their orientation in the arena and the overall ability to home was not affected. In a low sound experiment we limited the acoustic cues available to the chicks by putting ear pads over their ears, and in a loud sound experiment we provided additional acoustic cues by broadcasting colony sounds on the opposite side of the arena to the real colony. In the low sound experiment, the behavior of the chicks was not affected by the limited sound input. In the loud sound experiment, the chicks reacted strongly to the colony sound. These results suggest that king penguin chicks may use the sound of the colony while orienting towards their home.

  9. Object attributes combine additively in visual search.

    PubMed

    Pramod, R T; Arun, S P

    2016-01-01

    We perceive objects as containing a variety of attributes: local features, relations between features, internal details, and global properties. But we know little about how they combine. Here, we report a remarkably simple additive rule that governs how these diverse object attributes combine in vision. The perceived dissimilarity between two objects was accurately explained as a sum of (a) spatially tuned local contour-matching processes modulated by part decomposition; (b) differences in internal details, such as texture; (c) differences in emergent attributes, such as symmetry; and (d) differences in global properties, such as orientation or overall configuration of parts. Our results elucidate an enduring question in object vision by showing that the whole object is not a sum of its parts but a sum of its many attributes.
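
    The additive rule reported here can be written compactly as a weighted sum of per-attribute dissimilarities; the notation below is generic, the attribute labels are paraphrased from the abstract, and the weights are not the authors' fitted values.

    ```latex
    % perceived dissimilarity as an additive combination of attribute differences
    d(A,B) = \sum_{k} w_k \, d_k(A,B), \qquad
    k \in \{\text{local contours},\ \text{internal details},\ \text{emergent attributes},\ \text{global properties}\}
    ```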

  10. Visuomotor sensitivity to visual information about surface orientation.

    PubMed

    Knill, David C; Kersten, Daniel

    2004-03-01

    We measured human visuomotor sensitivity to visual information about three-dimensional surface orientation by analyzing movements made to place an object on a slanted surface. We applied linear discriminant analysis to the kinematics of subjects' movements to surfaces with differing slants (angle away from the fronto-parallel) to derive visuomotor d's for discriminating surfaces differing in slant by 5 degrees. Subjects' visuomotor sensitivity to information about surface orientation was very high, with discrimination "thresholds" ranging from 2 to 3 degrees. In a first experiment, we found that subjects performed only slightly better using binocular cues alone than monocular texture cues and that they showed only weak evidence for combining the cues when both were available, suggesting that monocular cues can be just as effective in guiding motor behavior in depth as binocular cues. In a second experiment, we measured subjects' perceptual discrimination and visuomotor thresholds in equivalent stimulus conditions to decompose visuomotor sensitivity into perceptual and motor components. Subjects' visuomotor thresholds were found to be slightly greater than their perceptual thresholds for a range of memory delays, from 1 to 3 s. The data were consistent with a model in which perceptual noise increases with increasing delay between stimulus presentation and movement initiation, but motor noise remains constant. This result suggests that visuomotor and perceptual systems rely on the same visual estimates of surface slant for memory delays ranging from 1 to 3 s.
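
    The analysis idea — apply linear discriminant analysis to movement kinematics for two slants differing by 5 degrees and read off a visuomotor d' from the separation of the projected distributions — can be sketched as follows. The simulated kinematic features and their statistics are invented, so the printed d' is illustrative only.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(3)

    # Invented kinematic features (e.g. wrist angle, approach velocity, hand
    # elevation at contact) for movements to 30-deg and 35-deg slanted surfaces.
    n = 200
    mean_30 = np.array([30.0, 0.80, 12.0])
    mean_35 = np.array([34.0, 0.78, 13.5])
    cov = np.diag([9.0, 0.01, 4.0])
    X = np.vstack([rng.multivariate_normal(mean_30, cov, n),
                   rng.multivariate_normal(mean_35, cov, n)])
    y = np.repeat([0, 1], n)

    lda = LinearDiscriminantAnalysis().fit(X, y)
    proj = X @ lda.coef_.ravel()                     # 1-D discriminant scores

    # d' = separation of the two projected distributions in pooled-SD units.
    m0, m1 = proj[y == 0].mean(), proj[y == 1].mean()
    pooled_sd = np.sqrt(0.5 * (proj[y == 0].var(ddof=1) + proj[y == 1].var(ddof=1)))
    print(f"visuomotor d' for a 5-deg slant difference: {abs(m1 - m0) / pooled_sd:.2f}")
    ```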

  11. Biologically Inspired Visual Model With Preliminary Cognition and Active Attention Adjustment.

    PubMed

    Qiao, Hong; Xi, Xuanyang; Li, Yinlin; Wu, Wei; Li, Fengfu

    2015-11-01

    Recently, many computational models have been proposed to simulate visual cognition process. For example, the hierarchical Max-Pooling (HMAX) model was proposed according to the hierarchical and bottom-up structure of V1 to V4 in the ventral pathway of primate visual cortex, which could achieve position- and scale-tolerant recognition. In our previous work, we have introduced memory and association into the HMAX model to simulate visual cognition process. In this paper, we improve our theoretical framework by mimicking a more elaborate structure and function of the primate visual cortex. We will mainly focus on the new formation of memory and association in visual processing under different circumstances as well as preliminary cognition and active adjustment in the inferior temporal cortex, which are absent in the HMAX model. The main contributions of this paper are: 1) in the memory and association part, we apply deep convolutional neural networks to extract various episodic features of the objects since people use different features for object recognition. Moreover, to achieve a fast and robust recognition in the retrieval and association process, different types of features are stored in separated clusters and the feature binding of the same object is stimulated in a loop discharge manner and 2) in the preliminary cognition and active adjustment part, we introduce preliminary cognition to classify different types of objects since distinct neural circuits in a human brain are used for identification of various types of objects. Furthermore, active cognition adjustment of occlusion and orientation is implemented to the model to mimic the top-down effect in human cognition process. Finally, our model is evaluated on two face databases CAS-PEAL-R1 and AR. The results demonstrate that our model exhibits its efficiency on visual recognition process with much lower memory storage requirement and a better performance compared with the traditional purely computational methods.

  12. Learning visuomotor transformations for gaze-control and grasping.

    PubMed

    Hoffmann, Heiko; Schenck, Wolfram; Möller, Ralf

    2005-08-01

    For reaching to and grasping of an object, visual information about the object must be transformed into motor or postural commands for the arm and hand. In this paper, we present a robot model for visually guided reaching and grasping. The model mimics two alternative processing pathways for grasping, which are also likely to coexist in the human brain. The first pathway directly uses the retinal activation to encode the target position. In the second pathway, a saccade controller makes the eyes (cameras) focus on the target, and the gaze direction is used instead as positional input. For both pathways, an arm controller transforms information on the target's position and orientation into an arm posture suitable for grasping. For the training of the saccade controller, we suggest a novel staged learning method which does not require a teacher that provides the necessary motor commands. The arm controller uses unsupervised learning: it is based on a density model of the sensor and the motor data. Using this density, a mapping is achieved by completing a partially given sensorimotor pattern. The controller can cope with the ambiguity in having a set of redundant arm postures for a given target. The combined model of saccade and arm controller was able to fixate and grasp an elongated object with arbitrary orientation and at arbitrary position on a table in 94% of trials.

  13. View-Tolerant Face Recognition and Hebbian Learning Imply Mirror-Symmetric Neural Tuning to Head Orientation.

    PubMed

    Leibo, Joel Z; Liao, Qianli; Anselmi, Fabio; Freiwald, Winrich A; Poggio, Tomaso

    2017-01-09

    The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and robust against identity-preserving transformations, like depth rotations [1, 2]. Current computational models of object recognition, including recent deep-learning networks, generate these properties through a hierarchy of alternating selectivity-increasing filtering and tolerance-increasing pooling operations, similar to simple-complex cells operations [3-6]. Here, we prove that a class of hierarchical architectures and a broad set of biologically plausible learning rules generate approximate invariance to identity-preserving transformations at the top level of the processing hierarchy. However, all past models tested failed to reproduce the most salient property of an intermediate representation of a three-level face-processing hierarchy in the brain: mirror-symmetric tuning to head orientation [7]. Here, we demonstrate that one specific biologically plausible Hebb-type learning rule generates mirror-symmetric tuning to bilaterally symmetric stimuli, like faces, at intermediate levels of the architecture and show why it does so. Thus, the tuning properties of individual cells inside the visual stream appear to result from group properties of the stimuli they encode and to reflect the learning rules that sculpted the information-processing system within which they reside. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Somebody's Jumping on the Floor: Incorporating Music into Orientation and Mobility for Preschoolers with Visual Impairments

    ERIC Educational Resources Information Center

    Sapp, Wendy

    2011-01-01

    Young children with visual impairments face many challenges as they learn to orient to and move through their environment, the beginnings of orientation and mobility (O&M). Children who are visually impaired must learn many concepts (such as body parts and positional words) and skills (like body movement and interpreting sensory information) to…

  15. Contributions of visual and embodied expertise to body perception.

    PubMed

    Reed, Catherine L; Nyberg, Andrew A; Grubb, Jefferson D

    2012-01-01

    Recent research has demonstrated that our perception of the human body differs from that of inanimate objects. This study investigated whether the visual perception of the human body differs from that of other animate bodies and, if so, whether that difference could be attributed to visual experience and/or embodied experience. To dissociate differential effects of these two types of expertise, inversion effects (recognition of inverted stimuli is slower and less accurate than recognition of upright stimuli) were compared for two types of bodies in postures that varied in typicality: humans in human postures (human-typical), humans in dog postures (human-atypical), dogs in dog postures (dog-typical), and dogs in human postures (dog-atypical). Inversion disrupts global configural processing. Relative changes in the size and presence of inversion effects reflect changes in visual processing. Both visual and embodiment expertise predict larger inversion effects for human over dog postures because we see humans more and we have experience producing human postures. However, our design that crosses body type and typicality leads to distinct predictions for visual and embodied experience. Visual expertise predicts an interaction between typicality and orientation: greater inversion effects should be found for typical over atypical postures regardless of body type. Alternatively, embodiment expertise predicts a body, typicality, and orientation interaction: larger inversion effects should be found for all human postures but only for atypical dog postures because humans can map their bodily experience onto these postures. Accuracy data supported embodiment expertise with the three-way interaction. However, response-time data supported contributions of visual expertise with larger inversion effects for typical over atypical postures. Thus, both types of expertise affect the visual perception of bodies.

  16. Accounting for stimulus-specific variation in precision reveals a discrete capacity limit in visual working memory

    PubMed Central

    Pratte, Michael S.; Park, Young Eun; Rademaker, Rosanne L.; Tong, Frank

    2016-01-01

    If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced “oblique effect”, with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. PMID:28004957
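
    The discrete-capacity account is usually formalized as a mixture of a circular (von Mises) distribution for items held in memory and a uniform distribution for guesses. Below is a hedged, self-contained sketch of fitting such a mixture to simulated recall errors by maximum likelihood; the simulated data and starting values are invented for illustration, and the paper's actual analysis additionally lets precision vary with the probed orientation to capture the oblique effect.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import vonmises

        rng = np.random.default_rng(2)
        n, true_guess, true_kappa = 2000, 0.3, 8.0
        in_memory = rng.random(n) > true_guess
        errors = np.where(in_memory,
                          rng.vonmises(0.0, true_kappa, n),          # remembered items: von Mises errors
                          rng.uniform(-np.pi, np.pi, n))             # forgotten items: uniform guesses

        def neg_log_lik(params):
            g, kappa = params                                        # guess rate, memory precision
            like = (1 - g) * vonmises.pdf(errors, kappa) + g / (2 * np.pi)
            return -np.log(like).sum()

        fit = minimize(neg_log_lik, x0=[0.5, 2.0], bounds=[(1e-3, 1 - 1e-3), (0.1, 100.0)])
        print(np.round(fit.x, 2))                                    # recovers roughly (0.30, 8.0)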

  17. Accounting for stimulus-specific variation in precision reveals a discrete capacity limit in visual working memory.

    PubMed

    Pratte, Michael S; Park, Young Eun; Rademaker, Rosanne L; Tong, Frank

    2017-01-01

    If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced "oblique effect," with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  18. Data processing and optimization system to study prospective interstate power interconnections

    NASA Astrophysics Data System (ADS)

    Podkovalnikov, Sergei; Trofimov, Ivan; Trofimov, Leonid

    2018-01-01

    The paper presents a data processing and optimization system for studying and making rational decisions on the formation of interstate electric power interconnections, with the aim of increasing the effectiveness of their operation and expansion. The technologies for building and integrating the system, including an object-oriented database and the predictive mathematical model ORIRES for optimizing the expansion of electric power systems, are described. The paper also describes the collection and pre-processing of unstructured data gathered from various sources, its loading into the object-oriented database, and the processing and presentation of information in a GIS system. One approach to graphical visualization of the optimization model's results is illustrated with the example of calculating an expansion option for the South Korean electric power grid.

  19. An orientation-independent DIC microscope allows high resolution imaging of epithelial cell migration and wound healing in a cnidarian model.

    PubMed

    Malamy, J E; Shribak, M

    2018-06-01

    Epithelial cell dynamics can be difficult to study in intact animals or tissues. Here we use the medusa form of the hydrozoan Clytia hemisphaerica, which is covered with a monolayer of epithelial cells, to test the efficacy of an orientation-independent differential interference contrast microscope for in vivo imaging of wound healing. Orientation-independent differential interference contrast provides a phase image of unprecedented resolution of epithelial cells closing a wound in a live, nontransgenic animal model. In particular, the orientation-independent differential interference contrast microscope equipped with a 40x/0.75NA objective lens and using illumination light with a wavelength of 546 nm demonstrated a resolution of 460 nm. The repair of individual cells, the adhesion of cells to close a gap, and the concomitant contraction of these cells during closure are clearly visualized. © 2018 The Authors Journal of Microscopy © 2018 Royal Microscopical Society.

  20. Selective attention modulates neural substrates of repetition priming and "implicit" visual memory: suppressions and enhancements revealed by FMRI.

    PubMed

    Vuilleumier, Patrik; Schwartz, Sophie; Duhoux, Stéphanie; Dolan, Raymond J; Driver, Jon

    2005-08-01

    Attention can enhance processing for relevant information and suppress this for ignored stimuli. However, some residual processing may still arise without attention. Here we presented overlapping outline objects at study, with subjects attending to those in one color but not the other. Attended objects were subsequently recognized on a surprise memory test, whereas there was complete amnesia for ignored items on such direct explicit testing; yet reliable behavioral priming effects were found on indirect testing. Event-related fMRI examined neural responses to previously attended or ignored objects, now shown alone in the same or mirror-reversed orientation as before, intermixed with new items. Repetition-related decreases in fMRI responses for objects previously attended and repeated in the same orientation were found in the right posterior fusiform, lateral occipital, and left inferior frontal cortex. More anterior fusiform regions also showed some repetition decreases for ignored objects, irrespective of orientation. View-specific repetition decreases were found in the striate cortex, particularly for previously attended items. In addition, previously ignored objects produced some fMRI response increases in the bilateral lingual gyri, relative to new objects. Selective attention at exposure can thus produce several distinct long-term effects on processing of stimuli repeated later, with neural response suppression stronger for previously attended objects, and some response enhancement for previously ignored objects, with these effects arising in different brain areas. Although repetition decreases may relate to positive priming phenomena, the repetition increases for ignored objects shown here for the first time might relate to processes that can produce "negative priming" in some behavioral studies. These results reveal quantitative and qualitative differences between neural substrates of long-term repetition effects for attended versus unattended objects.

  1. Designing and visualizing the water-energy-food nexus system

    NASA Astrophysics Data System (ADS)

    Endo, A.; Kumazawa, T.; Yamada, M.; Kato, T.

    2017-12-01

    The objective of this study is to design and visualize a water-energy-food nexus system to identify the interrelationships between water-energy-food (WEF) resources and to understand the subsequent complexity of WEF nexus systems holistically, taking an interdisciplinary approach. Object-oriented concepts and ontology engineering methods were applied according to the hypothesis that the chains of changes in linkages between water, energy, and food resources holistically affect the water-energy-food nexus system, including natural and social systems, both temporally and spatially. The water-energy-food nexus system that is developed is significant because it allows us to: 1) visualize linkages between water, energy, and food resources in social and natural systems; 2) identify tradeoffs between these resources; 3) find a way of using resources efficiently or enhancing the synergy between the utilization of different resources; and 4) aid scenario planning using economic tools. The paper also discusses future challenges for applying the developed water-energy-food nexus system in other areas.

  2. Visual training paired with electrical stimulation of the basal forebrain improves orientation-selective visual acuity in the rat.

    PubMed

    Kang, Jun Il; Groleau, Marianne; Dotigny, Florence; Giguère, Hugo; Vaucher, Elvire

    2014-07-01

    The cholinergic afferents from the basal forebrain to the primary visual cortex play a key role in visual attention and cortical plasticity. These afferent fibers modulate acute and long-term responses of visual neurons to specific stimuli. The present study evaluates whether this cholinergic modulation of visual neurons results in changes in cortical activity and visual perception. Awake adult rats were exposed repeatedly for 2 weeks to an orientation-specific grating with or without coupling this visual stimulation to an electrical stimulation of the basal forebrain. The visual acuity, as measured using a visual water maze before and after the exposure to the orientation-specific grating, was increased in the group of trained rats with simultaneous basal forebrain/visual stimulation. The increase in visual acuity was not observed when visual training or basal forebrain stimulation was performed separately or when cholinergic fibers were selectively lesioned prior to the visual stimulation. Visual evoked potentials showed a long-lasting increase in the reactivity of the primary visual cortex after coupled visual/cholinergic stimulation, as did c-Fos immunoreactivity in both pyramidal cells and GABAergic interneurons. These findings demonstrate that when coupled with visual training, the cholinergic system improves visual performance for the trained orientation, probably through enhancement of attentional processes and cortical plasticity in V1 related to the ratio of excitatory/inhibitory inputs. This study opens the possibility of establishing efficient rehabilitation strategies for facilitating visual capacity.

  3. A linear model fails to predict orientation selectivity of cells in the cat visual cortex.

    PubMed Central

    Volgushev, M; Vidyasagar, T R; Pei, X

    1996-01-01

    1. Postsynaptic potentials (PSPs) evoked by visual stimulation in simple cells in the cat visual cortex were recorded using in vivo whole-cell technique. Responses to small spots of light presented at different positions over the receptive field and responses to elongated bars of different orientations centred on the receptive field were recorded. 2. To test whether a linear model can account for orientation selectivity of cortical neurones, responses to elongated bars were compared with responses predicted by a linear model from the receptive field map obtained from flashing spots. 3. The linear model faithfully predicted the preferred orientation, but not the degree of orientation selectivity or the sharpness of orientation tuning. The ratio of optimal to non-optimal responses was always underestimated by the model. 4. Thus non-linear mechanisms, which can include suppression of non-optimal responses and/or amplification of optimal responses, are involved in the generation of orientation selectivity in the primary visual cortex. PMID:8930828
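
    The linear prediction at issue is simple summation: the modelled response to a bar is the sum of the receptive-field map values that the bar covers. A toy version of that recipe is sketched below (the receptive-field map, bar width, and rectification are assumptions for illustration, not recorded data); it recovers the preferred orientation from the map alone, while whether the predicted tuning is as sharp as the measured tuning is exactly the question the recordings answered in the negative.

        import numpy as np

        # toy receptive-field map "measured" with small spots: an oriented Gabor-like patch
        y, x = np.mgrid[-15:16, -15:16]
        rf = np.exp(-(x**2 + y**2) / 60.0) * np.cos(2 * np.pi * x / 12.0)

        def bar_mask(theta, width=2.0):
            """Binary image of an elongated bar through the field centre at angle theta."""
            d = np.abs(-x * np.sin(theta) + y * np.cos(theta))       # distance to the bar's axis
            return (d <= width).astype(float)

        thetas = np.linspace(0, np.pi, 19)
        predicted = np.array([np.sum(rf * bar_mask(t)) for t in thetas])   # linear summation
        predicted = np.clip(predicted, 0, None)                      # rectify: no negative rates
        print(round(float(np.degrees(thetas[np.argmax(predicted)]))))      # 90: predicted preferred orientation
        print(np.round(predicted / predicted.max(), 2))              # predicted (normalized) tuning curve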

  4. Ruby-Helix: an implementation of helical image processing based on object-oriented scripting language.

    PubMed

    Metlagel, Zoltan; Kikkawa, Yayoi S; Kikkawa, Masahide

    2007-01-01

    Helical image analysis in combination with electron microscopy has been used to study three-dimensional structures of various biological filaments or tubes, such as microtubules, actin filaments, and bacterial flagella. A number of packages have been developed to carry out helical image analysis. Some biological specimens, however, have a symmetry break (seam) in their three-dimensional structure, even though their subunits are mostly arranged in a helical manner. We refer to these objects as "asymmetric helices". All the existing packages are designed for helically symmetric specimens, and do not allow analysis of asymmetric helical objects, such as microtubules with seams. Here, we describe Ruby-Helix, a new set of programs for the analysis of "helical" objects with or without a seam. Ruby-Helix is built on top of the Ruby programming language and is the first implementation of asymmetric helical reconstruction for practical image analysis. It also allows easier and semi-automated analysis, performing iterative unbending and accurate determination of the repeat length. As a result, Ruby-Helix enables us to analyze motor-microtubule complexes with higher throughput to higher resolution.

  5. Eighteen-month-olds' memory for short movies of simple stories.

    PubMed

    Kingo, Osman S; Krøjgaard, Peter

    2015-04-01

    This study investigated twenty-four 18-month-olds' memory for dynamic visual stimuli. During the first visit, participants saw one of two brief movies (30 seconds) with a simple storyline displayed in four iterations. After 2 weeks, memory was tested in the visual paired comparison paradigm in which the familiar and the novel movie were contrasted simultaneously and displayed in two iterations for a total of 60 seconds. Eye-tracking revealed that participants fixated the familiar movie significantly more than the novel movie, thus indicating memory for the familiar movie. Furthermore, time-dependent analysis of the data revealed that individual differences in the looking patterns for the first and second iteration of the movies were related to individual differences in productive vocabulary. We suggest that infants' vocabulary may be indicative of their ability to understand and remember the storyline of the movies, thereby affecting their subsequent memory. © 2015 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  6. Relevance of visual cues for orientation at familiar sites by homing pigeons: an experiment in a circular arena.

    PubMed Central

    Gagliardo, A.; Odetti, F.; Ioalè, P.

    2001-01-01

    Whether pigeons use visual landmarks for orientation from familiar locations has been a subject of debate. By recording the directional choices of both anosmic and control pigeons while exiting from a circular arena we were able to assess the relevance of olfactory and visual cues for orientation from familiar sites. When the birds could see the surroundings, both anosmic and control pigeons were homeward oriented. When the view of the landscape was prevented by screens that surrounded the arena, the control pigeons exited from the arena approximately in the home direction, while the anosmic pigeons' distribution was not different from random. Our data suggest that olfactory and visual cues play a critical, but interchangeable, role for orientation at familiar sites. PMID:11571054

  7. Clothing Construction: An Instructional Package with Adaptations for Visually Impaired Individuals.

    ERIC Educational Resources Information Center

    Crawford, Glinda B.; And Others

    Developed for the home economics teacher of mainstreamed visually impaired students, this guide provides clothing instruction lesson plans for the junior high level. First, teacher guidelines are given, including characteristics of the visually impaired, orienting such students to the classroom, orienting class members to the visually impaired,…

  8. Mental object rotation in Parkinson's disease.

    PubMed

    Crucian, Gregory P; Barrett, Anna M; Burks, David W; Riestra, Alonso R; Roth, Heidi L; Schwartz, Ronald L; Triggs, William J; Bowers, Dawn; Friedman, William; Greer, Melvin; Heilman, Kenneth M

    2003-11-01

    Deficits in visual-spatial ability can be associated with Parkinson's disease (PD), and there are several possible reasons for these deficits. Dysfunction in frontal-striatal and/or frontal-parietal systems, associated with dopamine deficiency, might disrupt cognitive processes either supporting (e.g., working memory) or subserving visual-spatial computations. The goal of this study was to assess visual-spatial orientation ability in individuals with PD using the Mental Rotations Test (MRT), along with other measures of cognitive function. Non-demented men with PD were significantly less accurate on this test than matched control men. In contrast, women with PD performed similarly to matched control women, but both groups of women did not perform much better than chance. Further, mental rotation accuracy in men correlated with their executive skills involving mental processing and psychomotor speed. In women with PD, however, mental rotation accuracy correlated negatively with verbal memory, indicating that higher mental rotation performance was associated with lower ability in verbal memory. These results indicate that PD is associated with visual-spatial orientation deficits in men. Women with PD and control women both performed poorly on the MRT, possibly reflecting a floor effect. Although men and women with PD appear to engage different cognitive processes in this task, the reason for the sex difference remains to be elucidated.

  9. An iterative method for near-field Fresnel region polychromatic phase contrast imaging

    NASA Astrophysics Data System (ADS)

    Carroll, Aidan J.; van Riessen, Grant A.; Balaur, Eugeniu; Dolbnya, Igor P.; Tran, Giang N.; Peele, Andrew G.

    2017-07-01

    We present an iterative method for polychromatic phase contrast imaging that is suitable for broadband illumination and which allows for the quantitative determination of the thickness of an object given the refractive index of the sample material. Experimental and simulation results suggest the iterative method provides comparable image quality and quantitative object thickness determination when compared to the analytical polychromatic transport of intensity and contrast transfer function methods. The ability of the iterative method to work over a wider range of experimental conditions means the iterative method is a suitable candidate for use with polychromatic illumination and may deliver more utility for laboratory-based x-ray sources, which typically have a broad spectrum.

  10. WE-AB-207A-08: BEST IN PHYSICS (IMAGING): Advanced Scatter Correction and Iterative Reconstruction for Improved Cone-Beam CT Imaging On the TrueBeam Radiotherapy Machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, A; Paysan, P; Brehm, M

    2016-06-15

    Purpose: To improve CBCT image quality for image-guided radiotherapy by applying advanced reconstruction algorithms to overcome scatter, noise, and artifact limitations. Methods: CBCT is used extensively for patient setup in radiotherapy. However, image quality generally falls short of diagnostic CT, limiting soft-tissue based positioning and potential applications such as adaptive radiotherapy. The conventional TrueBeam CBCT reconstructor uses a basic scatter correction and FDK reconstruction, resulting in residual scatter artifacts, suboptimal image noise characteristics, and other artifacts like cone-beam artifacts. We have developed an advanced scatter correction that uses a finite-element solver (AcurosCTS) to model the behavior of photons as they pass (and scatter) through the object. Furthermore, iterative reconstruction is applied to the scatter-corrected projections, enforcing data consistency with statistical weighting and applying an edge-preserving image regularizer to reduce image noise. The combined algorithms have been implemented on a GPU. CBCT projections from clinically operating TrueBeam systems have been used to compare image quality between the conventional and improved reconstruction methods. Planning CT images of the same patients have also been compared. Results: The advanced scatter correction removes shading and inhomogeneity artifacts, reducing the scatter artifact from 99.5 HU to 13.7 HU in a typical pelvis case. Iterative reconstruction provides further benefit by reducing image noise and eliminating streak artifacts, thereby improving soft-tissue visualization. In a clinical head and pelvis CBCT, the noise was reduced by 43% and 48%, respectively, with no change in spatial resolution (assessed visually). Additional benefits include reduction of cone-beam artifacts and reduction of metal artifacts due to intrinsic downweighting of corrupted rays. Conclusion: The combination of an advanced scatter correction with iterative reconstruction substantially improves CBCT image quality. It is anticipated that clinically acceptable reconstruction times will result from a multi-GPU implementation (the algorithms are under active development and not yet commercially available). All authors are employees of and (may) own stock of Varian Medical Systems.
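
    For orientation, the two ingredients named above, statistical weighting of the data term and an edge-preserving regularizer, can be illustrated on a toy problem. The sketch below is generic and is not Varian's algorithm; the operator, weights, penalty, and step-size choice are all invented for the demonstration.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 64
        x_true = np.zeros(n)
        x_true[20:40] = 1.0                                          # piecewise-constant "object"
        A = rng.normal(size=(200, n)) / np.sqrt(n)                   # toy projection operator
        noise_sd = 0.05 + 0.1 * rng.random(200)                      # ray-dependent noise level
        b = A @ x_true + rng.normal(scale=noise_sd)                  # noisy "projections"
        W = 1.0 / noise_sd**2                                        # statistical weights

        def huber_grad(t, delta=0.1):
            """Derivative of an edge-preserving Huber penalty on neighbour differences."""
            return np.where(np.abs(t) <= delta, t, delta * np.sign(t))

        H = A.T @ (W[:, None] * A)                                   # weighted normal matrix
        Atb = A.T @ (W * b)
        step = 1.0 / np.linalg.norm(H, 2)
        x, lam = np.zeros(n), 200.0
        for _ in range(3000):                                        # plain gradient descent
            d = np.diff(x)
            grad_reg = np.zeros(n)
            grad_reg[:-1] -= huber_grad(d)
            grad_reg[1:] += huber_grad(d)
            x -= step * (H @ x - Atb + lam * grad_reg)

        print(np.round(x[17:43], 2))                                 # background near 0, object near 1, sharp edges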

  11. Shaping Attention with Reward: Effects of Reward on Space- and Object-Based Selection

    PubMed Central

    Shomstein, Sarah; Johnson, Jacoba

    2014-01-01

    The contribution of rewarded actions to automatic attentional selection remains obscure. We hypothesized that some forms of automatic orienting, such as object-based selection, can be completely abandoned in lieu of reward maximizing strategy. While presenting identical visual stimuli to the observer, in a set of two experiments, we manipulate what is being rewarded (different object targets or random object locations) and the type of reward received (money or points). It was observed that reward alone guides attentional selection, entirely predicting behavior. These results suggest that guidance of selective attention, while automatic, is flexible and can be adjusted in accordance with external non-sensory reward-based factors. PMID:24121412

  12. An edge-directed interpolation method for fetal spine MR images.

    PubMed

    Yu, Shaode; Zhang, Rui; Wu, Shibin; Hu, Jiani; Xie, Yaoqin

    2013-10-10

    Fetal spinal magnetic resonance imaging (MRI) is a routine prenatal examination for assessing fetal development, especially when spinal malformations are suspected and ultrasound fails to provide details. Limited by hardware, fetal spine MR images suffer from low resolution. High-resolution MR images can directly enhance readability and improve diagnostic accuracy. Image interpolation to higher resolution is therefore required in clinical practice, yet many methods fail to preserve edge structures. Edges carry important structural information that doctors rely on to detect suspicious regions, classify malformations, and make a correct diagnosis, so effective interpolation with well-preserved edges remains challenging. In this paper, we propose an edge-directed interpolation (EDI) method and apply it to a group of fetal spine MR images to evaluate its feasibility and performance. The method takes edge information from a Canny edge detector to guide subsequent pixel modification. First, low-resolution (LR) images of the fetal spine are interpolated to high-resolution (HR) images at the target factor by the bilinear method. Then edge information from the LR and HR images drives a twofold strategy that sharpens or softens edge structures. Finally, an HR image with well-preserved edge structures is generated. The HR images obtained from the proposed method are validated and compared with those from four other EDI methods. Performance is evaluated with six metrics, and subjective analysis of visual quality is based on regions of interest (ROI). All five EDI methods are able to generate HR images with enriched details. In the quantitative analysis, the proposed method outperforms the other four in signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), feature similarity index (FSIM), and mutual information (MI), with seconds-level time consumption (TC). Visual analysis of the ROIs shows that the proposed method maintains better consistency of edge structures with the original images. The proposed method classifies edge orientations into four categories and preserves structures well. It generates convincing HR images with fine details and is suitable for real-time situations. The iterative curvature-based interpolation (ICBI) method may produce crisper edges, while the other three methods are sensitive to noise and artifacts.
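
    The overall recipe (interpolate, detect edges, then adjust pixels near edges) can be sketched generically. The following is only a rough stand-in, not the paper's EDI method: it uses a Sobel gradient threshold in place of the Canny step and an unsharp mask in place of the paper's twofold sharpen/soften strategy, with all names and parameters chosen for the example.

        import numpy as np
        from scipy import ndimage

        def edge_guided_upsample(lr, factor=2, strength=0.6):
            hr = ndimage.zoom(lr, factor, order=1)                   # bilinear interpolation
            grad = np.hypot(ndimage.sobel(hr, 0), ndimage.sobel(hr, 1))
            edges = grad > grad.mean() + grad.std()                  # crude stand-in for Canny edges
            sharpened = hr + strength * (hr - ndimage.gaussian_filter(hr, 1.0))  # unsharp mask
            return np.where(edges, sharpened, hr)                    # adjust pixels only along edges

        lr = np.zeros((32, 32))
        lr[8:24, 8:24] = 1.0                                         # toy low-resolution image
        hr = edge_guided_upsample(lr)
        print(hr.shape)                                              # (64, 64)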

  13. Neuropsychological Components of Object Identification

    DTIC Science & Technology

    1992-01-10

    Indexed text fragments (reference-list excerpts on visual agnosia): Bauer, R. M., and Rubens, A. B. (1985). Agnosia. In K. M. Heilman and E. Valenstein (Eds.), Clinical ...; Humphreys, G. W., and Riddoch, M. J. (1987). Apperceptive agnosia: the specification and description of constructs. In Humphreys, G. W., and Riddoch, M. J. (1987a) (Eds.), Visual ...; ... agnosias, achromatopsia, Balint's syndrome and related difficulties of orientation and construction. In M.-M. Mesulam (Ed.), Principles of Behavioral ...

  14. Three-dimensional visual feature representation in the primary visual cortex

    PubMed Central

    Tanaka, Shigeru; Moon, Chan-Hong; Fukuda, Mitsuhiro; Kim, Seong-Gi

    2011-01-01

    In the cat primary visual cortex, it is accepted that neurons optimally responding to similar stimulus orientations are clustered in a column extending from the superficial to deep layers. The cerebral cortex is, however, folded inside the skull, forming gyri and fundi. The primary visual area of cats, area 17, is located on the fold of the cortex called the lateral gyrus. These facts raise the question of how to reconcile the tangential arrangement of the orientation columns with the curvature of the gyrus. In the present study, we show a possible configuration of feature representation in the visual cortex using a three-dimensional (3D) self-organization model. We took into account preferred orientation, preferred direction, ocular dominance and retinotopy, assuming isotropic interaction. We performed the computer simulation only in the middle layer at the beginning and expanded the range of simulation gradually to other layers, which was found to be a unique method in the present model for obtaining orientation columns spanning all the layers in the flat cortex. Vertical columns of preferred orientations were found in the flat parts of the model cortex. On the other hand, in the curved parts, preferred orientations were represented in wedge-like columns rather than straight columns, and preferred directions were frequently reversed in the deeper layers. Singularities associated with orientation representation appeared as warped lines in the 3D model cortex. Direction reversal appeared on the sheets that were delimited by orientation-singularity lines. These structures emerged from the balance between periodic arrangements of preferred orientations and vertical alignment of the same orientations. Our theoretical predictions about orientation representation were confirmed by multi-slice, high-resolution functional MRI in the cat visual cortex. We obtained a close agreement between theoretical predictions and experimental observations. The present study casts doubt on the conventional columnar view of orientation representation, although more experimental data are needed. PMID:21724370
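
    The self-organization machinery behind such simulations can be illustrated with a very small Kohonen-style map. The sketch below is a toy rather than the authors' model: it ignores direction preference, ocular dominance, retinotopy, and cortical curvature, and the grid size, learning rate, and neighbourhood schedule are assumptions made for the demo. It only shows how isotropic neighbourhood learning on a 3D grid of units yields smoothly varying preferred orientations, occasionally with pinwheel-like defects.

        import numpy as np

        rng = np.random.default_rng(4)
        shape = (8, 8, 4)                                            # x, y, depth of the model cortex
        coords = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), -1).reshape(-1, 3)
        w = rng.normal(size=(coords.shape[0], 2))                    # orientation coded on the double-angle circle
        w /= np.linalg.norm(w, axis=1, keepdims=True)

        for t in range(20000):
            theta = rng.uniform(0, np.pi)                            # random stimulus orientation
            v = np.array([np.cos(2 * theta), np.sin(2 * theta)])
            winner = np.argmax(w @ v)                                # best-matching unit
            dist2 = np.sum((coords - coords[winner])**2, axis=1)
            sigma = 3.0 * np.exp(-t / 8000)                          # shrinking isotropic 3D neighbourhood
            h = np.exp(-dist2 / (2 * sigma**2))
            w += 0.1 * h[:, None] * (v - w)                          # move neighbours toward the stimulus
            w /= np.linalg.norm(w, axis=1, keepdims=True)

        pref = (np.degrees(np.arctan2(w[:, 1], w[:, 0])) / 2) % 180
        print(np.round(pref.reshape(shape)[:, :, 0], 0))             # orientation map in one depth slice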

  15. Three-dimensional visual feature representation in the primary visual cortex.

    PubMed

    Tanaka, Shigeru; Moon, Chan-Hong; Fukuda, Mitsuhiro; Kim, Seong-Gi

    2011-12-01

    In the cat primary visual cortex, it is accepted that neurons optimally responding to similar stimulus orientations are clustered in a column extending from the superficial to deep layers. The cerebral cortex is, however, folded inside the skull, forming gyri and fundi. The primary visual area of cats, area 17, is located on the fold of the cortex called the lateral gyrus. These facts raise the question of how to reconcile the tangential arrangement of the orientation columns with the curvature of the gyrus. In the present study, we show a possible configuration of feature representation in the visual cortex using a three-dimensional (3D) self-organization model. We took into account preferred orientation, preferred direction, ocular dominance and retinotopy, assuming isotropic interaction. We performed the computer simulation only in the middle layer at the beginning and expanded the range of simulation gradually to other layers, which was found to be a unique method in the present model for obtaining orientation columns spanning all the layers in the flat cortex. Vertical columns of preferred orientations were found in the flat parts of the model cortex. On the other hand, in the curved parts, preferred orientations were represented in wedge-like columns rather than straight columns, and preferred directions were frequently reversed in the deeper layers. Singularities associated with orientation representation appeared as warped lines in the 3D model cortex. Direction reversal appeared on the sheets that were delimited by orientation-singularity lines. These structures emerged from the balance between periodic arrangements of preferred orientations and vertical alignment of the same orientations. Our theoretical predictions about orientation representation were confirmed by multi-slice, high-resolution functional MRI in the cat visual cortex. We obtained a close agreement between theoretical predictions and experimental observations. The present study casts doubt on the conventional columnar view of orientation representation, although more experimental data are needed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. 3D topology of orientation columns in visual cortex revealed by functional optical coherence tomography.

    PubMed

    Nakamichi, Yu; Kalatsky, Valery A; Watanabe, Hideyuki; Sato, Takayuki; Rajagopalan, Uma Maheswari; Tanifuji, Manabu

    2018-04-01

    Orientation tuning is a canonical neuronal response property of six-layer visual cortex that is encoded in pinwheel structures with center orientation singularities. Optical imaging of intrinsic signals enables us to map these surface two-dimensional (2D) structures, whereas lack of appropriate techniques has not allowed us to visualize depth structures of orientation coding. In the present study, we performed functional optical coherence tomography (fOCT), a technique capable of acquiring a 3D map of the intrinsic signals, to study the topology of orientation coding inside the cat visual cortex. With this technique, for the first time, we visualized columnar assemblies in orientation coding that had been predicted from electrophysiological recordings. In addition, we found that the columnar structures were largely distorted around pinwheel centers: center singularities were not rigid straight lines running perpendicularly to the cortical surface but formed twisted string-like structures inside the cortex that turned and extended horizontally through the cortex. Looping singularities were observed with their respective termini accessing the same cortical surface via clockwise and counterclockwise orientation pinwheels. These results suggest that a 3D topology of orientation coding cannot be fully anticipated from 2D surface measurements. Moreover, the findings demonstrate the utility of fOCT as an in vivo mesoscale imaging method for mapping functional response properties of cortex in the depth axis. NEW & NOTEWORTHY We used functional optical coherence tomography (fOCT) to visualize three-dimensional structure of the orientation columns with millimeter range and micrometer spatial resolution. We validated vertically elongated columnar structure in iso-orientation domains. The columnar structure was distorted around pinwheel centers. An orientation singularity formed a string with tortuous trajectories inside the cortex and connected clockwise and counterclockwise pinwheel centers in the surface orientation map. The results were confirmed by comparisons with conventional optical imaging and electrophysiological recordings.

  17. Contrast invariance of orientation tuning in the lateral geniculate nucleus of the feline visual system.

    PubMed

    Viswanathan, Sivaram; Jayakumar, Jaikishan; Vidyasagar, Trichur R

    2015-09-01

    Responses of most neurons in the primary visual cortex of mammals are markedly selective for stimulus orientation and their orientation tuning does not vary with changes in stimulus contrast. The basis of such contrast invariance of orientation tuning has been shown to be the higher variability in the response for low-contrast stimuli. Neurons in the lateral geniculate nucleus (LGN), which provides the major visual input to the cortex, have also been shown to have higher variability in their response to low-contrast stimuli. Parallel studies have also long established mild degrees of orientation selectivity in LGN and retinal cells. In our study, we show that contrast invariance of orientation tuning is already present in the LGN. In addition, we show that the variability of spike responses of LGN neurons increases at lower stimulus contrasts, especially for non-preferred orientations. We suggest that such contrast- and orientation-sensitive variability not only explains the contrast invariance observed in the LGN but can also underlie the contrast-invariant orientation tuning seen at the level of the primary visual cortex. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  18. Feature integration across space, time, and orientation

    PubMed Central

    Otto, Thomas U.; Öğmen, Haluk; Herzog, Michael H.

    2012-01-01

    The perception of a visual target can be strongly influenced by flanking stimuli. In static displays, performance on the target improves when the distance to the flanking elements increases, presumably because feature pooling and integration vanish with distance. Here, we studied feature integration with dynamic stimuli. We show that features of single elements presented within a continuous motion stream are integrated largely independently of spatial distance (and orientation). Hence, space-based models of feature integration cannot be extended to dynamic stimuli. We suggest that feature integration is guided by perceptual grouping operations that maintain the identity of perceptual objects over space and time. PMID:19968428

  19. Combined influence of visual scene and body tilt on arm pointing movements: gravity matters!

    PubMed

    Scotto Di Cesare, Cécile; Sarlegna, Fabrice R; Bourdin, Christophe; Mestre, Daniel R; Bringoux, Lionel

    2014-01-01

    Performing accurate actions such as goal-directed arm movements requires taking into account visual and body orientation cues to localize the target in space and produce appropriate reaching motor commands. We experimentally tilted the body and/or the visual scene to investigate how visual and body orientation cues are combined for the control of unseen arm movements. Subjects were asked to point toward a visual target using an upward movement during slow body and/or visual scene tilts. When the scene was tilted, final pointing errors varied as a function of the direction of the scene tilt (forward or backward). Actual forward body tilt resulted in systematic target undershoots, suggesting that the brain may have overcompensated for the biomechanical movement facilitation arising from body tilt. Combined body and visual scene tilts also affected final pointing errors according to the orientation of the visual scene. The data were further analysed using either a body-centered or a gravity-centered reference frame to encode visual scene orientation with simple additive models (i.e., 'combined' tilts equal to the sum of 'single' tilts). We found that the body-centered model could account only for some of the data regarding kinematic parameters and final errors. In contrast, the gravity-centered modeling in which the body and visual scene orientations were referred to vertical could explain all of these data. Therefore, our findings suggest that the brain uses gravity, thanks to its invariant properties, as a reference for the combination of visual and non-visual cues.

  20. Combined Influence of Visual Scene and Body Tilt on Arm Pointing Movements: Gravity Matters!

    PubMed Central

    Scotto Di Cesare, Cécile; Sarlegna, Fabrice R.; Bourdin, Christophe; Mestre, Daniel R.; Bringoux, Lionel

    2014-01-01

    Performing accurate actions such as goal-directed arm movements requires taking into account visual and body orientation cues to localize the target in space and produce appropriate reaching motor commands. We experimentally tilted the body and/or the visual scene to investigate how visual and body orientation cues are combined for the control of unseen arm movements. Subjects were asked to point toward a visual target using an upward movement during slow body and/or visual scene tilts. When the scene was tilted, final pointing errors varied as a function of the direction of the scene tilt (forward or backward). Actual forward body tilt resulted in systematic target undershoots, suggesting that the brain may have overcompensated for the biomechanical movement facilitation arising from body tilt. Combined body and visual scene tilts also affected final pointing errors according to the orientation of the visual scene. The data were further analysed using either a body-centered or a gravity-centered reference frame to encode visual scene orientation with simple additive models (i.e., ‘combined’ tilts equal to the sum of ‘single’ tilts). We found that the body-centered model could account only for some of the data regarding kinematic parameters and final errors. In contrast, the gravity-centered modeling in which the body and visual scene orientations were referred to vertical could explain all of these data. Therefore, our findings suggest that the brain uses gravity, thanks to its invariant properties, as a reference for the combination of visual and non-visual cues. PMID:24925371

  1. Dynamic modulation of ocular orientation during visually guided saccades and smooth-pursuit eye movements

    NASA Technical Reports Server (NTRS)

    Hess, Bernhard J M.; Angelaki, Dora E.

    2003-01-01

    Rotational disturbances of the head about an off-vertical yaw axis induce a complex vestibuloocular reflex pattern that reflects the brain's estimate of head angular velocity as well as its estimate of instantaneous head orientation (at a reduced scale) in space coordinates. We show that semicircular canal and otolith inputs modulate torsional and, to a certain extent, also vertical ocular orientation of visually guided saccades and smooth-pursuit eye movements in a similar manner as during off-vertical axis rotations in complete darkness. It is suggested that this graviceptive control of eye orientation facilitates rapid visual spatial orientation during motion.

  2. Human Occipital and Parietal GABA Selectively Influence Visual Perception of Orientation and Size.

    PubMed

    Song, Chen; Sandberg, Kristian; Andersen, Lau Møller; Blicher, Jakob Udby; Rees, Geraint

    2017-09-13

    GABA is the primary inhibitory neurotransmitter in human brain. The level of GABA varies substantially across individuals, and this variability is associated with interindividual differences in visual perception. However, it remains unclear whether the association between GABA level and visual perception reflects a general influence of visual inhibition or whether the GABA levels of different cortical regions selectively influence perception of different visual features. To address this, we studied how the GABA levels of parietal and occipital cortices related to interindividual differences in size, orientation, and brightness perception. We used visual contextual illusion as a perceptual assay since the illusion dissociates perceptual content from stimulus content and the magnitude of the illusion reflects the effect of visual inhibition. Across individuals, we observed selective correlations between the level of GABA and the magnitude of contextual illusion. Specifically, parietal GABA level correlated with size illusion magnitude but not with orientation or brightness illusion magnitude; in contrast, occipital GABA level correlated with orientation illusion magnitude but not with size or brightness illusion magnitude. Our findings reveal a region- and feature-dependent influence of GABA level on human visual perception. Parietal and occipital cortices contain, respectively, topographic maps of size and orientation preference in which neural responses to stimulus sizes and stimulus orientations are modulated by intraregional lateral connections. We propose that these lateral connections may underlie the selective influence of GABA on visual perception. SIGNIFICANCE STATEMENT GABA, the primary inhibitory neurotransmitter in human visual system, varies substantially across individuals. This interindividual variability in GABA level is linked to interindividual differences in many aspects of visual perception. However, the widespread influence of GABA raises the question of whether interindividual variability in GABA reflects an overall variability in visual inhibition and has a general influence on visual perception or whether the GABA levels of different cortical regions have selective influence on perception of different visual features. Here we report a region- and feature-dependent influence of GABA level on human visual perception. Our findings suggest that GABA level of a cortical region selectively influences perception of visual features that are topographically mapped in this region through intraregional lateral connections. Copyright © 2017 Song, Sandberg et al.

  3. Visual orientation performances of desert ants (Cataglyphis bicolor) toward astromenotactic directions and horizon landmarks

    NASA Technical Reports Server (NTRS)

    Wehner, R.

    1972-01-01

    Experimental data, on the visual orientation of desert ants toward astromenotactic courses and horizon landmarks involving the cooperation of different direction finding systems, are given. Attempts were made to: (1) determine if the ants choose a compromise direction between astromenotactic angles and the direction toward horizon landmarks when both angles compete with each other or whether they decide alternatively; (2) analyze adaptations of the visual system to the special demands of direction finding by astromenotactic orientation or pattern recognition; and (3) determine parameters of visual learning behavior. Results show separate orientation mechanisms are responsible for the orientation of the ant toward astromenotactic angles and horizon landmarks. If both systems compete with each other, the ants switch over from one system to the other and do not perform a compromise direction.

  4. The Effect of Looming and Receding Sounds on the Perceived In-Depth Orientation of Depth-Ambiguous Biological Motion Figures

    PubMed Central

    Schouten, Ben; Troje, Nikolaus F.; Vroomen, Jean; Verfaillie, Karl

    2011-01-01

    Background The focus in the research on biological motion perception traditionally has been restricted to the visual modality. Recent neurophysiological and behavioural evidence, however, supports the idea that actions are not represented merely visually but rather audiovisually. The goal of the present study was to test whether the perceived in-depth orientation of depth-ambiguous point-light walkers (plws) is affected by the presentation of looming or receding sounds synchronized with the footsteps. Methodology/Principal Findings In Experiment 1 orthographic frontal/back projections of plws were presented either without sound or with sounds of which the intensity level was rising (looming), falling (receding) or stationary. Despite instructions to ignore the sounds and to only report the visually perceived in-depth orientation, plws accompanied with looming sounds were more often judged to be facing the viewer whereas plws paired with receding sounds were more often judged to be facing away from the viewer. To test whether the effects observed in Experiment 1 act at a perceptual level rather than at the decisional level, in Experiment 2 observers perceptually compared orthographic plws without sound or paired with either looming or receding sounds to plws without sound but with perspective cues making them objectively either facing towards or facing away from the viewer. Judging whether either an orthographic plw or a plw with looming (receding) perspective cues is visually most looming becomes harder (easier) when the orthographic plw is paired with looming sounds. Conclusions/Significance The present results suggest that looming and receding sounds alter the judgements of the in-depth orientation of depth-ambiguous point-light walkers. While looming sounds are demonstrated to act at a perceptual level and make plws look more looming, it remains a challenge for future research to clarify at what level in the processing hierarchy receding sounds affect how observers judge the in-depth perception of plws. PMID:21373181

  5. SOVEREIGN: An autonomous neural system for incrementally learning planned action sequences to navigate towards a rewarded goal.

    PubMed

    Gnadt, William; Grossberg, Stephen

    2008-06-01

    How do reactive and planned behaviors interact in real time? How are sequences of such behaviors released at appropriate times during autonomous navigation to realize valued goals? Controllers for both animals and mobile robots, or animats, need reactive mechanisms for exploration, and learned plans to reach goal objects once an environment becomes familiar. The SOVEREIGN (Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation) animat model embodies these capabilities, and is tested in a 3D virtual reality environment. SOVEREIGN includes several interacting subsystems which model complementary properties of cortical What and Where processing streams and which clarify similarities between mechanisms for navigation and arm movement control. As the animat explores an environment, visual inputs are processed by networks that are sensitive to visual form and motion in the What and Where streams, respectively. Position-invariant and size-invariant recognition categories are learned by real-time incremental learning in the What stream. Estimates of target position relative to the animat are computed in the Where stream, and can activate approach movements toward the target. Motion cues from animat locomotion can elicit head-orienting movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement are derived from interacting proprioceptive and visual cues. Movement sequences are stored within a motor working memory. Sequences of visual categories are stored in a sensory working memory. These working memories trigger learning of sensory and motor sequence categories, or plans, which together control planned movements. Predictively effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded. Selected planning chunks effect a gradual transition from variable reactive exploratory movements to efficient goal-oriented planned movement sequences. Volitional signals gate interactions between model subsystems and the release of overt behaviors. The model can control different motor sequences under different motivational states and learns more efficient sequences to rewarded goals as exploration proceeds.

  6. Neuropsychology: the touchy, feely side of vision.

    PubMed

    Walsh, V

    2000-01-13

    Some visual attributes, such as colour, are purely visual, but others, such as orientation and movement, can be perceived by touch or audition. A magnetic stimulation study has now shown that the perception of tactile orientation may be influenced by visual Information.

  7. Combined Use of Automatic Tube Voltage Selection and Current Modulation with Iterative Reconstruction for CT Evaluation of Small Hypervascular Hepatocellular Carcinomas: Effect on Lesion Conspicuity and Image Quality

    PubMed Central

    Lv, Peijie; Liu, Jie; Zhang, Rui; Jia, Yan

    2015-01-01

    Objective To assess the lesion conspicuity and image quality in CT evaluation of small (≤ 3 cm) hepatocellular carcinomas (HCCs) using automatic tube voltage selection (ATVS) and automatic tube current modulation (ATCM) with or without iterative reconstruction. Materials and Methods One hundred and five patients with 123 HCC lesions were included. Fifty-seven patients were scanned using both ATVS and ATCM and images were reconstructed using either filtered back-projection (FBP) (group A1) or sinogram-affirmed iterative reconstruction (SAFIRE) (group A2). Forty-eight patients were imaged using only ATCM, with a fixed tube potential of 120 kVp and FBP reconstruction (group B). Quantitative parameters (image noise in Hounsfield unit and contrast-to-noise ratio of the aorta, the liver, and the hepatic tumors) and qualitative visual parameters (image noise, overall image quality, and lesion conspicuity as graded on a 5-point scale) were compared among the groups. Results Group A2 scanned with the automatically chosen 80 kVp and 100 kVp tube voltages ranked the best in lesion conspicuity and subjective and objective image quality (p values ranging from < 0.001 to 0.004) among the three groups, except for overall image quality between group A2 and group B (p = 0.022). Group A1 showed higher image noise (p = 0.005) but similar lesion conspicuity and overall image quality as compared with group B. The radiation dose in group A was 19% lower than that in group B (p = 0.022). Conclusion CT scanning with combined use of ATVS and ATCM and image reconstruction with SAFIRE algorithm provides higher lesion conspicuity and better image quality for evaluating small hepatic HCCs with radiation dose reduction. PMID:25995682

  8. Registration of partially overlapping surfaces for range image based augmented reality on mobile devices

    NASA Astrophysics Data System (ADS)

    Kilgus, T.; Franz, A. M.; Seitel, A.; März, K.; Bartha, L.; Fangerau, M.; Mersmann, S.; Groch, A.; Meinzer, H.-P.; Maier-Hein, L.

    2012-02-01

    Visualization of anatomical data for disease diagnosis, surgical planning, or orientation during interventional therapy is an integral part of modern health care. However, as anatomical information is typically shown on monitors provided by a radiological work station, the physician has to mentally transfer internal structures shown on the screen to the patient. To address this issue, we recently presented a new approach to on-patient visualization of 3D medical images, which combines the concept of augmented reality (AR) with an intuitive interaction scheme. Our method requires mounting a range imaging device, such as a Time-of-Flight (ToF) camera, to a portable display (e.g. a tablet PC). During the visualization process, the pose of the camera and thus the viewing direction of the user is continuously determined with a surface matching algorithm. By moving the device along the body of the patient, the physician is given the impression of looking directly into the human body. In this paper, we present and evaluate a new method for camera pose estimation based on an anisotropic trimmed variant of the well-known iterative closest point (ICP) algorithm. According to in-silico and in-vivo experiments performed with computed tomography (CT) and ToF data of human faces, knees and abdomens, our new method is better suited for surface registration with ToF data than the established trimmed variant of the ICP, reducing the target registration error (TRE) by more than 60%. The TRE obtained (approx. 4-5 mm) is promising for AR visualization, but clinical applications require maximization of robustness and run-time.
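
    For context, the baseline that the anisotropic trimmed variant improves upon is ordinary trimmed, point-to-point ICP: match points to nearest neighbours, discard the worst matches, and solve the rigid update in closed form. A hedged sketch of that baseline on synthetic data follows; the function names, trimming fraction, and test transform are invented for the example, and this is not the authors' algorithm.

        import numpy as np
        from scipy.spatial import cKDTree

        def trimmed_icp(src, dst, iters=30, keep=0.8):
            """Point-to-point ICP that keeps only the best-matching fraction each iteration."""
            tree = cKDTree(dst)
            R, t = np.eye(3), np.zeros(3)
            for _ in range(iters):
                moved = src @ R.T + t
                d, idx = tree.query(moved)
                order = np.argsort(d)[: int(keep * len(src))]        # trim the worst correspondences
                p, q = moved[order], dst[idx[order]]
                pc, qc = p - p.mean(0), q - q.mean(0)
                U, _, Vt = np.linalg.svd(pc.T @ qc)                  # Kabsch closed-form rigid alignment
                D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U))])
                R_step = Vt.T @ D @ U.T
                t_step = q.mean(0) - R_step @ p.mean(0)
                R, t = R_step @ R, R_step @ t + t_step
            return R, t

        rng = np.random.default_rng(5)
        dst = rng.random((500, 3))                                   # stand-in "CT surface" point cloud
        ang = 0.1
        R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                           [np.sin(ang),  np.cos(ang), 0.0],
                           [0.0, 0.0, 1.0]])
        src = (dst[:400] - [0.03, 0.01, 0.0]) @ R_true               # partially overlapping, misaligned copy
        R, t = trimmed_icp(src, dst)
        print(np.round(R @ R_true.T, 2))                             # close to the identity once registered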

  9. Reconstruction from limited single-particle diffraction data via simultaneous determination of state, orientation, intensity, and phase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donatelli, Jeffrey J.; Sethian, James A.; Zwart, Peter H.

    Free-electron lasers now have the ability to collect X-ray diffraction patterns from individual molecules; however, each sample is delivered at unknown orientation and may be in one of several conformational states, each with a different molecular structure. Hit rates are often low, typically around 0.1%, limiting the number of useful images that can be collected. Determining accurate structural information requires classifying and orienting each image, accurately assembling them into a 3D diffraction intensity function, and determining missing phase information. Additionally, single particles typically scatter very few photons, leading to high image noise levels. We develop a multitiered iterative phasing algorithm to reconstruct structural information from single-particle diffraction data by simultaneously determining the states, orientations, intensities, phases, and underlying structure in a single iterative procedure. We leverage real-space constraints on the structure to help guide optimization and reconstruct underlying structure from very few images with excellent global convergence properties. We show that this approach can determine structural resolution beyond what is suggested by standard Shannon sampling arguments for ideal images and is also robust to noise.
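
    The phasing ingredient that such multitiered schemes build on is classical iterative phase retrieval: alternate between imposing the measured Fourier magnitudes and imposing real-space constraints such as a support and positivity. Below is a self-contained toy of that basic loop (a Fienup-style hybrid input-output stage followed by error reduction); the object, support, and iteration counts are invented for the demo, and none of the paper's classification, orientation, or intensity-merging machinery is included.

        import numpy as np

        rng = np.random.default_rng(6)
        n = 64
        obj = np.zeros((n, n))
        obj[24:40, 20:44] = rng.random((16, 24))                     # unknown object inside a known support
        support = np.zeros((n, n), dtype=bool)
        support[24:40, 20:44] = True
        measured = np.abs(np.fft.fft2(obj))                          # only Fourier magnitudes are "measured"

        def fourier_project(x):
            """Impose the measured magnitudes, keep the current phases."""
            F = np.fft.fft2(x)
            return np.real(np.fft.ifft2(measured * np.exp(1j * np.angle(F))))

        x = rng.random((n, n)) * support                             # random start inside the support
        beta = 0.9
        for _ in range(600):                                         # hybrid input-output iterations
            xp = fourier_project(x)
            good = support & (xp > 0)
            x = np.where(good, xp, x - beta * xp)
        for _ in range(100):                                         # error-reduction clean-up sweeps
            x = np.clip(fourier_project(x), 0, None) * support

        twin = obj[::-1, ::-1]                                       # 180-degree-rotated twin image
        err = min(np.linalg.norm(x - obj), np.linalg.norm(x - twin)) / np.linalg.norm(obj)
        print(f"relative reconstruction error: {err:.3f}")           # small for this easy, oversampled toy

    The error is reported up to the twin-image ambiguity, since the object and its 180-degree rotation share the same Fourier magnitudes; a stagnating random start can simply be rerun with a different seed, whereas the paper tackles the much harder joint problem of unknown states and orientations.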

  10. Reconstruction from limited single-particle diffraction data via simultaneous determination of state, orientation, intensity, and phase

    DOE PAGES

    Donatelli, Jeffrey J.; Sethian, James A.; Zwart, Peter H.

    2017-06-26

    Free-electron lasers now have the ability to collect X-ray diffraction patterns from individual molecules; however, each sample is delivered at unknown orientation and may be in one of several conformational states, each with a different molecular structure. Hit rates are often low, typically around 0.1%, limiting the number of useful images that can be collected. Determining accurate structural information requires classifying and orienting each image, accurately assembling them into a 3D diffraction intensity function, and determining missing phase information. Additionally, single particles typically scatter very few photons, leading to high image noise levels. We develop a multitiered iterative phasing algorithm to reconstruct structural information from single-particle diffraction data by simultaneously determining the states, orientations, intensities, phases, and underlying structure in a single iterative procedure. We leverage real-space constraints on the structure to help guide optimization and reconstruct underlying structure from very few images with excellent global convergence properties. We show that this approach can determine structural resolution beyond what is suggested by standard Shannon sampling arguments for ideal images and is also robust to noise.
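
    The two records above describe a multitiered iterative phasing algorithm. As background, the sketch below shows a much simpler classical error-reduction loop that alternates between the measured Fourier magnitudes and a real-space support constraint. It assumes a known orientation and a single conformational state, so it illustrates only the phasing ingredient, not the authors' simultaneous determination of states, orientations, and intensities; the object, support, and iteration count are assumptions.

```python
import numpy as np

def error_reduction(measured_magnitude, support, n_iter=200, seed=0):
    """Toy Gerchberg-Saxton/error-reduction phase retrieval: enforce the
    measured Fourier magnitudes, then enforce a real-space support and
    non-negativity constraint, and repeat."""
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(measured_magnitude.shape))
    density = np.fft.ifftn(measured_magnitude * phase).real
    for _ in range(n_iter):
        F = np.fft.fftn(density)
        F = measured_magnitude * np.exp(1j * np.angle(F))  # keep data magnitudes
        density = np.fft.ifftn(F).real
        density *= support                                 # real-space support
        density[density < 0] = 0                           # non-negativity
    return density

# Demo: recover a simple binary object from its Fourier magnitudes
obj = np.zeros((64, 64)); obj[24:40, 28:36] = 1.0
mags = np.abs(np.fft.fftn(obj))
support = np.zeros_like(obj, dtype=bool); support[16:48, 16:48] = True
rec = error_reduction(mags, support)
```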

  11. Object attributes combine additively in visual search

    PubMed Central

    Pramod, R. T.; Arun, S. P.

    2016-01-01

    We perceive objects as containing a variety of attributes: local features, relations between features, internal details, and global properties. But we know little about how they combine. Here, we report a remarkably simple additive rule that governs how these diverse object attributes combine in vision. The perceived dissimilarity between two objects was accurately explained as a sum of (a) spatially tuned local contour-matching processes modulated by part decomposition; (b) differences in internal details, such as texture; (c) differences in emergent attributes, such as symmetry; and (d) differences in global properties, such as orientation or overall configuration of parts. Our results elucidate an enduring question in object vision by showing that the whole object is not a sum of its parts but a sum of its many attributes. PMID:26967014
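
    The additive rule described above can be illustrated with a toy linear model: perceived dissimilarity is modelled as a weighted sum of per-attribute difference scores, and the weights are recovered by least squares. The attribute names, weights, and data below are synthetic and purely illustrative, not the study's measurements.

```python
import numpy as np

# Hypothetical data: each row holds per-attribute difference scores for one
# object pair (contour mismatch, texture, symmetry, global configuration),
# and `perceived` holds the observed dissimilarity for that pair.
rng = np.random.default_rng(1)
attribute_diffs = rng.random((50, 4))
true_weights = np.array([0.9, 0.5, 0.3, 0.7])
perceived = attribute_diffs @ true_weights + rng.normal(0, 0.05, 50)

# Additive model: dissimilarity ~ weighted sum of attribute differences
weights, *_ = np.linalg.lstsq(attribute_diffs, perceived, rcond=None)
predicted = attribute_diffs @ weights
r = np.corrcoef(predicted, perceived)[0, 1]
print("fitted weights:", np.round(weights, 2), " correlation:", round(r, 3))
```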

  12. Perception of the dynamic visual vertical during sinusoidal linear motion.

    PubMed

    Pomante, A; Selen, L P J; Medendorp, W P

    2017-10-01

    The vestibular system provides information for spatial orientation. However, this information is ambiguous: because the otoliths sense the gravitoinertial force, they cannot distinguish gravitational and inertial components. As a consequence, prolonged linear acceleration of the head can be interpreted as tilt, referred to as the somatogravic effect. Previous modeling work suggests that the brain disambiguates the otolith signal according to the rules of Bayesian inference, combining noisy canal cues with the a priori assumption that prolonged linear accelerations are unlikely. Within this modeling framework the noise of the vestibular signals affects the dynamic characteristics of the tilt percept during linear whole-body motion. To test this prediction, we devised a novel paradigm to psychometrically characterize the dynamic visual vertical (as a proxy for the tilt percept) during passive sinusoidal linear motion along the interaural axis (0.33 Hz motion frequency, 1.75 m/s² peak acceleration, 80 cm displacement). While subjects (n = 10) kept fixation on a central body-fixed light, a line was briefly flashed (5 ms) at different phases of the motion, the orientation of which had to be judged relative to gravity. Consistent with the model's prediction, subjects showed a phase-dependent modulation of the dynamic visual vertical, with a subject-specific phase shift with respect to the imposed acceleration signal. The magnitude of this modulation was smaller than predicted, suggesting a contribution of nonvestibular signals to the dynamic visual vertical. Despite their dampening effect, our findings may point to a link between the noise components in the vestibular system and the characteristics of dynamic visual vertical. NEW & NOTEWORTHY A fundamental question in neuroscience is how the brain processes vestibular signals to infer the orientation of the body and objects in space. We show that, under sinusoidal linear motion, systematic error patterns appear in the disambiguation of linear acceleration and spatial orientation. We discuss the dynamics of these illusory percepts in terms of a dynamic Bayesian model that combines uncertainty in the vestibular signals with priors based on the natural statistics of head motion. Copyright © 2017 the American Physiological Society.
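
    A toy, static, small-angle version of the Bayesian disambiguation idea can be written in closed form: with zero-mean Gaussian priors on tilt and on linear acceleration and Gaussian otolith noise, the MAP estimate attributes part of the sensed gravito-inertial force to tilt, reproducing a somatogravic-like illusion. The prior widths below are assumed values, and the sketch omits the canal cues and temporal dynamics of the published model.

```python
import numpy as np

def map_tilt(gia_lateral, sigma_tilt=np.radians(20), sigma_acc=1.0,
             sigma_noise=0.2, g=9.81):
    """Small-angle linear-Gaussian model: the otoliths sense lateral
    gravito-inertial force f ~= g*tilt + a. With Gaussian priors on tilt
    and acceleration a, the MAP tilt has a closed form; a narrow
    acceleration prior pushes the percept toward tilt."""
    num = sigma_tilt ** 2 * g * gia_lateral
    den = g ** 2 * sigma_tilt ** 2 + sigma_acc ** 2 + sigma_noise ** 2
    return np.degrees(num / den)

# Sinusoidal interaural acceleration as in the experiment (no true tilt)
t = np.linspace(0.0, 6.0, 200)
f = 1.75 * np.sin(2 * np.pi * 0.33 * t)   # gravito-inertial force (m/s^2)
print("peak illusory tilt (deg): %.1f" % np.max(map_tilt(f)))
```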

  13. Seeing Objects as Faces Enhances Object Detection.

    PubMed

    Takahashi, Kohske; Watanabe, Katsumi

    2015-10-01

    The face is a special visual stimulus. Both bottom-up processes for low-level facial features and top-down modulation by face expectations contribute to the advantages of face perception. However, it is hard to dissociate the top-down factors from the bottom-up processes, since facial stimuli mandatorily lead to face awareness. In the present study, using the face pareidolia phenomenon, we demonstrated that face awareness, namely seeing an object as a face, enhances object detection performance. In face pareidolia, some people see a visual stimulus, for example, three dots arranged in a V shape, as a face, while others do not. This phenomenon allows us to investigate the effect of face awareness while leaving the stimulus per se unchanged. Participants were asked to detect a face target or a triangle target. While the target per se was identical between the two tasks, the detection sensitivity was higher when the participants recognized the target as a face. This was the case irrespective of the stimulus eccentricity or the vertical orientation of the stimulus. These results demonstrate that seeing an object as a face facilitates object detection via top-down modulation. The advantages of face perception are, therefore, at least partly, due to face awareness.

  14. Seeing Objects as Faces Enhances Object Detection

    PubMed Central

    Watanabe, Katsumi

    2015-01-01

    The face is a special visual stimulus. Both bottom-up processes for low-level facial features and top-down modulation by face expectations contribute to the advantages of face perception. However, it is hard to dissociate the top-down factors from the bottom-up processes, since facial stimuli mandatorily lead to face awareness. In the present study, using the face pareidolia phenomenon, we demonstrated that face awareness, namely seeing an object as a face, enhances object detection performance. In face pareidolia, some people see a visual stimulus, for example, three dots arranged in a V shape, as a face, while others do not. This phenomenon allows us to investigate the effect of face awareness while leaving the stimulus per se unchanged. Participants were asked to detect a face target or a triangle target. While the target per se was identical between the two tasks, the detection sensitivity was higher when the participants recognized the target as a face. This was the case irrespective of the stimulus eccentricity or the vertical orientation of the stimulus. These results demonstrate that seeing an object as a face facilitates object detection via top-down modulation. The advantages of face perception are, therefore, at least partly, due to face awareness. PMID:27648219
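
    Detection sensitivity in studies like the two records above is conventionally summarized with the signal-detection index d'. The sketch below computes it from hit and false-alarm rates; the rates shown are hypothetical and are not the paper's data.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity index d' = z(H) - z(FA)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates for the same physical target seen as a face vs. not
print(round(d_prime(0.85, 0.20), 2))   # recognized as a face
print(round(d_prime(0.70, 0.20), 2))   # not recognized as a face
```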

  15. Layer-oriented multigrid wavefront reconstruction algorithms for multi-conjugate adaptive optics

    NASA Astrophysics Data System (ADS)

    Gilles, Luc; Ellerbroek, Brent L.; Vogel, Curtis R.

    2003-02-01

    Multi-conjugate adaptive optics (MCAO) systems with 104-105 degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wavefront control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of AO degrees of freedom. In this paper, we develop an iterative sparse matrix implementation of minimum variance wavefront reconstruction for telescope diameters up to 32m with more than 104 actuators. The basic approach is the preconditioned conjugate gradient method, using a multigrid preconditioner incorporating a layer-oriented (block) symmetric Gauss-Seidel iterative smoothing operator. We present open-loop numerical simulation results to illustrate algorithm convergence.
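
    A minimal preconditioned conjugate gradient (PCG) loop of the kind referred to above is sketched below. A simple Jacobi (diagonal) preconditioner stands in for the layer-oriented multigrid preconditioner with block Gauss-Seidel smoothing described in the paper, and the small dense test matrix is purely illustrative; real wavefront reconstructors would use sparse operators.

```python
import numpy as np

def pcg(A, b, precond, tol=1e-8, max_iter=200):
    """Minimal preconditioned conjugate gradient for A x = b, with A
    symmetric positive definite and precond(r) an approximate inverse of A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test problem with a Jacobi preconditioner
rng = np.random.default_rng(0)
M = rng.random((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.random(50)
x = pcg(A, b, precond=lambda r: r / np.diag(A))
print("residual norm:", np.linalg.norm(A @ x - b))
```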

  16. Development of orientation tuning in simple cells of primary visual cortex

    PubMed Central

    Moore, Bartlett D.

    2012-01-01

    Orientation selectivity and its development are basic features of visual cortex. The original model of orientation selectivity proposes that elongated simple cell receptive fields are constructed from convergent input of an array of lateral geniculate nucleus neurons. However, orientation selectivity of simple cells in the visual cortex is generally greater than the linear contributions based on projections from spatial receptive field profiles. This implies that additional selectivity may arise from intracortical mechanisms. The hierarchical processing idea implies mainly linear connections, whereas cortical contributions are generally considered to be nonlinear. We have explored development of orientation selectivity in visual cortex with a focus on linear and nonlinear factors in a population of anesthetized 4-wk postnatal kittens and adult cats. Linear contributions are estimated from receptive field maps by which orientation tuning curves are generated and bandwidth is quantified. Nonlinear components are estimated as the magnitude of the power function relationship between responses measured from drifting sinusoidal gratings and those predicted from the spatial receptive field. Measured bandwidths for kittens are slightly larger than those in adults, whereas predicted bandwidths are substantially broader. These results suggest that relatively strong nonlinearities in early postnatal stages are substantially involved in the development of orientation tuning in visual cortex. PMID:22323631

  17. High Resolution Signal Processing

    DTIC Science & Technology

    1993-08-19

    Only citation fragments of this record survive, including "Iterative Realization of the ..." by Chen and Donald Tufts, Journal of Visual Communication and Image Representation, Vol. 2, No. 4, pp. 395-404, December 1991, and "Fast Maximum Likelihood ..." (titles truncated in the source record).

  18. An iterative sinogram gap-filling method with object- and scanner-dedicated discrete cosine transform (DCT)-domain filters for high resolution PET scanners.

    PubMed

    Kim, Kwangdon; Lee, Kisung; Lee, Hakjae; Joo, Sungkwan; Kang, Jungwon

    2018-01-01

    We aimed to develop a gap-filling algorithm, in particular the filter mask design method of the algorithm, which optimizes the filter to the imaging object by an adaptive and iterative process, rather than by manual means. Two numerical phantoms (Shepp-Logan and Jaszczak) were used for sinogram generation. The algorithm works iteratively, not only on the gap-filling iteration but also on the mask generation, to identify the object-dedicated low frequency area in the DCT-domain that is to be preserved. We redefine the low frequency preserving region of the filter mask at every gap-filling iteration, and the region verges on the property of the original image in the DCT domain. The previous DCT2 mask for each phantom case had been manually well optimized, and the results show little difference from the reference image and sinogram. We observed little or no difference between the results of the manually optimized DCT2 algorithm and those of the proposed algorithm. The proposed algorithm works well for various types of scanning object and shows results that compare to those of the manually optimized DCT2 algorithm without perfect or full information of the imaging object.
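
    A stripped-down version of DCT-domain sinogram gap filling is sketched below: the estimate is repeatedly low-pass filtered in the DCT domain and the measured bins are re-imposed. Unlike the proposed algorithm, the low-frequency mask here is fixed rather than re-derived from the object at each iteration, and the mask size, iteration count, and synthetic sinogram are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def iterative_dct_gap_fill(sinogram, gap_mask, keep_frac=0.15, n_iter=50):
    """Toy DCT-domain gap filling: low-pass the estimate in the DCT domain,
    then restore the measured sinogram bins, and repeat."""
    est = sinogram.copy()
    est[gap_mask] = sinogram[~gap_mask].mean()      # crude initial fill
    rows, cols = sinogram.shape
    r_keep, c_keep = int(keep_frac * rows), int(keep_frac * cols)
    for _ in range(n_iter):
        coeffs = dctn(est, norm='ortho')
        lowpass = np.zeros_like(coeffs)
        lowpass[:r_keep, :c_keep] = coeffs[:r_keep, :c_keep]  # fixed mask here
        est = idctn(lowpass, norm='ortho')
        est[~gap_mask] = sinogram[~gap_mask]        # keep measured bins
    return est

# Demo on a synthetic sinogram-like array with a missing detector band
sino = np.outer(np.sin(np.linspace(0, np.pi, 90)), np.hanning(128))
mask = np.zeros_like(sino, dtype=bool); mask[:, 60:68] = True   # gap columns
filled = iterative_dct_gap_fill(np.where(mask, 0.0, sino), mask)
```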

  19. Supranormal orientation selectivity of visual neurons in orientation-restricted animals.

    PubMed

    Sasaki, Kota S; Kimura, Rui; Ninomiya, Taihei; Tabuchi, Yuka; Tanaka, Hiroki; Fukui, Masayuki; Asada, Yusuke C; Arai, Toshiya; Inagaki, Mikio; Nakazono, Takayuki; Baba, Mika; Kato, Daisuke; Nishimoto, Shinji; Sanada, Takahisa M; Tani, Toshiki; Imamura, Kazuyuki; Tanaka, Shigeru; Ohzawa, Izumi

    2015-11-16

    Altered sensory experience in early life often leads to remarkable adaptations so that humans and animals can make the best use of the available information in a particular environment. By restricting visual input to a limited range of orientations in young animals, this investigation shows that stimulus selectivity, e.g., the sharpness of tuning of single neurons in the primary visual cortex, is modified to match a particular environment. Specifically, neurons tuned to an experienced orientation in orientation-restricted animals show sharper orientation tuning than neurons in normal animals, whereas the opposite was true for neurons tuned to non-experienced orientations. This sharpened tuning appears to be due to elongated receptive fields. Our results demonstrate that restricted sensory experiences can sculpt the supranormal functions of single neurons tailored for a particular environment. The above findings, in addition to the minimal population response to orientations close to the experienced one, agree with the predictions of a sparse coding hypothesis in which information is represented efficiently by a small number of activated neurons. This suggests that early brain areas adopt an efficient strategy for coding information even when animals are raised in a severely limited visual environment where sensory inputs have an unnatural statistical structure.

  20. Supranormal orientation selectivity of visual neurons in orientation-restricted animals

    PubMed Central

    Sasaki, Kota S.; Kimura, Rui; Ninomiya, Taihei; Tabuchi, Yuka; Tanaka, Hiroki; Fukui, Masayuki; Asada, Yusuke C.; Arai, Toshiya; Inagaki, Mikio; Nakazono, Takayuki; Baba, Mika; Kato, Daisuke; Nishimoto, Shinji; Sanada, Takahisa M.; Tani, Toshiki; Imamura, Kazuyuki; Tanaka, Shigeru; Ohzawa, Izumi

    2015-01-01

    Altered sensory experience in early life often leads to remarkable adaptations so that humans and animals can make the best use of the available information in a particular environment. By restricting visual input to a limited range of orientations in young animals, this investigation shows that stimulus selectivity, e.g., the sharpness of tuning of single neurons in the primary visual cortex, is modified to match a particular environment. Specifically, neurons tuned to an experienced orientation in orientation-restricted animals show sharper orientation tuning than neurons in normal animals, whereas the opposite was true for neurons tuned to non-experienced orientations. This sharpened tuning appears to be due to elongated receptive fields. Our results demonstrate that restricted sensory experiences can sculpt the supranormal functions of single neurons tailored for a particular environment. The above findings, in addition to the minimal population response to orientations close to the experienced one, agree with the predictions of a sparse coding hypothesis in which information is represented efficiently by a small number of activated neurons. This suggests that early brain areas adopt an efficient strategy for coding information even when animals are raised in a severely limited visual environment where sensory inputs have an unnatural statistical structure. PMID:26567927

  1. The Vestibular System and Human Dynamic Space Orientation

    NASA Technical Reports Server (NTRS)

    Meiry, J. L.

    1966-01-01

    The motion sensors of the vestibular system are studied to determine their role in human dynamic space orientation and manual vehicle control. The investigation yielded control models for the sensors, descriptions of the subsystems for eye stabilization, and demonstrations of the effects of motion cues on closed loop manual control. Experiments on the abilities of subjects to perceive a variety of linear motions provided data on the dynamic characteristics of the otoliths, the linear motion sensors. Angular acceleration threshold measurements supplemented knowledge of the semicircular canals, the angular motion sensors. Mathematical models are presented to describe the known control characteristics of the vestibular sensors, relating subjective perception of motion to objective motion of a vehicle. The vestibular system, the neck rotation proprioceptors and the visual system form part of the control system which maintains the eye stationary relative to a target or a reference. The contribution of each of these systems was identified through experiments involving head and body rotations about a vertical axis. Compensatory eye movements in response to neck rotation were demonstrated and their dynamic characteristics described by a lag-lead model. The eye motions attributable to neck rotations and vestibular stimulation obey superposition when both systems are active. Human operator compensatory tracking is investigated in simple vehicle orientation control system with stable and unstable controlled elements. Control of vehicle orientation to a reference is simulated in three modes: visual, motion and combined. Motion cues sensed by the vestibular system through tactile sensation enable the operator to generate more lead compensation than in fixed base simulation with only visual input. The tracking performance of the human in an unstable control system near the limits of controllability is shown to depend heavily upon the rate information provided by the vestibular sensors.

  2. Crowding with conjunctions of simple features.

    PubMed

    Põder, Endel; Wagemans, Johan

    2007-11-20

    Several recent studies have related crowding with the feature integration stage in visual processing. In order to understand the mechanisms involved in this stage, it is important to use stimuli that have several features to integrate, and these features should be clearly defined and measurable. In this study, Gabor patches were used as target and distractor stimuli. The stimuli differed in three dimensions: spatial frequency, orientation, and color. A group of 3, 5, or 7 objects was presented briefly at 4 deg eccentricity of the visual field. The observers' task was to identify the object located in the center of the group. A strong effect of the number of distractors was observed, consistent with various spatial pooling models. The analysis of incorrect responses revealed that these were a mix of feature errors and mislocalizations of the target object. Feature errors were not purely random, but biased by the features of distractors. We propose a simple feature integration model that predicts most of the observed regularities.

  3. Human-like object tracking and gaze estimation with PKD android

    PubMed Central

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K; Bugnariu, Nicoleta L.; Popa, Dan O.

    2018-01-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold : to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans. PMID:29416193

  4. Human-like object tracking and gaze estimation with PKD android

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2016-05-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.
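
    The core coordinate transform in a gaze-estimation pipeline like the one described in the two records above is the rotation of an eye-in-head gaze direction into the world frame using the tracked head orientation. The sketch below assumes a yaw-pitch-roll head parameterization and an x-forward, z-up frame; the actual systems (ETG, Cortex, CAREN) expose their own conventions and calibration steps.

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Intrinsic yaw-pitch-roll (radians) to a 3x3 rotation matrix
    (assumed convention: yaw about z, pitch about y, roll about x)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def gaze_in_world(head_yaw_pitch_roll, eye_dir_in_head):
    """Rotate an eye-in-head gaze direction (unit vector from the eye tracker)
    by the head orientation (from motion capture) into a world-frame gaze ray."""
    R_head = rotation_matrix(*head_yaw_pitch_roll)
    g = R_head @ np.asarray(eye_dir_in_head, dtype=float)
    return g / np.linalg.norm(g)

# Head turned 30 deg left, eyes looking 10 deg down within the head
eye_dir = [np.cos(np.radians(10)), 0.0, -np.sin(np.radians(10))]
print(gaze_in_world(np.radians([30, 0, 0]), eye_dir))
```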

  5. Analysis of matters associated with the transferring of object-oriented applications to platform .Net using C# programming language

    NASA Astrophysics Data System (ADS)

    Sarsimbayeva, S. M.; Kospanova, K. K.

    2015-11-01

    The article provides the discussion of matters associated with the problems of transferring of object-oriented Windows applications from C++ programming language to .Net platform using C# programming language. C++ has always been considered to be the best language for the software development, but the implicit mistakes that come along with the tool may lead to infinite memory leaks and other errors. The platform .Net and the C#, made by Microsoft, are the solutions to the issues mentioned above. The world economy and production are highly demanding applications developed by C++, but the new language with its stability and transferability to .Net will bring many advantages. An example can be presented using the applications that imitate the work of queuing systems. Authors solved the problem of transferring of an application, imitating seaport works, from C++ to the platform .Net using C# in the scope of Visual Studio.

  6. Astronomical Simulations Using Visual Python

    NASA Astrophysics Data System (ADS)

    Cobb, Michael L.

    2007-05-01

    The Physics and Engineering Physics Department at Southeast Missouri State University has adopted the “Matter and Interactions I: Modern Mechanics” text by Chabay and Sherwood for our calculus-based introductory physics course. We have fully integrated the use of modeling and simulations by using the Visual Python language, also known as VPython. This powerful, high-level, object-oriented language with full three-dimensional, stereo graphics has stimulated both my students and me to find wider applications for our new-found skills. We have successfully modeled gravitational resonances in planetary rings, galaxy collisions, and planetary orbits around binary star systems. This talk will provide a quick overview of VPython and demonstrate the various simulations.
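
    To give a flavour of the kind of orbital simulation mentioned above, the sketch below integrates a single planet around a Sun-like star with the velocity-Verlet scheme in plain Python/NumPy; adding VPython spheres and arrows would turn it into an animated 3D model. The step size and duration are illustrative choices, not taken from the talk.

```python
import numpy as np

# Velocity-Verlet integration of a planet on a roughly circular 1 AU orbit
G, M_sun = 6.674e-11, 1.989e30            # SI units
r = np.array([1.496e11, 0.0])             # 1 AU from the star
v = np.array([0.0, 29_780.0])             # ~circular orbital speed (m/s)
dt = 3600.0                               # 1-hour time step

def accel(pos):
    """Gravitational acceleration toward the star at the origin."""
    return -G * M_sun * pos / np.linalg.norm(pos) ** 3

a = accel(r)
for _ in range(24 * 365):                 # integrate roughly one year
    r = r + v * dt + 0.5 * a * dt ** 2
    a_new = accel(r)
    v = v + 0.5 * (a + a_new) * dt
    a = a_new

print("distance after ~1 year (AU):", np.linalg.norm(r) / 1.496e11)
```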

  7. Human gamma band activity and perception of a gestalt.

    PubMed

    Keil, A; Müller, M M; Ray, W J; Gruber, T; Elbert, T

    1999-08-15

    Neuronal oscillations in the gamma band (above 30 Hz) have been proposed to be a possible mechanism for the visual representation of objects. The present study examined the topography of gamma band spectral power and event-related potentials in human EEG associated with perceptual switching effected by rotating ambiguous (bistable) figures. Eleven healthy human subjects were presented two rotating bistable figures: first, a face figure that allowed perception of a sad or happy face depending on orientation and therefore caused a perceptual switch at defined points in time when rotated, and, second, a modified version of the Rubin vase, allowing perception as a vase or two faces whereby the switch was orientation-independent. Nonrotating figures served as further control stimuli. EEG was recorded using a high-density array with 128 electrodes. We found a negative event-related potential associated with the switching of the sad-happy figure, which was most pronounced at central prefrontal sites. Gamma band activity (GBA) was enhanced at occipital electrode sites in the rotating bistable figures compared with the standing stimuli, being maximal at vertical stimulus orientations that allowed an easy recognition of the sad and happy face or the vase-faces, respectively. At anterior electrodes, GBA showed a complementary pattern, being maximal when stimuli were oriented horizontally. The findings support the notion that formation of a visual percept may involve oscillations in a distributed neuronal assembly.
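
    Gamma-band spectral power of the kind analysed above is commonly estimated with Welch's method; the sketch below applies it to a synthetic single channel. The sampling rate, band limits, and window length are assumptions for illustration, not the study's analysis parameters.

```python
import numpy as np
from scipy.signal import welch

def gamma_band_power(eeg, fs, band=(30.0, 80.0)):
    """Estimate gamma-band (>30 Hz) power of one EEG channel with Welch's
    method, integrating the power spectral density over the chosen band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs)     # 1-second windows
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

# Synthetic channel: broadband noise plus a weak 40 Hz component
fs = 500
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = rng.normal(size=t.size) + 0.5 * np.sin(2 * np.pi * 40 * t)
print("gamma-band power:", gamma_band_power(eeg, fs))
```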

  8. Living Liquid: Design and Evaluation of an Exploratory Visualization Tool for Museum Visitors.

    PubMed

    Ma, J; Liao, I; Ma, Kwan-Liu; Frazier, J

    2012-12-01

    Interactive visualizations can allow science museum visitors to explore new worlds by seeing and interacting with scientific data. However, designing interactive visualizations for informal learning environments, such as museums, presents several challenges. First, visualizations must engage visitors on a personal level. Second, visitors often lack the background to interpret visualizations of scientific data. Third, visitors have very limited time at individual exhibits in museums. This paper examines these design considerations through the iterative development and evaluation of an interactive exhibit as a visualization tool that gives museumgoers access to scientific data generated and used by researchers. The exhibit prototype, Living Liquid, encourages visitors to ask and answer their own questions while exploring the time-varying global distribution of simulated marine microbes using a touchscreen interface. Iterative development proceeded through three rounds of formative evaluations using think-aloud protocols and interviews, each round informing a key visualization design decision: (1) what to visualize to initiate inquiry, (2) how to link data at the microscopic scale to global patterns, and (3) how to include additional data that allows visitors to pursue their own questions. Data from visitor evaluations suggests that, when designing visualizations for public audiences, one should (1) avoid distracting visitors from data that they should explore, (2) incorporate background information into the visualization, (3) favor understandability over scientific accuracy, and (4) layer data accessibility to structure inquiry. Lessons learned from this case study add to our growing understanding of how to use visualizations to actively engage learners with scientific data.

  9. Integration trumps selection in object recognition.

    PubMed

    Saarela, Toni P; Landy, Michael S

    2015-03-30

    Finding and recognizing objects is a fundamental task of vision. Objects can be defined by several "cues" (color, luminance, texture, etc.), and humans can integrate sensory cues to improve detection and recognition [1-3]. Cortical mechanisms fuse information from multiple cues [4], and shape-selective neural mechanisms can display cue invariance by responding to a given shape independent of the visual cue defining it [5-8]. Selective attention, in contrast, improves recognition by isolating a subset of the visual information [9]. Humans can select single features (red or vertical) within a perceptual dimension (color or orientation), giving faster and more accurate responses to items having the attended feature [10, 11]. Attention elevates neural responses and sharpens neural tuning to the attended feature, as shown by studies in psychophysics and modeling [11, 12], imaging [13-16], and single-cell and neural population recordings [17, 18]. Besides single features, attention can select whole objects [19-21]. Objects are among the suggested "units" of attention because attention to a single feature of an object causes the selection of all of its features [19-21]. Here, we pit integration against attentional selection in object recognition. We find, first, that humans can integrate information near optimally from several perceptual dimensions (color, texture, luminance) to improve recognition. They cannot, however, isolate a single dimension even when the other dimensions provide task-irrelevant, potentially conflicting information. For object recognition, it appears that there is mandatory integration of information from multiple dimensions of visual experience. The advantage afforded by this integration, however, comes at the expense of attentional selection. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Integration trumps selection in object recognition

    PubMed Central

    Saarela, Toni P.; Landy, Michael S.

    2015-01-01

    Summary Finding and recognizing objects is a fundamental task of vision. Objects can be defined by several “cues” (color, luminance, texture etc.), and humans can integrate sensory cues to improve detection and recognition [1–3]. Cortical mechanisms fuse information from multiple cues [4], and shape-selective neural mechanisms can display cue-invariance by responding to a given shape independent of the visual cue defining it [5–8]. Selective attention, in contrast, improves recognition by isolating a subset of the visual information [9]. Humans can select single features (red or vertical) within a perceptual dimension (color or orientation), giving faster and more accurate responses to items having the attended feature [10,11]. Attention elevates neural responses and sharpens neural tuning to the attended feature, as shown by studies in psychophysics and modeling [11,12], imaging [13–16], and single-cell and neural population recordings [17,18]. Besides single features, attention can select whole objects [19–21]. Objects are among the suggested “units” of attention because attention to a single feature of an object causes the selection of all of its features [19–21]. Here, we pit integration against attentional selection in object recognition. We find, first, that humans can integrate information near-optimally from several perceptual dimensions (color, texture, luminance) to improve recognition. They cannot, however, isolate a single dimension even when the other dimensions provide task-irrelevant, potentially conflicting information. For object recognition, it appears that there is mandatory integration of information from multiple dimensions of visual experience. The advantage afforded by this integration, however, comes at the expense of attentional selection. PMID:25802154
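
    Near-optimal integration in the sense used in the two records above is usually benchmarked against maximum-likelihood cue combination, in which independent Gaussian cues are fused with inverse-variance weights. A minimal sketch with hypothetical cue estimates and reliabilities:

```python
import numpy as np

def optimal_cue_combination(estimates, sigmas):
    """Maximum-likelihood (inverse-variance weighted) fusion of independent
    Gaussian cues; the combined variance is never worse than the best cue's."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    combined = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    combined_sigma = np.sqrt(1.0 / np.sum(w))
    return combined, combined_sigma

# Hypothetical shape estimates from colour, texture, and luminance cues
value, sigma = optimal_cue_combination([0.9, 1.2, 1.0], [0.3, 0.5, 0.4])
print(round(value, 3), round(sigma, 3))
```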

  11. Visualization on the Day Night Year Globe

    NASA Astrophysics Data System (ADS)

    Božić, Mirjana; Vušković, Leposava; Popović, Svetozar; Popović, Jelena; Marković-Topalović, Tatjana

    2016-11-01

    The story about a properly oriented outdoor globe in the hands and minds of Eratosthenes, Jefferson, Milanković and science educators is presented. Having the same orientation in space as the Earth, the Day Night Year Globe (DING) shows in real time the pattern of illumination of the Earth’s surface and its diurnal and seasonal variations. It is an ideal object for the visualization of knowledge and increase in knowledge about: the form of the Earth, Earth’s rotation, Earth’s revolution around the Sun, the length of seasons, solstices, equinoxes, the longitude problem, the distribution of the Sun’s radiation over the Earth, the impact of this radiation on Earth’s climate, and how to use it efficiently. By attaching a movable vane to the poles, or adding pins around the equator to read time, DING becomes a spherical/globe-shaped sundial. So, the DING is simultaneously useful for teaching physics, geophysics, astronomy, use of solar energy and promoting an inquiry-based learning environment for students and the public.

  12. Magnocellular pathway for rotation invariant Neocognitron.

    PubMed

    Ting, C H

    1993-03-01

    In the mammalian visual system, magnocellular pathway and parvocellular pathway cooperatively process visual information in parallel. The magnocellular pathway is more global and less particular about the details while the parvocellular pathway recognizes objects based on the local features. In many aspects, Neocognitron may be regarded as the artificial analogue of the parvocellular pathway. It is interesting then to model the magnocellular pathway. In order to achieve "rotation invariance" for Neocognitron, we propose a neural network model after the magnocellular pathway and expand its roles to include surmising the orientation of the input pattern prior to recognition. With the incorporation of the magnocellular pathway, a basic shift in the original paradigm has taken place. A pattern is now said to be recognized when and only when one of the winners of the magnocellular pathway is validified by the parvocellular pathway. We have implemented the magnocellular pathway coupled with Neocognitron parallel on transputers; our simulation programme is now able to recognize numerals in arbitrary orientation.

  13. A Foray into Laser Projection and the Visual Perception of Aircraft Aspect

    DTIC Science & Technology

    2002-04-01

    Only fragments of this record survive: a reference list citing Ratcliff, R. (1993), "Methods for dealing with reaction time outliers," Psychological Bulletin, 114(3), 510-532, and Macmillan, N.A. and Creelman (citation truncated), together with abstract excerpts noting that the ...life of these light sources adds to their attraction, that more important to psychological concerns is their characteristic as sources of narrow-band..., and that the measures may be related, say by a psychological theory that relates response times to the novel spatial orientations of familiar objects.

  14. Visual Processing of Object Velocity and Acceleration

    DTIC Science & Technology

    1991-12-13

    Only fragments of this record survive. They mention Dr. Grzywacz's applications of filtering models to the psychophysics of speed discrimination, the McKee-Welch studies, and the use of a population of spatio-temporally oriented filters to encode velocity; Dr. Grzywacz has attempted to reconcile his model with a variety of psychophysical... by many authors. In these models the detectors have different sizes and spatial positions, but they all spatially and temporally filter the image.

  15. Contextual effects on smooth-pursuit eye movements.

    PubMed

    Spering, Miriam; Gegenfurtner, Karl R

    2007-02-01

    Segregating a moving object from its visual context is particularly relevant for the control of smooth-pursuit eye movements. We examined the interaction between a moving object and a stationary or moving visual context to determine the role of the context motion signal in driving pursuit. Eye movements were recorded from human observers to a medium-contrast Gaussian dot that moved horizontally at constant velocity. A peripheral context consisted of two vertically oriented sinusoidal gratings, one above and one below the stimulus trajectory, that were either stationary or drifted into the same or opposite direction as that of the target at different velocities. We found that a stationary context impaired pursuit acceleration and velocity and prolonged pursuit latency. A drifting context enhanced pursuit performance, irrespective of its motion direction. This effect was modulated by context contrast and orientation. When a context was briefly perturbed to move faster or slower eye velocity changed accordingly, but only when the context was drifting along with the target. Perturbing a context into the direction orthogonal to target motion evoked a deviation of the eye opposite to the perturbation direction. We therefore provide evidence for the use of absolute and relative motion cues, or motion assimilation and motion contrast, for the control of smooth-pursuit eye movements.

  16. New technologies lead to a new frontier: cognitive multiple data representation

    NASA Astrophysics Data System (ADS)

    Buffat, S.; Liege, F.; Plantier, J.; Roumes, C.

    2005-05-01

    The increasing number and complexity of operational sensors (radar, infrared, hyperspectral...) and the availability of huge amounts of data lead to more and more sophisticated information presentations. But one key element of the IMINT line cannot be improved beyond its initial system specification: the operator... In order to overcome this issue, we have to better understand human visual object representation. Object recognition theories in human vision balance between matching 2D template representations carrying viewpoint-dependent information and a viewpoint-invariant system based on structural descriptions. Spatial frequency content is relevant because of early vision filtering. Orientation in depth is an important variable for challenging object constancy. Three objects, seen from three different points of view in a natural environment, provided the original images in this study. Test images were a combination of spatial-frequency-filtered original images and an additive contrast level of white noise. In the first experiment, the observer's task was a same-versus-different forced choice with a spatial alternative. Test images had the same noise level within a presentation run. The discrimination threshold was determined by modifying the white-noise contrast level by means of an adaptive method. In the second experiment, a repetition blindness paradigm was used to further investigate the viewpoint effect on object recognition. The results shed some light on how the human visual system processes objects displayed under different physical descriptions. This is an important achievement because targets that do not always match the physical properties of usual visual stimuli can increase operational workload.

  17. Saliency predicts change detection in pictures of natural scenes.

    PubMed

    Wright, Michael J

    2005-01-01

    It has been proposed that the visual system encodes the salience of objects in the visual field in an explicit two-dimensional map that guides visual selective attention. Experiments were conducted to determine whether salience measurements applied to regions of pictures of outdoor scenes could predict the detection of changes in those regions. To obtain a quantitative measure of change detection, observers located changes in pairs of colour pictures presented across an interstimulus interval (ISI). Salience measurements were then obtained from different observers for image change regions using three independent methods, and all were positively correlated with change detection. Factor analysis extracted a single saliency factor that accounted for 62% of the variance contained in the four measures. Finally, estimates of the magnitude of the image change in each picture pair were obtained, using nine separate visual filters representing low-level vision features (luminance, colour, spatial frequency, orientation, edge density). None of the feature outputs was significantly associated with change detection or saliency. On the other hand it was shown that high-level (structural) properties of the changed region were related to saliency and to change detection: objects were more salient than shadows and more detectable when changed.

  18. Visualizer: 3D Gridded Data Visualization Software for Geoscience Education and Research

    NASA Astrophysics Data System (ADS)

    Harwood, C.; Billen, M. I.; Kreylos, O.; Jadamec, M.; Sumner, D. Y.; Kellogg, L. H.; Hamann, B.

    2008-12-01

    In both research and education learning is an interactive and iterative process of exploring and analyzing data or model results. However, visualization software often presents challenges on the path to learning because it assumes the user already knows the locations and types of features of interest, instead of enabling flexible and intuitive examination of results. We present examples of research and teaching using the software, Visualizer, specifically designed to create an effective and intuitive environment for interactive, scientific analysis of 3D gridded data. Visualizer runs in a range of 3D virtual reality environments (e.g., GeoWall, ImmersaDesk, or CAVE), but also provides a similar level of real-time interactivity on a desktop computer. When using Visualizer in a 3D-enabled environment, the software allows the user to interact with the data images as real objects, grabbing, rotating or walking around the data to gain insight and perspective. On the desktop, simple features, such as a set of cross-bars marking the plane of the screen, provide extra 3D spatial cues that allow the user to more quickly understand geometric relationships within the data. This platform portability allows the user to more easily integrate research results into classroom demonstrations and exercises, while the interactivity provides an engaging environment for self-directed and inquiry-based learning by students. Visualizer software is freely available for download (www.keckcaves.org) and runs on Mac OSX and Linux platforms.

  19. Lasers' spectral and temporal profile can affect visual glare disability.

    PubMed

    Beer, Jeremy M A; Freeman, David A

    2012-12-01

    Experiments measured the effects of laser glare on visual orientation and motion perception. Laser stimuli were varied according to spectral composition and temporal presentation as subjects identified targets' tilt (Experiment 1) and movement (Experiment 2). The objective was to determine whether the glare parameters would alter visual disruption. Three spectral profiles (monochromatic Green vs. polychromatic White vs. alternating Red-Green) were used to produce a ring of laser glare surrounding a target. Two experiments were performed to measure the minimum contrast required to report target orientation or motion direction. The temporal glare profile was also varied: the ring was illuminated either continuously or discontinuously. Time-averaged luminance of the glare stimuli was matched across all conditions. In both experiments, threshold (deltaL) values were approximately 0.15 log units higher in monochromatic Green than in polychromatic White conditions. In Experiment 2 (motion identification), thresholds were approximately 0.17 log units higher in rapidly flashing (6, 10, or 14 Hz) than in continuous exposure conditions. Monochromatic extended-source laser glare disrupted orientation and motion identification more than polychromatic glare. In the motion task, pulse trains faster than 6 Hz (but below flicker fusion) elevated thresholds more than continuous glare with the same time-averaged luminance. Under these conditions, alternating the wavelength of monochromatic glare over time did not aggravate disability relative to green-only glare. Repetitively flashing monochromatic laser glare induced occasional episodes of impaired motion identification, perhaps resulting from cognitive interference. Interference speckle might play a role in aggravating monochromatic glare effects.

  20. Transcranial magnetic stimulation changes response selectivity of neurons in the visual cortex

    PubMed Central

    Kim, Taekjun; Allen, Elena A.; Pasley, Brian N.; Freeman, Ralph D.

    2015-01-01

    Background Transcranial magnetic stimulation (TMS) is used to selectively alter neuronal activity of specific regions in the cerebral cortex. TMS is reported to induce either transient disruption or enhancement of different neural functions. However, its effects on tuning properties of sensory neurons have not been studied quantitatively. Objective/Hypothesis Here, we use specific TMS application parameters to determine how they may alter tuning characteristics (orientation, spatial frequency, and contrast sensitivity) of single neurons in the cat’s visual cortex. Methods Single unit spikes were recorded with tungsten microelectrodes from the visual cortex of anesthetized and paralyzed cats (12 males). Repetitive TMS (4Hz, 4sec) was delivered with a 70mm figure-8 coil. We quantified basic tuning parameters of individual neurons for each pre- and post-TMS condition. The statistical significance of changes for each tuning parameter between the two conditions was evaluated with a Wilcoxon signed-rank test. Results We generally find long-lasting suppression which persists well beyond the stimulation period. Pre- and post-TMS orientation tuning curves show constant peak values. However, strong suppression at non-preferred orientations tends to narrow the widths of tuning curves. Spatial frequency tuning exhibits an asymmetric change in overall shape, which results in an emphasis on higher frequencies. Contrast tuning curves show nonlinear changes consistent with a gain control mechanism. Conclusions These findings suggest that TMS causes extended interruption of the balance between sub-cortical and intra-cortical inputs. PMID:25862599
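
    The pre- versus post-TMS comparison described above uses a Wilcoxon signed-rank test on paired tuning parameters; a minimal sketch with hypothetical paired orientation-tuning bandwidths (not the study's data) is shown below.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired orientation-tuning bandwidths (deg) for the same
# neurons before and after repetitive TMS; values are illustrative only.
pre = np.array([32.0, 28.5, 40.2, 35.1, 30.8, 27.4, 38.9, 33.3, 29.7, 36.0])
post = np.array([27.1, 26.0, 34.8, 31.5, 28.2, 25.9, 33.0, 30.1, 27.5, 31.2])

# Paired, non-parametric comparison of tuning widths before and after TMS
stat, p = wilcoxon(pre, post)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4f}")
```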
